
No matter where you are in your AI journey, you will find tailored guidance here to help you take the next step.
Whether you are drafting your first AI policy or aligning with global standards, each pathway aims to give you the practical tools to build, grow, and sustain AI governance in your organisation.
Use these pathways to get started quickly, bring teams into alignment on shared practices, and track your progress with confidence.

What is in each pathway?
Each pathway includes two core components to help you move from idea to execution:

Key Practices
A set of practical actions and tools you can adopt or tailor. We link each to recommended frameworks, templates and resources to guide your implementation.

Outputs
Tangible indicators of progress. These help you track what “good” looks like as your capability matures.
Together, these components make it easier to take action and grow your AI Governance maturity over time.

Understand the risks of AI
Before you govern AI, it helps to understand what makes it risky. Some of the most common risks include:

Bias and discrimination
AI systems can replicate or amplify unfair patterns in data, leading to discriminatory outcomes (e.g. in hiring, lending, policing, and healthcare).
Opacity
Many AI systems operate as “black boxes.” When their decisions can’t be explained, it becomes harder to ensure fairness, safety, or accountability.
Privacy risks
AI can infer sensitive information, re-identify individuals in anonymised datasets, or unintentionally expose personal data.
Security vulnerabilities
AI models can be manipulated or attacked through methods like prompt injection, data poisoning, or model extraction, putting systems and users at risk.
Over-reliance
When people trust AI systems too much, without verifying outputs or applying judgment, it can lead to poor decisions, especially in high-stakes contexts like health, finance, or legal settings.
Misuse by malicious actors
AI can be intentionally used to cause harm, for example, generating disinformation, creating deepfakes, or enabling fraud, scams, or cyberattacks.
Accountability gaps
When things go wrong, it’s not always clear who is accountable: the developer, deployer, or user. This makes governance and legal compliance more difficult.
These risks are not theoretical: they can undermine trust, increase liability, and cause serious harm. Identifying which ones apply to your organisation is the first step to managing them effectively.
→ Explore the MIT AI Risk Repository for practical examples of over 700 known risks and how to manage them.
Not all risks are equal
To help organisations focus their effort, many international frameworks use the concept of risk tiers. These tiers reflect how harmful a system could be in context, from low or manageable risk (e.g. using AI for formatting meeting notes), to high or unacceptable risk (e.g. using AI in criminal sentencing).
Risk tiers help you match the level of governance to the level of potential harm and avoid over-engineering controls for low-risk use cases.
→ For deeper guidance, read the NIST AI Risk Management Framework, the OECD Framework for the Classification of AI Systems (public sector), or Risk Tiers: Towards a Gold Standard for Advanced AI, an emerging international framework for structuring AI risk thinking across the development lifecycle.
The EU AI Act offers a clear example of a risk-based classification system for AI. For a deep dive into how it defines “high-risk” and what that means for governance, visit the Library section.
Match your mitigations to the risk
Once you understand where a system fits, you can match the level of risk to the appropriate controls. For example:

Lower-risk systems may only require transparency (e.g. disclosing that AI is used) and basic user training

Medium-risk systems might need stronger oversight, bias testing, or tighter access controls

Higher-risk systems typically call for deeper evaluation, red-teaming, or continuous monitoring; in some cases, the right control is choosing not to deploy at all (a simple sketch of this mapping follows below)
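
To make this concrete, here is a minimal sketch in Python of how an organisation might record a tier-to-control mapping in an internal register. The tier names and control lists below are illustrative assumptions, not drawn from any specific framework; align them with whichever classification you adopt.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative risk tiers; adapt to your chosen framework."""
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

# Hypothetical baseline controls per tier -- adjust for your own context.
BASELINE_CONTROLS: dict[RiskTier, list[str]] = {
    RiskTier.LOW: ["transparency notice (disclose AI use)", "basic user training"],
    RiskTier.MEDIUM: ["human oversight", "bias testing", "tighter access controls"],
    RiskTier.HIGH: ["in-depth evaluation", "red-teaming", "continuous monitoring"],
    RiskTier.UNACCEPTABLE: ["do not deploy"],
}

def controls_for(tier: RiskTier) -> list[str]:
    """Return the baseline controls expected for a given risk tier."""
    return BASELINE_CONTROLS[tier]

# Example: formatting meeting notes is low risk; AI in criminal
# sentencing falls in the unacceptable tier.
print(controls_for(RiskTier.LOW))
print(controls_for(RiskTier.UNACCEPTABLE))
```

Even if you never automate this, writing the mapping down in one place makes it easier to apply controls consistently and to justify why a given use case got the governance it did.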
This tiered approach helps you take a proportionate, practical path and apply the right level of governance to the risks involved.

Each AI Governance pathway assumes your organisation understands and complies with core legal requirements, including those related to privacy, data protection, anti-discrimination and intellectual property.
These obligations form a foundation for all responsible AI practices. In New Zealand, the Privacy Commissioner’s guidance on AI and the IPPs explains how the Privacy Act applies to AI, including expectations around transparency, automated decisions, and the responsible use of personal information. If you’re unsure where to start, talk to your legal, risk or compliance advisors early. It is also good practice to review relevant local and international regulations as your use of AI evolves.

Not sure where you are? Start HERE
It may not be obvious where your organisation fits in relation to the AI Governance pathways. Many maturity assessments focus on AI adoption levels, but that’s only part of the picture.
In reality, you might be well developed in some areas (e.g. documentation or data governance) and just getting started in others (e.g. auditing or transparency). Many of the controls needed for responsible AI overlap with privacy, cybersecurity, or broader risk management frameworks you already have in place. Build on those structures: reuse existing teams, policies and processes wherever possible, and adapt them to meet the unique challenges posed by AI. This will save effort, improve alignment, and help AI governance become part of how your organisation already works, not an isolated add-on.
Avoid overthinking. Start where it feels right and pick the entry point that reflects your current capability and ambition. If you want a clearer picture, these resources can help:
NIST-based AI Governance Maturity Model
A flexible, questionnaire-driven tool for all organisation types, aligned to the NIST AI RMF.
Helps assess responsible AI and security needs across different types of GenAI use cases (e.g. text, image, or code generation), useful for tailoring governance to risk.
Microsoft RAI maturity assessment
Useful for larger organisations managing multiple AI systems and seeking structured maturity benchmarks.
You don’t need to use every tool. Choose the one(s) that best fit your goals. If you are starting from scratch, skip the maturity models for now. Instead, begin with a simple audit of where AI systems or tools might already be in use across your organisation. If nothing is being used yet, focus on understanding internal demand and interest in AI. This will help identify your AI risk profile and clarify your next steps.
It is also important to consider your organisation's role in the AI supply chain: are you a developer, a user, or both? This will shape your governance priorities.
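
If it helps to structure that first audit, here is a minimal, hypothetical sketch (in Python, consistent with the earlier example) of what an inventory record might capture, including your supply-chain role for each tool. Every field name here is an assumption for illustration; tailor the register to your organisation.

```python
from dataclasses import dataclass

@dataclass
class AIUseRecord:
    """One row in a simple register of AI tools in use (illustrative fields)."""
    name: str               # e.g. "Meeting summariser"
    owner: str              # team or person accountable for the tool
    purpose: str            # what the tool is used for
    data_used: list[str]    # kinds of data the tool touches
    role: str               # your role for this tool: "developer", "user", or "both"
    risk_tier: str = "unassessed"  # fill in once you have tiered the use case

# Example first-pass entries gathered from an internal audit.
register = [
    AIUseRecord("Meeting summariser", "Operations", "format meeting notes",
                ["internal minutes"], role="user", risk_tier="low"),
    AIUseRecord("CV screener", "HR", "shortlist job applicants",
                ["personal data"], role="user"),
]

for record in register:
    print(f"{record.name}: role={record.role}, tier={record.risk_tier}, owner={record.owner}")
```

A spreadsheet with the same columns works just as well; the point is to capture ownership, data use, and role consistently so that unassessed systems are visible and risk tiering has somewhere to land.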