The AI Ready Pathway offers a straightforward approach for organisations beginning their AI governance journey.
It focuses on laying the foundation for responsible AI through essential practices that help you proactively manage risks, maintain transparency, and ensure effective human oversight.
Adopting these practices builds trust and helps prepare your organisation for future AI developments.

Not every organisation needs to apply every practice in the same way. If you are only experimenting with basic generative AI tools, a simple acceptable use policy and awareness training may be a sensible place to start. This pathway is designed to support proportionate adoption based on your AI risk profile and governance needs.

Key practices

Establish AI Governance Ownership & Responsibilities
- Establishes clear accountability for AI decisions.
- Ensures dedicated oversight of AI activities.
- Enables coordinated governance across teams and functions.

Define Responsible AI Principles
- Creates a shared understanding across the organisation.
- Enables consistent decisions about AI use.
- Supports a culture of transparency and accountability around AI.

Develop an AI Policy
- Aligns AI activities with organisational values and ethics.
- Builds trust internally.
- Provides a baseline for accountability.

Align with Organisational Risk Appetite
- Aligns AI use with broader risk settings.
- Enables consistent and proportionate oversight.
- Reduces the risk of delays, misalignment, or unintended harm.

Implement AI Risk Assessments
- Helps identify and prioritise higher-risk AI systems early.
- Enhances decision-making clarity.
- Strengthens compliance and ethical practices.
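One common way to make risk assessments proportionate is a simple impact-by-likelihood matrix that triages systems into tiers, so higher-risk systems are identified and prioritised early. The sketch below is purely illustrative: the ratings, thresholds, and tier names are assumptions, not part of the pathway itself, and your organisation's risk appetite should drive the actual scales.

```python
# Illustrative sketch only: triage AI systems into risk tiers using a
# simple impact x likelihood matrix. Thresholds and tier names are
# assumptions for illustration, not prescribed by the pathway.

def risk_tier(impact: int, likelihood: int) -> str:
    """Map 1-3 impact and likelihood ratings to a risk tier."""
    score = impact * likelihood  # ranges from 1 (low/low) to 9 (high/high)
    if score >= 6:
        return "high"    # escalate for detailed assessment and controls
    if score >= 3:
        return "medium"  # standard assessment is proportionate
    return "low"         # lightweight review may suffice

# Example: a staff-facing chatbot with moderate impact, high likelihood of issues
print(risk_tier(impact=2, likelihood=3))  # → high
```

A coarse matrix like this is only a starting point; its value is in forcing a consistent, documented conversation about each system before resources are committed.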

Develop an AI System Inventory
- Provides visibility of AI systems used across the organisation.
- Enhances transparency and governance oversight.
- Enables quicker identification of risks and accountability.
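An inventory can start as something as simple as a structured list of systems with an accountable owner and a risk tier. The fields below are assumptions chosen for illustration; a spreadsheet with the same columns works just as well as code.

```python
# Illustrative sketch only: minimal fields an AI system inventory entry
# might record. Field names and example entries are assumptions.
from dataclasses import dataclass

@dataclass
class AISystemEntry:
    name: str
    owner: str       # accountable business owner
    purpose: str     # what the system is used for
    risk_tier: str   # e.g. output of the risk assessment process
    status: str      # e.g. "pilot", "in production", "retired"

inventory = [
    AISystemEntry("HR screening assistant", "People & Culture",
                  "Shortlisting support", "high", "pilot"),
    AISystemEntry("Generative AI chat tool", "IT",
                  "General staff productivity", "medium", "in production"),
]

# Quicker identification of risks: surface high-tier systems first
for entry in sorted(inventory, key=lambda e: e.risk_tier != "high"):
    print(f"{entry.name} - {entry.risk_tier} ({entry.status})")
```

Keeping the risk tier in the inventory links the two practices: the inventory tells you what exists, and the tier tells you where oversight effort should go first.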

Build Capability
- Promotes responsible AI practices.
- Reduces risks from uninformed use.
- Encourages continuous improvement and learning.

Define Human Oversight Points
- Prevents over-reliance on automation.
- Helps reduce risks of AI-driven harm.
- Enhances public trust and accountability.

Outputs
- Appointment of an AI governance lead or formation of a cross-functional working group.
- Published one-pager that summarises the organisation’s Responsible AI principles.
- Approved AI Policy or Charter aligned with your organisation’s principles and values. A summary explainer for staff can serve as a useful communication tool, and your organisation may choose to make a summary of the policy publicly available. Include or reference Acceptable Use guidance for generative AI, depending on how your organisation structures its policies.
- Signed statement or documented confirmation that AI risks align with organisational risk appetite, supported by internal guidance.
- A documented AI risk or impact assessment process and completed risk assessments for identified systems.
- Documented AI System Inventory and regularly updated status reports.
- Training materials and records of training sessions completed by staff, plus internal awareness campaign materials (e.g., posters, FAQs).
- Documentation of human oversight roles and procedures for each system.
By following these key practices, your organisation can navigate the first stage of responsible AI governance effectively, creating a strong foundation for further growth and maturity.