From Compliance to Confidence: Who Governs the Algorithms?
By Orla Brown, Education & Assessment Consultant | EdTech & AI Ethics Champion
The Inspection That Looked Perfect — Until It Didn’t
In a fictional FE college that could easily be real, the morning buzz before inspection day was unmistakable: half caffeine, half chaos. Staffrooms rattled with energy; the printer queue ran three people deep, one machine inevitably flashing “paper jam.”
“Of all days,” someone muttered, tapping the side panel.
Down the corridor, a deputy principal rehearsed her briefing: “Dashboards are green across the board. We’re inspection-ready.”
And on paper, they were.
By mid-morning, inspectors arrived: calm, polite, efficient. In one classroom, a Level 2 learner beamed.
“This assignment really stretched me.”
“How did you approach it?” an inspector asked.
“I started with AI. It helps me get going.”
No misconduct, just a quiet pattern: staff and students relying heavily on generative tools, with no policy, no audit trail, and no governance.
Achievement looked excellent. Understanding had hollowed out. Strong numbers. Weak reasoning.
The Governance Gap We Don’t Talk About
AI is no longer a future concept: it already plans lessons, marks work and writes feedback. Yet few colleges can say who, at board level, owns the ethical, operational or learner-impact risks of these systems. AI’s potential is extraordinary: governance simply ensures we realise it responsibly. As the OECD’s Governing with AI (2025) notes, effective oversight means learning with AI, not just about it, building adaptive governance that evolves as technology does.
AI without governance doesn’t just create risk; it creates illusion.
Performance without progression.
The National Cyber Security Centre (NCSC, 2025) warns that digital and cyber risk must now sit with governors, not just IT teams: assurance has to become strategic, not merely technical. Ofqual (2024) adds that public confidence in assessment relies on human oversight of algorithmic marking. The OECD (2024) links mature governance to higher productivity and trust in AI-enabled economies.
Meanwhile, UNESCO (2021) and UNICEF (2025) both caution that AI in education is outpacing oversight frameworks, risking new forms of inequity for lower-attaining or vulnerable learners.
Different Colleges, Different Journeys
Across FE, AI adoption looks less like a system and more like a patchwork. Some colleges experiment with adaptive marking; others pilot staff CPD. Within colleges, departments sit on different rungs of Jisc’s Digital Maturity Model (2025): a “spiky profile” of early adopters, cautious explorers and digital minimalists. The AI Curriculum Integration Framework (2025) recognises that institutions progress unevenly across disciplines. True integration happens only when staff development, confidence and context align.
A computing team may integrate AI lesson planning confidently; a hairdressing department might still debate whether ChatGPT counts as cheating. This unevenness mirrors wider society. The MIT GenAI Divide (2025) shows that confidence in AI correlates strongly with familiarity; the OECD Trust in AI Study (2024) finds that scepticism peaks where digital literacy is lowest.
We trust what we use, and we use what we trust; governance must work with this diversity, supporting growth rather than imposing uniform compliance.
From Compliance to Confidence
Compliance is the foundation, not the finish line.
Excellence is ensuring every digital decision and innovation works for the learner.
Governance isn’t a brake on innovation. It’s how we own every digital decision. UNESCO (2021) calls this “governing transformation while it happens”. It is the hardest leadership task of all.

The ALT Framework for Ethical Learning Technology (FELT) (ALT 2021) urges institutions to embed ethics into every design and delivery decision. It reminds us that digital transformation should rest on awareness, reflection and integrity, not box-ticking compliance.
And there’s a practical incentive too: inspection readiness. Boards that can evidence digital assurance will be prepared as Ofsted begins to test how technology shapes curriculum intent, impact and safeguarding. In many ways, effective AI governance mirrors Ofsted’s familiar trilogy: Intent, Implementation, and Impact. The intent is clarity of purpose; the implementation is transparent, ethical deployment; the impact is learner progress and trust.
The Proposal — Appoint a Digital Governor
It’s time for boards to add a new kind of leadership: a Digital Governor, or dedicated sub-committee, to own AI, data and EdTech strategy.
Imagine if every governing body treated digital responsibility like safeguarding: a named lead providing structured oversight and accountability for the digital decisions shaping learner experience.
- Placement: Board-level lead; standing agenda item via Audit & Risk or Quality & Standards.
- Cadence: Termly digital-assurance report covering risk, integrity and learner impact.
- Scope: Strategy, procurement assurance, DPIA/AI sign-offs, staff capability, incident response and transparency.
A Digital Governor oversees rather than operates, assuring what others deliver.
But governance only works if people do. A Digital Governor’s role isn’t to dictate from above but to champion capability from within — ensuring every staff member, from classroom practitioner to senior leader, has access to the digital literacy, ethical awareness and AI fluency needed to make governance real. The EIRA Guide for AI in Schools (2024) makes the same point: AI leadership must be distributed, embedded in staff development and daily practice, not left to technical teams alone.
Professional development and one-to-one support must sit alongside oversight so innovation becomes confident practice, not compliance theatre. UNICEF (2025) describes this as “human capacity as the first line of accountability.”
The Ministry of Defence’s Responsible AI Senior Officer (RAISO) model (2024) and the NHS AI Assurance Framework (2025) show that board-level accountability can unlock faster, safer innovation in which ethics and agility co-exist.
Three Headline Duties of a Digital Governor
Every college starts from a different place. Governance shouldn’t enforce one benchmark but measure distance travelled, recognising departmental progress, celebrating experimentation and supporting continuous improvement. These duties map naturally to intent (why we use AI), implementation (how it is used responsibly), and impact (what difference it makes to learners).
This mirrors Jisc’s Digital Maturity Model (2025), which encourages institutions to evidence growth over time rather than chase compliance snapshots.
- Assure learning, equity and impact
Demand termly evidence that AI and EdTech improve progress and do not displace reasoning, with specific focus on lower-attaining and SEND learners.
- Govern data, ethics and digital risk
Maintain a board-owned digital risk register (one possible shape is sketched after this list); require DPIA/AI assurance for major tools; publish a plain-English statement on how AI is used across the college.
- Protect assessment integrity and public confidence
Adopt an AI use policy for staff and learners; update malpractice procedures; keep human-in-the-loop where validity or fairness could be affected; sample and verify accordingly.
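As a concrete illustration of that board-owned register, here is a minimal sketch of one possible per-tool record. Every field name, status value and sample entry below is a hypothetical assumption for illustration, not a mandated schema.

```python
from dataclasses import dataclass

# One record per AI/EdTech tool. Field names and sample values are
# illustrative assumptions, not a required format.
@dataclass
class RiskRegisterEntry:
    tool: str                # e.g. an adaptive-marking platform
    owner: str               # named senior or board-level owner
    dpia_complete: bool      # has DPIA/AI assurance been signed off?
    human_in_the_loop: bool  # does a person review high-stakes outputs?
    learner_impact: str      # plain-English note on who is affected

def needs_escalation(entry: RiskRegisterEntry) -> bool:
    """Flag for board attention: assurance missing, or no human oversight
    where validity or fairness could be affected."""
    return not entry.dpia_complete or not entry.human_in_the_loop

# Hypothetical sample entries.
register = [
    RiskRegisterEntry("Adaptive marking pilot", "Deputy Principal",
                      dpia_complete=True, human_in_the_loop=True,
                      learner_impact="Level 2 maths cohorts"),
    RiskRegisterEntry("Generative feedback assistant", "Head of Quality",
                      dpia_complete=False, human_in_the_loop=True,
                      learner_impact="All vocational programmes"),
]

for entry in register:
    if needs_escalation(entry):
        print(f"Escalate to board: {entry.tool} (owner: {entry.owner})")
```

The design choice matters more than the tooling: a named owner and an explicit human-in-the-loop flag on every record make the duty auditable rather than aspirational.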
As the WEF AI Governance Alliance (2024) and AGILE Index (2025) both demonstrate, governance maturity and public trust rise together, and FE can lead that curve.
30-Day Starter Kit — Appoint → Assess → Assure
- Appoint: Name a Digital Governor or sub-committee and agree Terms of Reference.
- Assess: Create a one-page heat-map of AI/EdTech use across teaching, assessment, data and cyber.
- Assure: Introduce a termly digital-assurance dashboard tracking risks, incidents, DPIAs, equity impact and assessment integrity.
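To make the Assess and Assure steps concrete, the sketch below combines the heat-map dimensions and the dashboard counts into one per-area record, written out with the same columns each term. All field names, areas, RAG values and figures are illustrative assumptions, not a prescribed format.

```python
from dataclasses import dataclass, asdict, fields
import csv

# One row per curriculum area or service. Every field name, RAG value and
# sample figure is an illustrative assumption.
@dataclass
class TermlySnapshot:
    area: str               # department or cross-college service
    teaching: str           # RAG rating for AI use in teaching
    assessment: str         # RAG rating for assessment-integrity controls
    data: str               # RAG rating for DPIA/data-protection status
    cyber: str              # RAG rating for cyber assurance
    incidents: int          # incidents logged this term
    dpias_outstanding: int  # tools still awaiting DPIA/AI sign-off

# Hypothetical sample rows for the one-page heat-map.
snapshots = [
    TermlySnapshot("Computing", "green", "amber", "green", "green", 0, 0),
    TermlySnapshot("Hairdressing", "amber", "red", "amber", "green", 1, 2),
]

# Write the snapshot as CSV so the board sees identical columns every term.
with open("digital_assurance_dashboard.csv", "w", newline="") as f:
    writer = csv.DictWriter(
        f, fieldnames=[fld.name for fld in fields(TermlySnapshot)])
    writer.writeheader()
    for row in snapshots:
        writer.writerow(asdict(row))
```

The value is not the script but the habit: identical columns every term turn the dashboard into evidence of distance travelled rather than a compliance snapshot.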
Own every digital decision.
Compliance builds safety. Confidence builds trust.
Call to Action
Appoint a Digital Governor and give them the tools to do the job. That single act signals to inspectors, staff and learners alike that innovation, ethics and inclusion move forward together — and that every digital decision works for the learner.
AI doesn’t transform education; people do. Governance gives them the confidence to do it safely. That’s how FE moves beyond compliance to evidence-informed innovation: governing with AI, for the learner.
Further Reading
- UNESCO (2021) AI and Education: Guidance for Policy-Makers
- UNICEF (2025) Data Governance for EdTech: Policy Recommendations for Education Leaders
- OECD (2024) The Impact of AI on Productivity, Distribution and Growth
- OECD (2025) Governing with AI: Adaptive Governance for a Digital Age
- WEF (2024) AI Governance Alliance Briefing Paper Series
- MIT (2025) The GenAI Divide: State of Business
- EIRA (2024) Guide for AI in Schools: Principles of Responsible Leadership
- AI Curriculum Integration Framework (2025) A Framework for Embedding AI Across Disciplines
- ALT (2021) Framework for Ethical Learning Technology (FELT)
- Jisc (2025) Digital Maturity Model for Further Education
- AGILE (2025) Global AI Governance Index
- NHS (2025) AI in the NHS Programme
- NCSC (2025) Annual Review: Cyber and Digital Assurance
- MOD (2024) Responsible AI Senior Officer (RAISO) Model
Orla Brown is a consultant specialising in qualification development, digital governance and AI ethics. She helps awarding bodies, centres and providers build systems that are ethical, explainable and fair. Connect with Orla Brown on LinkedIn