Adventuring with AI: Reflections on the ALT Awards Shortlist
By Pete Dunford, Bridgend College
Recently, I was honoured to be shortlisted for the ALT Award for Technology in Adult Learning (in partnership with Ufi VocTech Trust). Whilst I didn’t take home the trophy this time, the process of reflecting on my work at Bridgend College has been valuable in itself. It gave me a moment to stop, look back at the rapid changes I’ve been navigating, and take stock of where I am heading.
I wanted to share the story behind my entry, including the philosophy that drives it, and how it could be helpful for others.
Assessing the problem
For a long time in education, the focus has been on the importance of assessment, rather than the importance of learning. Very often we can find ourselves ‘teaching to the test’ with an unhealthy focus on grades, rather than encouraging learners to explore subjects in a way that is meaningful to them. This is not a new issue, and twenty-five years ago the internet started to expose the cracks in the foundations. Mindless plagiarism became the norm for some students and teachers had to spend more and more time trying to ‘catch’ the cheaters.
However, with the advent of Generative AI, that dynamic has shifted from a structural problem to an existential one. Suddenly, many traditional assessment types feel vulnerable. The immediate reaction for many across the education sector is fear. How do we stop students from cheating now? How do we police this? If we can’t trust our assessment methods – what does that mean for our education system? We can no longer pretend that our traditional assessment methods are still fit for purpose.
It would be lovely to feel we could just turn education on its head and reset the focus towards learning. However, as anyone in education can testify, the sector is a slow ship to turn. We need to deal with the issues people are facing now, not just dream about what could be. So, whilst I want to encourage debate about how we rebuild the system for the long-term, we have to deal with our current situation in the short-term too, and that situation can best be summed up in one word – anxiety.
In my view, this anxiety (that is felt by both teachers and students) often stems from a lack of clarity. People hear about these powerful tools that are so easily available but they’re terrified of being accused of misconduct if they use them, or even if they don’t!
I realised that if I wanted to ease this anxiety and prepare learners for the workplace, I couldn’t just ban these tools; they are here to stay. I needed to find a way to encourage a shift from the binary “Ban vs. Allow” mentality, towards something that could provide structure, safety, and explicit permission. Something that might also help to drive the conversation around the purpose of assessment as we move towards a longer-term approach.
A practical framework: The Bridgend AI Usage Scale
This led to the creation of the “Bridgend AI Usage Scale” – a framework designed to bring that much-needed clarity. The framework was initially inspired by the work of Perkins, Furze, Roe and MacVaugh (2024), although it has since developed and diverged from that original model.
Rather than a blanket policy, the scale offers six distinct levels of permitted use, which invites a teacher to be more mindful about how technology could be meaningfully used when writing an assessment. They can look at specific learning outcomes and ask: Does the student need to demonstrate a skill unaided, or is this a task where AI could be a useful tool to complement the learner’s skill? The scale then offers six ways to define the appropriate AI use for that assessment:
- Level 1: No AI
  The work is completed entirely without AI assistance. This is best for practical tasks or in-class discussions where students must demonstrate raw knowledge and unaided skills.
- Level 2: Planner
  AI can be used for brainstorming, generating ideas, or suggesting headings. It acts as a sounding board, but the writing must be the student’s own.
- Level 3: Spotter
  AI can help identify errors, like a spell-checker that points out mistakes but won’t fix them automatically. The student must take action and make their own decisions on the feedback.
- Level 4: Editor
  AI can help improve what the student has already written. It can correct spelling, punctuation, and grammar, but it may not create new content or ‘write’ the assignment for the student.
- Level 5: Drafter
  AI can assist with drafting sections and improving the clarity or quality of the writing. It begins to share the load of content creation.
- Level 6: Collaborator
  This is the most permissive level. Students may use AI as a full professional partner for these assessments. I assess them not just on the content of their output, but on their ability to produce something that accurately meets industry standards.


A Balanced Assessment Strategy
This framework has allowed us to completely rethink our approach to assessment on the HND course that I lead. We are now incorporating a more diverse range of assessment types across the different AI usage levels, which supports our students in developing a more rounded skillset. It would be ineffective to rely on assessments where AI can generate content (Collaborator level) if we need to directly assess knowledge recall or basic understanding; but it would be equally detrimental to ignore the digital skills students need for employment by removing entirely their ability to use AI to support creativity and problem solving.
By considering the assessments on a course holistically, and using the full range of the scale across a course, we can ensure students prove they can complete tasks unaided when it matters, and prove they can utilise an AI tool effectively when the task would benefit from it.
The Impact: From Anxiety to Engagement
The most rewarding part of all of this has been the tangible shift in classroom sentiment around assessments.
In the first year of using the scale on all assessments, 75% of students reported that the scale made it ‘very clear’ what level of AI use was permitted. Beyond the data, the atmosphere also changed: by providing clarity to the students, we reduced the assignment anxiety they expressed, and we had no instances of suspected malpractice. One student even told me she was “actually looking forward to completing some of the assessments” – a phrase I rarely hear in any context!
Another student, who previously didn’t consider herself “techy,” used the ‘Collaborator’ permission to design a prototype for a revision app. Because she had explicit permission to be creative, the fear of “accidental misconduct” vanished. She stopped worrying about risk and started focusing on appropriate solutions to the problem she’d been given.
Sharing the Experience
To the #AmplifyFE community: if you are looking to implement a similar approach in your setting, my advice is always to move forward with transparency and accountability – so set up a conversation with your Quality team to discuss it with them before striking out on your own. Then:
- Be explicit: Don’t leave “permitted use” for either students or staff to interpret for themselves. Ambiguity breeds anxiety.
- Focus on values: Our duty is to prepare learners for their future, not our past. How can we use assessment as a tool to help that?
- Collaborate: I release my resources (like my “AI Prompts for Students” resource) under Creative Commons because I believe we are all figuring this out together, and I appreciate all of the conversations and critiques it has led to. Join in conversations with the community and share what does and doesn’t work in your setting too.
If you’d like to see the full scale, download the resources, or read more about my journey, you can find it all on my blog, The Innovating Biologist, or you can find me on LinkedIn.
References
Perkins, M., Furze, L., Roe, J. and MacVaugh, J. (2024) The Artificial Intelligence Assessment Scale (AIAS): A Framework for Ethical Integration of Generative AI in Educational Assessment. Journal of University Teaching and Learning Practice, 21 (6). https://doi.org/10.53761/q3azde36

Join me at the next AmplifyFE webinar on 22 May 2026 at 12:30 PM, where I’ll be presenting ‘Assessing in the AI era: an AI usage scale in practice’.
In 2024, I developed the Bridgend AI Usage Scale to trial with the HE course I lead at Bridgend College. Two years on, I’ll be sharing what I’ve learned from putting that scale into practice — how students responded, what shifted, and what it looks like when an AI usage scale becomes a supportive tool rather than an extra layer of rules.