Putting principles into practice: an ethical approach to GenAI
Authors: Rosemarie McIlwhan (Heriot-Watt University), Kerith George-Briant (University of Abertay), Emma Duke-Williams (University of Dundee), Sara Preston (University of Aberdeen), Louise Drumm (Edinburgh Napier University)
AI was one of the topics on everyone’s lips at the ALT Conference “Stronger Foundations, Broader Horizons” on 24 October. This comes as no surprise, but how many of those conversations were built on the shaky foundations of simply accepting the “tech bros magic and misdirection” (as Sheila Macneill suggests), rather than taking a more critical, ethical approach?
We were delighted to explore a more critical approach with participants in our workshop “Putting principles into practice: an ethical approach to GenAI”. The workshop introduced the Scottish AI in Tertiary Education Network’s Principles on AI in learning and teaching, which have been adopted across the HE sector in Scotland and inform institutional practices around AI in learning and teaching. When developing the Principles, the Scottish AI in Tertiary Education Network (ScAITEN) members felt strongly that it was essential not only to acknowledge the ethical complexity of GenAI, but also to explore what is actually meant by ethics and how this manifests in practice. The workshop provided an opportunity for #altc participants to discuss this.
The ScAITEN ethics principle states:
“We acknowledge the ethical complexity of GenAI and commit to using GenAI with integrity, and for the public good in keeping with the shared values of the Scottish tertiary education sector.
- We will educate and seek to address any ethical issues created by the training and development of GenAI, particularly in terms of equity, copyright, intellectual property, privacy and data protection/governance; and will advocate for ethical future developments.
- We will ensure that any engagement with or use of GenAI complies with our legal responsibilities.”
The wider ScAITEN principles advocate for underpinning any conversation about the ethics of GenAI with the ability to choose or refuse to use GenAI. However, what we heard in the workshop were concerns that the ability to refuse GenAI is becoming increasingly limited as more and more tech companies build GenAI into their products with no way to switch it off. In tertiary education this raises issues for staff and students alike in terms of agency and ethics. This led to discussion of wider questions: if you are a lecturer who is ethically opposed to GenAI, for, say, environmental or equity reasons, are you appropriately supporting your students if you do not build the use of GenAI into your teaching, particularly if the ability to use that technology might be considered essential for future employability? These questions were also raised in the context of staff who are not opposed to GenAI but who have not (perhaps yet) developed the skills or confidence to use it, or to support their students to do so. We won’t pretend that we have the answers, but it is important that we explore the questions both institutionally and individually, so that we can come, if not to a consensus, then at least to an understanding of each other’s positions, and can take action from there.
The position for students appeared a little different, with some participants sharing how students who choose to refuse GenAI (due to ethical, environmental, equity or access concerns) were offered alternative ways to engage with the learning and/or assessment. The use of GenAI for marking and feedback was another key topic in relation to students, with many noting that students were uncomfortable with GenAI being used to mark their work. This flagged the importance of keeping a human in the loop in any assessment decision-making that involves GenAI. In many instances, however, it appeared that the majority of students were engaging with GenAI without raising questions about ethics or other issues. This led to further discussion of what students need to know about GenAI and how we support them to develop a suitably critical approach to engaging with the technology. For some workshop participants, though, there was a sense that it would be strange if students did not want to engage with AI (as distinct from GenAI), as it has been embedded in industry for some time. The question “what are the consequences if a student or staff member refuses to use GenAI?” was also discussed; the answer very much depends on context, for example whether use is required and what is at stake. These discussions reinforced for us the importance of the ScAITEN position that institutions and individuals need to recognise the ethical complexities of GenAI.
Agency (another ScAITEN principle) came up time and again, with discussion focusing on enabling staff and students to make critical, informed choices about using AI (including GenAI). There are questions to explore around the role and responsibility of institutions and who within those institutions leads this work (leadership currently sits in a range of places). Central to this discussion was a comparison of the benefits of institution-wide policy and guidance with discipline-specific and/or more locally developed guidance or practice. Whilst there were advocates for both approaches, there was agreement on the need to work in partnership with staff and students, taking a collaborative journey into the realms of AI in learning and teaching rather than a top-down approach. This links back to people having agency in their choices and ensuring appropriate, context-specific AI literacy. How we create safe spaces for students and staff to develop such literacy, to explore and experiment with AI, and to be honest with each other about this was a key question. For many people there is a sense of fear around AI, sometimes about the technology itself but also about how they might be perceived if they use it or not (for example, staff were concerned about being seen as competent in their jobs, and therefore not needing to use AI, or perhaps as lazy if they did use it). As the ScAITEN principles state, steps need to be taken so that “everyone who works and studies in the Scottish tertiary education sector develops understanding of GenAI as it relates to their context.” Creating safe spaces for people to engage not only with the technical how-to aspects of GenAI but also with the critical, ethical questions around it helps to inform those choices.
Practical actions suggested by workshop participants to help people consider some of the ethical issues around use of GenAI included:
- Read the privacy statement before using the technology.
- Explore how the foundation GenAI models were trained – whose data is used and how, and whose labour the model is built on.
- Be transparent in the assessment process and always keep a human in the loop on any decision-making, particularly where GenAI is used.
- Develop the ability to critique the outputs, so that you know when something is a hallucination or misinformation, is biased, or otherwise raises ethical issues.
- Keep talking to each other; AI (and GenAI in particular) can be a polarising issue, but it’s important that diverse perspectives are part of the conversation.
This is a complex and nuanced area, and perspectives vary depending on individual values and experiences. Continuing these conversations openly and transparently is essential to helping everyone make informed, critical choices about whether to use or refuse GenAI.
Thank you to everyone who participated in the workshop and shared their ideas. This blog presents the workshop facilitators’ reflections on the discussions; any errors or inaccuracies are solely the responsibility of the blog authors.
