
Digital Assessment SIG: Reflections from our Officers
Academic year 2024/2025
Our last blog shared the experiences of our Co-Chairs over the last year of the Digital Assessment SIG. In this blog, our team of officers share their experiences of being involved in the Digital Assessment Special Interest Group and their hopes for the coming year.
Sally Hanford – University of Nottingham
I joined the Digital Assessment SIG because I’ve been involved in a review of Curriculum Management and e-Assessment at my institution (a collaborative effort across many departments) and I felt that the SIG would help me develop a wider understanding of how other universities are approaching Digital Assessment, discover good practice and learn from the lessons others have learnt. Having worked in Higher Education for over 20 years, I also hoped to be able to contribute to the wider discussions.
I’ve really enjoyed the webinars. It’s been amazing to hear about what is going on at other institutions. The events have opened up some great conversations and opportunities to network.
I’ve learned so much about summative assessment and AI by being involved in a subgroup of the SIG on this subject.
I’m hoping we can go further in exploring this subject in the next academic year and I’m looking forward to finding out more about what is going on sector wide.
Sulanie Peramunagama – Digital Assessment Advisor, Brunel University of London
I joined the Digital Assessment SIG because I’m passionate about digital assessment and have been immersed in it since I began working in UK higher education nearly a decade ago. I am eager to learn from others across the sector, explore emerging technologies, and contribute to the evolving conversation around assessment practices.
Being part of the SIG has been both fulfilling and inspiring. Collaborating with our dedicated group, as well as attending the sessions and demonstrations on innovations in digital assessment technology, has encouraged me to reflect on and improve my own practice.
One of the highlights for me this year was Professor Samantha Pugh’s webinar session on competency-based programmatic assessment. Her approach of “using digital platforms to give students multiple opportunities to demonstrate their learning” illustrated how well technology can be harnessed to enhance assessment design. For me, this captures the true purpose of digital assessment: not just to digitise existing practices, but to use technology thoughtfully to make assessment more meaningful, inclusive, and effective.
I have also enjoyed being a part of a SIG subgroup researching the use of AI in summative assessment. At the time of writing (18.06.2025), we continue to navigate the impact of AI on assessment. From what we have found out so far, staff and student perspectives on AI in assessment are mixed. We share the hope that students will not only learn how to leverage AI to enhance their learning and performance, but also develop the critical awareness to understand its limitations and avoid being misled by it. To support this, assessments themselves need to be thoughtfully redesigned to provide opportunities for students to demonstrate these emerging skills in authentic and meaningful ways.
Overall, I believe our SIG will make a generous contribution to helping the sector rethink and reshape digital assessment practices. I’m excited to see how our work evolves in the coming year and I am happy to be part of this forward-thinking community.
Miki Sun – Learning Technology Service Manager, University of Edinburgh
I am glad to have been selected as an officer and really grateful for this opportunity! I am a Learning Technology Service Manager who looks after a number of digital assessment tools and supports a vast user community. As a Learning Technologist, my passion is to use various technologies to enhance students’ experience and help them achieve their learning goals; but as a service manager, my task is to ensure the smooth running of the centrally supported digital tools and to mitigate the impact of issues on assessment, which means I am limited by the tools and functions we can offer. I often struggle when software vendors cannot develop or provide the solutions and functions my users need, and I often wonder how colleagues in other universities deal with similar challenges. In joining the Special Interest Group in Digital Assessment as an officer, my hope was to meet like-minded people, to learn from their best practices, and to share the challenges we face so that together we can influence the future development of digital assessment pedagogy, technology and policy. I have certainly not been disappointed in the first year!
Indeed, the co-chairs Alison, Gemma and Helen, and the other officers helped me feel welcome and supported as soon as we first met. I am impressed by how plans and decisions were made collectively and quickly through the group’s Padlet boards and Teams meetings, and how our ideas were swiftly implemented, invitations sent, webinars organised and blog articles published, all like clockwork! Not only did the JISC mailing list subscriptions increase daily, but the first webinar on GenAI and Digital Assessment on 21st January 2025 was a great hit, with 91 attendees on the day! When I shared the recordings with my colleagues, I received a lot of positive feedback, which helped grow my confidence. I then shared future events more widely in a LinkedIn post and with my internal service teams, Learning Technology community and user groups. I am so encouraged to see my previous and present colleagues joining the following webinars and giving excellent talks! I learnt so much from their experience and their groundbreaking work in creative and innovative digital assessment practice.
David Callaghan – Senior Educational Technologist – Liverpool School of Tropical Medicine
Chair of AI for Summative Assessment sub-group
In mid-March 2025 I emailed the ALT list to call for participants for a group to look at using AI for summative assessment. This controversial email resulted in the formation of an initial group of 10 individuals from HEIs in the UK and North America. I was also approached by Gemma from the ALT Digital Assessment SIG, who suggested we join as a sub-group – an offer we accepted.
Our initial idea was to create a survey to gather staff and student attitudes to AI for summative assessment – this is now running and will remain open indefinitely (for longitudinal analysis). Link to survey (and the author, Tadhg, says ‘…share, share, share!’):
Other ideas include ‘taking AI to moderation’ and marking a handful of previously assessed pieces of student work that you hold the IPR over (such as your own) using AI.
Next steps for us are to review our survey responses for themes and write them up for the ALT blog and then a journal. Ideally I would like the data from the survey and our other work to be used to lobby government, the OfS and others to use AI effectively in assessment practices. We are also looking to take the two ideas above forward.
My favourite part of what the sub-group has done this year is allowing a group of interested stakeholders to have supportive and frank discussions on this controversial topic.
I have learned, via these discussions and our initial survey, that our assumptions about what others think of the use of AI for summative assessment are fairly accurate – with some interesting responses from the survey, including comments like ‘Well, AI is going to be a little less biased’.
Hopes for next year are to publish in ALTJ or similar, lobby gatekeepers, and create some guidance for colleagues looking to use AI for assessing student work. The two projects, one on using AI in standardisation activities and another on using AI in a pseudo-standardisation meeting, may contribute to this aim.
Nurun Nahar – Assistant Teaching Professor – University of Greater Manchester
I joined the ALT Digital Assessment SIG in September 2024 as an Assistant Teaching Professor with a research background in technology-enhanced learning. In my role I advocate for research-informed pedagogies within my institution and advise my department on harnessing digital tools for both formative and summative assessments. When I came across ALT’s call for expressions of interest in joining this SIG dedicated to digital assessment practice, I recognised an opportunity to join as an officer and help shape a community-driven agenda that aligns with my commitment to evidence-based innovation in student learning experiences, and I must admit it has been a fulfilling experience so far!
From our first meetings, I felt grateful for the open, candid conversations around the challenges we all face in digitally assessing learning—whether workload pressures, questions of validity, or sustaining student engagement. Hearing from SIG colleagues representing diverse professional backgrounds across various institutions has been an enlightening way to explore issues that no single voice can resolve alone, such as balancing academic integrity with inclusive design. I felt proud to be working alongside colleagues who saw strength in drawing on our collective experiences to chart a clear vision for this SIG, whilst also recognising the value of inviting the wider sector to join the conversation to shape our agenda and impact.
On 21 January 2025 I was pleased to co-present in our first webinar alongside Alison Gibson, University of Birmingham (SIG Co-Chair) and Lisa Bradley, Queen’s University Belfast (SIG Officer). My session was titled “Generative AI and the Future of Digital Assessments: Shifting Focus, Leading Change.” It was attended live by 91 participants and has since been viewed over 230 times on YouTube. It offered me a chance to learn just as much as to share: how generative AI might shift us from product-centred summative tasks toward process-rich formative cycles, how multimodal AI tools could support students’ self-regulated learning, and how we as a sector can ensure responsible use of AI for learning, teaching and assessment through multistakeholder collaborations that foster critical dialogue and the sharing of good practice. The lively discussion that followed on the authenticity of learning and assessment and trust in AI tools reminded me how much we still need to achieve in this space and what we could consider exploring collaboratively through the ALT Digital Assessment SIG.
Looking back, being part of this SIG has enabled me to further appreciate how structured dialogue can unearth practical, context-driven strategies, from influencing the procurement of digital tools to ensuring accessibility in digital assessments, and how local innovations can inform broader guidelines. It has also offered me an opportunity to be part of the AI for Summative Assessment sub-group led by David Callaghan and to work closely with colleagues from various Higher Education institutions to understand staff and student perspectives on using AI for summative assessment. Above all, I am pleased that my engagement with this SIG, and the insights shared by various guest speakers in our subsequent webinars, has reaffirmed my belief that assessment design lies at the heart of meaningful digital practice, not the technology itself. Looking to the future, I am excited for what lies ahead for this SIG. I anticipate more challenges will unfold for the higher education sector as we continue to witness rapid progress in multimodal AI technologies, combined with the wider concerns facing the sector. However, I am hopeful that the forward-thinking spirit of this SIG will help us address emerging issues collaboratively, ensuring our digital assessment practices remain resilient, equitable, and pedagogically sound.