Four perspectives on AI and the student digital experience: Reflections on a recent ELESIG event 

It is difficult to escape conversations about the recent release of ChatGPT. This sudden jump in the awareness and capabilities of AI raises questions about the student learning experience and how we might understand its impact. This blog reports on a recent ELESIG round table event with four contributors who discussed AI and the student experience.

Who was on the panel?

The panel was made up of:

  • Rhona Sharpe is the Director of the Centre for Teaching and Learning at the University of Oxford. She played an important role in developing the ELESIG network and has focused on the student experience for many years.
  • Olatunde Durowoju is the Faculty Associate Dean for Diversity and Inclusion at LJMU. He is interested in the way AI might level some of the inequalities between different students.
  • Sue Attewell is Head of EdTech at JISC and jointly runs the National Centre for AI in Tertiary Education, which has been focused on supporting education to adapt to this change.
  • Rob Howe is a well-known early adopter of technology and is Head of Learning Technology at the University of Northampton.

What did they say?

There was a wide range of viewpoints, ideas and insights. The recording is available at the bottom of this blog post, so you can catch up with the details, but here are what I believe were the main points.

Keep calm and carry on adapting

One key message to the academic community was not to panic.

  • We have been here before: The sudden visibility of this change has masked a slowly evolving process of AI assisting many writing tasks over the past few years, through tools such as Grammarly. We have faced technological change before and will probably follow the same arc of change.
  • Things are still developing: In the short term there will be more noise created by this technology, such as revelations around data protection, and the ethics behind the training data, that might make us reconsider how we are using the tools. In the long term we do need to flex and adapt to these developments, but sudden short-term reactions might have ripple effects that we haven’t considered.
  • Increasing assessment types: Ensuring there is a wide range of assessment types on a programme would be beneficial for many reasons, as would reducing assessment bunching, better supporting students’ understanding of assignment requirements, and other well-known assessment support processes. Now is the time to address these and use tools like the JISC AI maturity model to help review the institutional approach. We need to find time to experiment, evaluate and share findings, to understand this change rather than reacting to calls of “it’s here, we must do something right now”.

Keep connected

The academic development and learning technology communities are seen as critical in articulating this change within institutions. Once again, as with COVID, we find ourselves in positions of responsibility.

Keep talking: The widespread sharing and community events around this topic are really encouraging. There will not be one single answer, so attending, talking, and sharing are all positive. This event, along with many other well-attended sessions, demonstrates how the community is sharing its knowledge and challenging each other to think differently. The effects of AI will be wider than ChatGPT, so further development will be a constant going forward. It is impossible to predict where we might be on the technology S-curve (Scillitoe, 2013), when it might begin to flatten out, or indeed if it ever will in this particular area. Clarity of understanding will be emergent, involving cycles of innovation, evaluation, sharing and adoption, all through a critical lens as to wider impacts.

Student engagement

This session was focused on the student digital experience. Obviously, the panel saw the active involvement of students in exploring, discussing, and experimenting with the technology as a key process in understanding and moving forward.

  • Developing the student voice: Students are our active experimenters, and seeing the opportunities and risks through their eyes will help us reduce our assumptions about their use of these tools. It would be a big mistake to leave students out of our conversations moving forward. We are making lots of assumptions about what they might do with this technology, their attitudes towards it, and their general attitudes to cheating. There is also a need to focus on the new skills required to use these technologies effectively.
  • Expanding information literacy: Data and information literacy is an ever-present skill and a cornerstone of past digital skills inventories, but it is now even more essential.
  • Reexamining misconduct: Another area that needs to be re-examined is the research into why students cheat. There is a rich vein of literature in this area, and getting acquainted with previous findings might further develop our shared understanding.
  • Productivity: There is a growing debate around the increase in all of our productivity when engaged with these tools. However, we need to critically question the superficial benefits of generating more material over digging deeper for the pearls.

Think ‘equality’

AI provides an opportunity to reduce some of the barriers and provide a more equitable educational landscape for specific individuals or groups of students.

  • Bots to support students: As an example, the current focus tends to concentrate on providing the content of the essay, whereas the strength of this technology may lie in supporting the structure of writing and offering improvements to the text rather than the content itself. Other scenarios discussed included how specific AI bots might become part of our students’ network of support, offering practical and immediate assistance.
  • Increasing exams: AI has also reignited the debate around the relative benefits of exams versus coursework. Increasing the number of exams may have unintended consequences. Richardson’s (2015) literature review highlights that more research is needed on this impact; tentative conclusions include that white students tend to get better marks in all assessment types, but that all ethnic groups get better marks in coursework than in exams. A review of assessment anxiety literature recommends more work on developing student resilience (Howard, 2020).
  • Training data bias: Perhaps this offers an opportunity to refocus on research, reflection and the development of practice around assessment preparation. Other equality considerations lie in the bias within AI training data, which will impact work on decolonising the curriculum as well as raising other risks.

Join ELESIG to hear more about our activities and events.

Written by Dr Jim Turner, Liverpool John Moores University, Chair of ELESIG.

References

Howard, E. (2020) A review of the literature on anxiety for educational assessments. Ofqual. Available at: https://www.gov.uk/government/publications/a-review-of-the-literature-on-anxiety-for-educational-assessments (Accessed: 22 February 2023).

Richardson, J.T.E. (2015) ‘Coursework versus examinations in end-of-module assessment: a literature review’, Assessment & Evaluation in Higher Education, 40(3), pp. 439–455. Available at: https://doi.org/10.1080/02602938.2014.919628.

Scillitoe, J.L. (2013) ‘Technology S-Curve’, in Encyclopedia of Management Theory. Thousand Oaks: SAGE Publications, Ltd., pp. 847–849. Available at: https://doi.org/10.4135/9781452276090.
