As class sizes increase without a corresponding increase in teaching staff, the challenge of providing timely, formative feedback to students grows. Moving to e-assessment might seem to be the answer; however, it does not necessarily remove the need for staff intervention. We have addressed this, and related issues, at our University by piloting a novel method of peer review using Adaptive Comparative Judgement (ACJ) software. ACJ is a process in which students compare pairs of submissions, making a simple binary judgement about which piece of work is better and providing formative feedback (Pollitt, 2012). As judgements accumulate, the software begins to rank the submissions, and the end result is a series of submissions sorted from best to worst.
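The ranking step can be sketched in a few lines of Python. This is an illustrative win-count version only: the `judge` callback and the exhaustive pairing are our assumptions for the sketch, whereas real ACJ software chooses pairs adaptively and fits a Rasch-style statistical model to the judgements.

```python
from itertools import combinations
from collections import defaultdict

def rank_by_judgements(submissions, judge):
    """Rank submissions from binary pairwise judgements.

    `judge(a, b)` is a hypothetical callback returning whichever of
    the pair the judge considers better. For simplicity, every pair
    is judged once and submissions are ranked by win count; actual
    ACJ tools pair submissions adaptively and fit a Rasch model.
    """
    wins = defaultdict(int)
    for a, b in combinations(submissions, 2):
        wins[judge(a, b)] += 1
    # Sort from best (most wins) to worst.
    return sorted(submissions, key=lambda s: wins[s], reverse=True)
```

For example, with numeric "submissions" and a judge that simply prefers the larger value, `rank_by_judgements([2, 5, 1, 4], max)` returns `[5, 4, 2, 1]`.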
We begin with a live, interactive demo. A version of the software, loaded with pictures of animals and flowers, will be displayed on the presentation screens, and we will ask the audience to choose which of each pair of photos is better. Each participant will thus get a feel for the judging experience and will be able to give us feedback on the method and the software design.
We then explain the principles behind ACJ/APR before showing how we use it concurrently in a FutureLearn MOOC (n=1000) and an honours course in Computing Science (n=80), providing examples of student reactions to this approach. The use of this learning technology with MOOC learners as well as more traditional students gives it the potential for wide impact.
Computing Science students often write code before they have learnt to read other people's code (Glass, 2003). Our use of ACJ attempts to redress this imbalance at the early stages of encountering a new programming language. It is important that students complete the task themselves before undertaking this peer review activity, so that they have a deep understanding of the problem before they start to make judgements on their peers' work (Nicol, 2018). The activity therefore only allows students to read other people's code after they have completed the coding task themselves. It is also underpinned by recent research suggesting that making comparisons is fundamental to our faculty of evaluative judgement (Nicol et al., in press).
The problem is deliberately set up so that a range of code solutions is possible. As learners see other learners’ code, they rapidly gain insight into the alternative solution space. One student commented: “I’m impressed by how different the solutions are.” Another said, “Isn’t it always fascinating to see how different solutions might exist for the same problem?” (FutureLearn comments)
While the example we give is discipline specific, ACJ can be used for any task where there are a variety of ‘correct’ solutions, and we would like to extend our use of ACJ/APR to other subjects. As well as the interactive demo at the beginning, a major part of this session will be a group discussion of the applicability of ACJ to the audience’s subject areas and suggestions for how this process might be refined.
Glass, R.L., 2003. About education. In: Facts and Fallacies of Software Engineering. Addison-Wesley, Boston, MA.
Nicol, D., 2018. Unlocking generative feedback through peer reviewing. In: Grion, V. and Serbati, A. (eds.) Assessment of Learning or Assessment for Learning? Towards a Culture of Sustainable Assessment in Higher Education. Pensa Multimedia, pp.47–59.
Pollitt, A., 2012. The method of Adaptive Comparative Judgement. Assessment in Education: Principles, Policy & Practice, 19(3), pp.281–300.
Sarah Honeychurch posted an update in the session "From a thousand learners to a thousand markers: Scaling peer feedback with Adaptive Comparative Judgement":
Here's a link to our slides, including contact details and a link to the software.
If you would like to find out more, I am @nomadwarmachine (firstname.lastname@example.org).