Peer marking is often considered risky and largely untested, and can be hotly contested by students. While it is used heavily in MOOCs to reduce the need for markers and administrative overhead, its use in traditional courses remains tentative, with the risks perceived to outweigh the rewards. We found that the paramount challenges revolved around fostering trust, both within the markers themselves (can student markers trust their own judgment? [1]) and between students (can they trust the feedback received from their peers? Could the marks given be too high or too low? [2]).
Data-driven design and pedagogical approach
Our design choices stemmed from an opportunity to empower learners by offering them a vehicle to express their voice within the assessment process [3], and to implement learner-centered design based on feedback received from previous cohorts [4].
We first developed a prototype system and tested it on a relatively small cohort. Following the first round of engagement, we gathered both quantitative and qualitative feedback from the students and analyzed the data. The narratives that emerged pointed to opportunities to enrich the student experience. To verify those narratives, we conducted further research, weighed the feedback and improved the learning design. By fostering trust and focusing on a qualitative rubric, we were able to improve the activity for a new cycle of students.
Emergent model of peer assessment
Through a synthesis of formal student feedback, social learning methodology and scholarly research on peer assessment, we modelled our peer assessment activity on experimentation, empowerment and partnership (EEP).
The key student feedback we received was that the rubric did not capture the complexity of the feedback students wanted to give in their assessments. We therefore raised the rubric to a quality similar to that used by our academic staff.
We then trialled the new, improved EEP model in a new course, with a much larger cohort. The result was a seemingly simple method of peer review using the core forum tool. The key considerations that made this work were:
- A forum with the ability to create separate groups and apply ratings (see the aggregation sketch after this list)
- Strong learning design with a detailed rubric
- A highly engaged community team and tutors
- Shared responsibility for assessment, based on the key drivers defined by Fraser and Hack [5]
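To make the rating mechanics concrete, here is a minimal sketch of how per-criterion peer ratings from a forum could be combined against a weighted rubric into a provisional mark, flagging wide disagreement for tutor moderation (one way to address the trust concerns above). The criterion names, weights, scale and threshold are illustrative assumptions, not our actual course configuration.

```python
# Hypothetical sketch: aggregating peer forum ratings against a weighted rubric.
# Criterion names, weights, the 1-5 scale and the disagreement threshold are
# illustrative assumptions, not the configuration used in our course.

from statistics import mean, pstdev

# Rubric criteria and weights (assumed; weights sum to 1.0).
RUBRIC_WEIGHTS = {"argument": 0.4, "evidence": 0.4, "presentation": 0.2}

# Flag for tutor moderation when peer scores spread by more than this.
DISAGREEMENT_THRESHOLD = 1.0


def aggregate_peer_marks(ratings):
    """Combine per-criterion peer ratings (1-5) into a weighted provisional mark.

    `ratings` maps each rubric criterion to the list of scores given by
    peer markers. Returns the provisional mark (as a percentage) and
    whether a tutor should moderate it.
    """
    needs_moderation = False
    weighted_total = 0.0
    for criterion, weight in RUBRIC_WEIGHTS.items():
        scores = ratings[criterion]
        weighted_total += weight * mean(scores)
        # A large spread in peer scores signals the trust problem noted
        # above: some markers may be rating too high or too low.
        if pstdev(scores) > DISAGREEMENT_THRESHOLD:
            needs_moderation = True
    provisional_mark = weighted_total / 5 * 100  # convert 1-5 scale to %
    return provisional_mark, needs_moderation


if __name__ == "__main__":
    peer_ratings = {
        "argument": [4, 5, 4],
        "evidence": [3, 5, 2],       # wide spread -> flagged for the tutor
        "presentation": [4, 4, 5],
    }
    mark, flagged = aggregate_peer_marks(peer_ratings)
    print(f"Provisional mark: {mark:.1f}% (tutor moderation: {flagged})")
```

In practice the shared-responsibility driver above means a flag like this would route the submission to a tutor rather than override the peers' judgment.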
Student responses to our EEP model were overwhelmingly positive. Attendees will be taken through our process, from setting up the activity and its mechanics to our rationale and motivation for implementation. We'll also share the detailed feedback we gathered, and talk through the results and student reactions.
References
1. Langan, M. & Wheater, C. (2016). Some insights into peer assessment. LTiA, Issue 4, CeLT, MMU. Retrieved 18 March 2016, from http://www.celt.mmu.ac.uk/ltia/issue4/langanwheater.shtml
2. Magin, D. & Helmore, P. (2001). Peer and teacher assessments of oral presentations: how reliable are they? Studies in Higher Education, 26(3), 287-298.
3. Fraser, K. & Hack, K. (2015). Students as partners in learning, teaching and assessment, p. 3. Retrieved 7 June 2016, from https://www.heacademy.ac.uk/sites/default/files/students_as_partners_in_learning.pdf
4. Green, H., Facer, K., Rudd, T., Dillon, P., & Humphreys, P. (2005). Personalisation and digital technologies, p. 23. Retrieved 7 June 2016, from http://www.nfer.ac.uk/publications/FUTL59/FUTL59.pdf
5. Fraser, K. & Hack, K. (2015). Students as partners in learning, teaching and assessment, p. 15. Retrieved 7 June 2016, from https://www.heacademy.ac.uk/sites/default/files/students_as_partners_in_learning.pdf