In 2014, a team of academic specialists in teaching English for Academic Purposes (EAP) undertook the design and delivery of a five-week massive open online course (MOOC) on writing on the FutureLearn platform. The result was ‘A Beginner’s Guide to Writing for University Study,’ which aimed to introduce international students to the basics of academic writing and to help them write an academic essay of their own. The team saw the course as an ideal opportunity to reach students around the world who would otherwise find it difficult to access this type of study skills material.
Within EAP, a common approach to teaching writing focuses heavily on the writing process, with attention given to steps such as planning, drafting, receiving interim feedback (from teachers and peers), and rewriting. This approach works well in small, face-to-face classes; MOOCs, in contrast, attract thousands of students. To what extent could we engage these students with the essay-writing process and with giving feedback to one another, rather than simply expecting to receive it from an instructor?
This presentation will explore different aspects of the team’s decision to massively scale up this process writing approach, and the need to rely on peer feedback rather than automated correction. As Balfour (2013) suggests, the latter has useful capabilities within the MOOC context, but also important limitations.
Bayne and Ross have referred to MOOC pedagogy as being ‘negotiated and emergent, a sociomaterial and discipline-informed issue’ (2014, p. 57), and our course was indeed heavily influenced by the EAP discipline in ‘negotiation’ with the platform chosen by our institution. We will describe the experience of course creation, as our initial enthusiasm for massive peer feedback quickly gave way to fear that students would not, or could not, engage. We will reflect specifically on the technical and pedagogic challenges we faced in designing appropriate peer feedback mechanisms for essay drafts and final essays, and the solutions we found both within and beyond the platform. We will share how we incorporated the additional use of ‘mentors’, reflecting an awareness of the importance of MOOC teachers as facilitators (Kop et al., 2011). The mentors modelled good feedback practice, encouraged engagement, and monitored learner activity.
Finally, we will evaluate the extent to which students actually engaged with peer feedback, based on our analysis of the course data and of student comments. In conclusion, we will summarise lessons learned from the first runs of the course and plans for improvement, which will be of relevance to anyone considering the design of a skills-based MOOC that relies on a substantial writing or feedback element.
Balfour, S. P. (2013). Assessing writing in MOOCs: Automated essay scoring and calibrated peer review. Research & Practice in Assessment, 8(1), 40-48.
Bayne, S., & Ross, J. (2014). The pedagogy of the Massive Open Online Course: The UK view. [Online]. Available from: http://www.heacademy.ac.uk/assets/documents/elt/HEA_Edinburgh_MOOC_WEB_030314_1136.pdf
Kop, R., Fournier, H., & Mak, J. S. F. (2011). A pedagogy of abundance or a pedagogy to support human beings? Participant support on massive open online courses. The International Review of Research in Open and Distributed Learning, 12(7), 74-93.