Session Description
This workshop will introduce the OnTask learning analytics tool and provide participants with a hands-on opportunity to learn how detailed, personalized, actionable feedback can be provided to large cohorts of students. As part of using OnTask in practice, participants will explore student perceptions of computer-mediated feedback and data collection, and identify what good feedback looks like (Nicol and Macfarlane‐Dick, 2006).
Providing students with timely, actionable feedback on their progress throughout a course can help them progressively adjust their learning and study strategies, feel supported by their teachers, and remain motivated (Boud and Molloy, 2013). However, beyond very small groups of students this is difficult to achieve (Pardo et al., 2019). We are using OnTask within large-scale, fully online MicroMasters courses, but believe it could also be very useful in large face-to-face undergraduate courses, particularly first-year foundation courses where all students need to reach common baselines of knowledge and competency. As well as giving details of our own uses of OnTask (staff/participant numbers; relationship to the learning design process), the session will include key insights and lessons drawn from uses of OnTask across a number of delivery modes and institutions.
OnTask is a software tool that combines data about students’ activities during the course of their studies with personalized, targeted feedback designed by teachers. The data used is tailored to the specific pedagogical design of the course, avoiding a “one size fits all” approach to analytics (Gašević et al., 2016), and short “snippets” of feedback text written by teachers are associated with simple rules based on this data. These rules are used to generate personalized emails to students with timely, actionable feedback relevant to them. Examples of the kinds of feedback OnTask could support include directing students to additional readings, suggesting remedial learning content to address areas of challenge, highlighting effective study techniques for tasks in the course, and signposting further sources of support. Students receive the suggestions relevant to them based on their actions and progress within the course.
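To make the rule-plus-snippet mechanism concrete, the sketch below shows the general pattern in Python. It is a minimal illustration rather than OnTask’s actual implementation: the column names, thresholds, and snippet texts are hypothetical stand-ins for what a teacher might configure.

```python
# A minimal sketch of the rule-plus-snippet idea, not OnTask's actual
# implementation. Column names, thresholds, and snippet text are
# hypothetical examples of what a teacher might write.
from dataclasses import dataclass
from typing import Callable

# One row of course data per student; the columns reflect the course's
# pedagogical design (e.g. quiz scores, video views).
StudentRow = dict[str, object]

@dataclass
class Rule:
    condition: Callable[[StudentRow], bool]  # teacher-written rule
    snippet: str                             # feedback text used when the rule fires

# Hypothetical teacher-authored rules for a first-year course.
rules = [
    Rule(lambda s: s["quiz_1_score"] < 50,
         "Your Quiz 1 result suggests revisiting Module 1; the practice "
         "set at the end of the module is a good place to start."),
    Rule(lambda s: s["videos_watched"] == 0,
         "You haven't opened this week's videos yet. The first one covers "
         "material you'll need for next week's tutorial."),
    Rule(lambda s: s["quiz_1_score"] >= 80,
         "Great work on Quiz 1. If you'd like a challenge, try the "
         "extension reading listed in the course handbook."),
]

def compose_email(student: StudentRow) -> str:
    """Assemble a personalized email from the snippets whose rules fire."""
    parts = [f"Hi {student['name']},"]
    parts += [r.snippet for r in rules if r.condition(student)]
    return "\n\n".join(parts)

# Two students receive different messages from the same rule set.
print(compose_email({"name": "Alex", "quiz_1_score": 42, "videos_watched": 0}))
print(compose_email({"name": "Sam", "quiz_1_score": 91, "videos_watched": 3}))
```

The same rule set produces different emails for different students, which is the sense in which the feedback is personalized while remaining entirely teacher-authored.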
As well as serving a practical purpose, OnTask provides a useful counterpoint to much of the common understanding of learning analytics. The learning analytics systems in operation at institutions tend towards commercial products based on predictive analytics, with dashboard visualisations, where the underlying algorithms that make decisions about students are opaque. In contrast, OnTask is not a predictive system, nor is it a dashboard, and the only “algorithms” are the rules written directly by teachers.
We believe that this session therefore aligns well with the Learning Analytics conference theme, as it will critically explore the role of the teacher in analytics systems; attitudes towards student data collection and transparency; the potential for closer alignment of data and pedagogy; and the relationships between data-informed decision making and student feedback. We hope that through exploring this specific example, participants will gain a wider perspective on learning analytics more generally.
Session Content
Attendees at the workshop will receive an overview of OnTask and then work in groups with small sample data sets to create a series of personalized feedback “snippets”.
Group discussions will focus on the extent to which it should be transparent to students that there are machines in play, and on how to craft feedback that directs students towards helpful action while respecting their agency. The session will wrap up with feedback from participants on the potential for application in their own institutions.
15 minutes: Introduction to OnTask and walk-through demo
30 minutes: Group exercise – design several feedback rules based on a sample data set and a specific scenario. A set of questions to consider whilst designing the rules will be supplied. Groups discuss and note points (an illustrative sample data set and rule are sketched after this schedule).
15 minutes: Feedback from each group – key points to note.
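To give a sense of what the group exercise involves, the sketch below shows a hypothetical extract of a per-student data set and a single teacher-written rule over it. The column names, values, and threshold are invented for illustration and do not come from OnTask or from our courses.

```python
# Hypothetical sample data and a single rule of the kind groups design in
# the exercise; columns, values, and the threshold are invented.
import csv
import io

sample_csv = """\
student_id,logins_last_week,quiz_2_score,assignment_submitted
u001,0,35,no
u002,5,78,yes
u003,2,55,no
"""

students = list(csv.DictReader(io.StringIO(sample_csv)))

# Rule: low quiz score and no submission -> attach a check-in snippet.
def rule_fires(s: dict) -> bool:
    return int(s["quiz_2_score"]) < 50 and s["assignment_submitted"] == "no"

# Previewing which students a rule selects is the kind of sanity check
# groups can do before attaching feedback text to the rule.
print([s["student_id"] for s in students if rule_fires(s)])  # -> ['u001']
```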
References
Boud, D., Molloy, E., 2013. Rethinking models of feedback for learning: the challenge of design. Assess. Eval. High. Educ. 38, 698–712. https://doi.org/10.1080/02602938.2012.691462
Gašević, D., Dawson, S., Rogers, T., Gasevic, D., 2016. Learning analytics should not promote one size fits all: The effects of instructional conditions in predicting academic success. Internet High. Educ. 28, 68–84. https://doi.org/10.1016/j.iheduc.2015.10.002
Nicol, D.J., Macfarlane‐Dick, D., 2006. Formative assessment and self‐regulated learning: a model and seven principles of good feedback practice. Stud. High. Educ. 31, 199–218. https://doi.org/10.1080/03075070600572090
Pardo, A., Jovanovic, J., Dawson, S., Gašević, D., Mirriahi, N., 2019. Using learning analytics to scale the provision of personalised feedback. Br. J. Educ. Technol. 50, 128–138. https://doi.org/10.1111/bjet.12592