Session Description
Assessment and feedback are two important areas of concern for the UK Higher Education (HE) sector, which is increasingly driven by external factors including the National Student Survey and the Teaching Excellence Framework. HE institutions (HEIs) have been particularly concerned with the efficiency, accessibility and transparency of the assessment process, and online assessment is seen as one solution to these challenges. HEIs are adopting technologies for online submission, marking and feedback, aiming to make this process more cost- and time-effective and more transparent (Bausili, 2018). It is therefore unsurprising that institutions are supporting the standardisation of online assessment, with policies covering anonymous marking, moderation, online submission, grading time and online feedback (Newland et al., 2013). However, it has been suggested that this standardisation negatively impacts students’ perception of feedback. Winstone et al. (2017) identified barriers that inhibit students’ use of online feedback, ranging from difficulties in decoding terminology to an unwillingness to expend effort on reading feedback. They argue for less standardisation and more student agency in feedback to mitigate these barriers and allow more dialogue and ownership. Zimbardi et al. (2017) found that when online tools are designed to create agency in feedback, students do engage with it and perform better. However, one can argue that current online tools are designed to support standardisation rather than to enhance dialogue and agency. This research examines these issues by involving users to better establish their requirements.
The project comprises a review of the literature and of current technical provision for assessment and feedback in Virtual Learning Environments (VLEs), together with data collected from ‘Sandpits’ with students and lecturers in two HEIs in the UK. A ‘Sandpit’ is a type of creative design-thinking focus group in which participants are stimulated by a narrative scenario around the use of a product, object or artefact and are encouraged to critique, discuss and re-design it (Frohlich, Lim and Ahmed, 2014; Casanova and Mitchell, 2017). These ‘Sandpits’ seek to clarify the role of VLEs in assessment and feedback by exploring students’ perceptions of feedback and how these are being addressed, and teachers’ perceptions of the constraints they face. We are exploring what is currently available, looking to improve interface designs and features, and will present these to VLE product designers.
The ALT-C audience is one sensitive to the issues raised in using technologies to increase the efficiency, accessibility and transparency of assessment, and may offer an additional perspective to those already collected. In this workshop participants will experience a lightning review of the research to date and will be asked to take part in a creative design exercise exploring VLE solutions, in which they will critique and redesign two scenarios (‘agency’ and ‘analytics and transparency’ in feedback) aligned to Theme 1, ‘students’ data and learning analytics’.
Participants will engage with a creative methodology that they can use to collect data in their own institutions about perceptions and usage of learning technologies, and will contribute to improving online assessment tools in VLEs.
Session content
Data are being collected through ‘Sandpits’: as described above, a ‘Sandpit’ is a creative design-thinking focus group in which participants are stimulated by a narrative scenario around the use of a particular product, object or artefact and are subsequently encouraged to critique, discuss and re-design it (Frohlich, Lim and Ahmed, 2014; Casanova and Mitchell, 2017).
The workshop will have the following structure:
– Introduction (5 min)
– Review of findings (10 min)
– Scenario 1 presentation – ‘Agency in feedback’ (5 min)
– Scenario 2 presentation – ‘Analytics and transparency in feedback’ (5 min)
– Group work on scenario 1 or 2 – critique based on what to keep, lose or change (15 min)
– Redesign scenario (10 min)
– Voting – three best and three worst features (5 min)
– Questions and discussion (5 min)
References
Bausili, A. (2018) ‘From piloting e-submission to electronic management of assessment (EMA): Mapping grading journeys’, British Journal of Educational Technology, 49(3), pp. 463–478.
Casanova, D. and Mitchell, P. (2017) ‘The “Cube” and the “Poppy Flower”: Participatory approaches for designing technology-enhanced learning spaces’, Journal of Learning Spaces, 6(3).
Frohlich, D. M., Lim, C. S. C. and Ahmed, A. (2014) ‘Keep, lose, change: Prompts for the re-design of product concepts in a focus group setting’, CoDesign, 10(2), pp. 80–95. doi: 10.1080/15710882.2013.862280.
Newland, B. et al. (2013) Electronic management of assessment survey report. Heads of eLearning Forum (HeLF).
Winstone, N. E. et al. (2017) ‘“It’d be useful, but I wouldn’t use it”: barriers to university students’ feedback seeking and recipience’, Studies in Higher Education, 42(11), pp. 2026–2041.
Zimbardi, K. et al. (2017) ‘Are they using my feedback? The extent of students’ feedback use has a large impact on subsequent academic performance’, Assessment & Evaluation in Higher Education, 42(4), pp. 625–644.