Online assessment was among the first forms of educational technology to be used widely in schools and universities, mainly in the form of multiple choice questions (MCQs). The potential learning benefits and challenges of using MCQs have long been demonstrated, but little evidence has been gathered on students’ perceptions of online assessment (Imtiaz and Mirhashemi, 2013). To gauge the effectiveness of these quizzes, we surveyed a sample of our students on a few key indicators: how did they feel the quizzes benefitted their learning? How did the quizzes affect their learning journey? Which assessment types proved the most effective? Our study built on the work of Struyven, Dochy and Janssens (2005) and drew on data from an initial survey of 31 master’s-level and undergraduate students from a variety of business school programmes.
The undergraduate students use Blackboard Learn and its integrated assessment system. All postgraduate students receive an iPad on enrolment, and since 2011 their learning content has been delivered in the form of PDFs, Panopto recordings and chat forums via our bespoke thin LMS, ‘The Hub’ (more on this in Wells et al., 2013). We control both environments, which guarantees that all participants have access to the hardware and software needed to take the quizzes.
We choose different tools to suit the requirements of each type of online assessment: group projects can be peer assessed using a simple Polldaddy survey, in-class presentations are marked by the audience using a Qualtrics survey, and for formative assessment we use Questionmark and Blackboard Learn.
The qualitative component of the study revealed that many students felt the MCQs helped them familiarise themselves with the course material, and helped them to reflect on and master the topics in their subject area. Questions requiring multiple answers were ranked as less beneficial than those requiring a single answer.
The survey also revealed issues with MCQs that students felt hindered their progress, including unreliable technology, ambiguity in question wording, and the rarity of questions that tested their knowledge rather than merely their ‘ability to use the find function on the computer’. Students also advocated for marks to count towards their grade, as a serious incentive to invest time and effort.
This presentation will examine the data gathered in our initial survey and conclude with an overview of how we plan to act on the information to improve the student experience of online testing.
Imtiaz, M., Mirhashemi, M., 2013. Analyzing trends in technology acceptance studies in education domain. In: 2013 International Conference on Current Trends in Information Technology (CTIT). IEEE, pp. 23–27.
Struyven, K., Dochy, F., Janssens, S., 2005. Students’ perceptions about evaluation and assessment in higher education: a review. Assess. Eval. High. Educ. 30, 325–341.
Wells, M., Lefevre, D., Begklis, F., 2013. Innovation via a thin LMS: A middleware alternative to the traditional learning management system. In: Proceedings of the 30th Ascilite Conference.
If referencing this session, please use: Collis, N. & Bourguignon, J. (2014). Students’ Perceptions of Online MCQs: gathering evidence and learning to swim with it. Session presented at the ALT Annual Conference 2014, Warwick, UK.
Affiliation: Imperial College Business School