Online learning has many advantages, but assessment isn’t necessarily one of them. Especially in large online courses like Massive Open Online Courses (MOOCs), online testing is usually in the form of multiple-choice questions. This makes testing easy to deliver and grade. Indeed, grading can even be automated using a variety of available tools.
Essay-type questions and other more “open” styles of learning assessment are impractical when thousands of students are involved – how can the instructor(s) grade all those essays in a timely way? There are automated essay graders, but their use remains controversial. One popular option is to leverage peer assessment or peer grading; that is, letting the students grade each other’s assignments and tests.
The logistics of peer assessment in online courses can be challenging. Studies show that it’s helpful to have software to help handle tasks like assigning students to cover specific peer reviews, dealing with late reviews, ensuring that peer reviewers remain anonymous throughout the process, and distributing completed peer reviews. Some systems rely on simple numerical scoring (e.g., 1 through 10), while others let students make written comments.
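The reviewer-assignment task mentioned above can be surprisingly fiddly at MOOC scale. As a minimal sketch (not any particular platform's actual algorithm), the following hypothetical `assign_reviewers` function gives each student a fixed number of peers to review, guarantees no one reviews their own work, and ensures every submission receives the same number of reviews:

```python
import random

def assign_reviewers(student_ids, reviews_per_student=3, seed=None):
    """Assign each student `reviews_per_student` peer submissions to review.

    A round-robin over a shuffled circle of students guarantees that
    no one reviews their own work and that every submission receives
    exactly `reviews_per_student` reviews.
    """
    rng = random.Random(seed)
    n = len(student_ids)
    if not 0 < reviews_per_student < n:
        raise ValueError("reviews_per_student must be between 1 and n - 1")
    order = student_ids[:]
    rng.shuffle(order)  # shuffling keeps pairings unpredictable (anonymity)
    assignments = {s: [] for s in student_ids}
    for offset in range(1, reviews_per_student + 1):
        for i, reviewer in enumerate(order):
            submission_owner = order[(i + offset) % n]
            assignments[reviewer].append(submission_owner)
    return assignments
```

Because the offset never wraps all the way around the circle, a reviewer can never be paired with their own submission; real systems would layer on review deadlines, anonymized IDs, and handling for dropped reviewers.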
MOOC-type courses, such as those that leverage the Coursera platform, are inherently conceived to empower learners to educate each other, such as through posts and responses in course forums. This form of “crowd-sourced commentary” helps create a learning community – so why not build the community even further by empowering learners to evaluate one another?
Coursera-based MOOCs rely on “interactive exercises” for student engagement and learning. They can also provide post-test feedback on concepts that a learner didn’t test well on (a process called “mastery learning”).
Coursera also acknowledges that “In many courses, the most meaningful assignments do not lend themselves easily to automated grading by a computer.” Peer assessments in Coursera leverage a “grading rubric” to help students to assess others reliably and provide useful feedback. Coursera also employs a form of crowd-sourcing, in which multiple ratings are combined to yield a meaningful score. The idea is that if multiple students grade each homework, grading “accuracy” comparable to what a TA could provide is achievable. Plus, the peer evaluation process is a useful form of learning for students.
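Coursera has not published the exact way it combines multiple peer ratings into one score, but the basic idea can be illustrated with a simple, hypothetical aggregator. Taking the median of the peer scores, for instance, keeps one overly harsh or overly generous grader from skewing the result:

```python
from statistics import median

def combined_score(peer_scores):
    """Combine several peer ratings into a single score.

    The median is robust to a single outlier grader, which is one
    reason crowd-sourced grading can approach TA-level reliability
    as the number of graders per assignment grows.
    """
    if not peer_scores:
        raise ValueError("no peer scores submitted")
    return median(peer_scores)
```

For example, `combined_score([7, 8, 9, 2])` yields 7.5, largely ignoring the lone low grade; a plain average of the same scores would be dragged down to 6.5.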
In other words: it’s about the learning, not the grades… especially since MOOCs generally are not offered for credit or a fee. Further, Coursera is actively seeking to innovate and evolve its peer grading features.
By the same token, the learning goals of different types of courses can be very different. Statistics 101 might be all about demonstrating mastery of basic concepts through correct or incorrect answers. But demonstrating and evaluating higher-order skills like critical thinking, innovation, or creativity can prove much more difficult in a large, distributed online venue. Still, learners often report positive experiences with peer grading even in high-level, sophisticated MOOCs.
Another peer evaluation challenge frequently raised with respect to MOOCs is plagiarism. Students taking Coursera courses have reported numerous instances of plagiarism – even though the courses are not for credit (though a certificate of completion is issued). Notwithstanding the possibility of developing automated anti-plagiarism software in the future, plagiarism would be difficult to combat in the MOOC format. Further, I know from my own experience grading graduate-level assignments that many “plagiarizers” are earnest students who come from countries where the public education they received was based on rote learning.
A further challenge with peer evaluation in MOOCs is the sheer volume of reviewing that has to happen. Per this definitive Hack [Higher] Education blog post, significant problems arise even in simple assessment scenarios, including: the variability of feedback (which this witty blogger experienced directly), no way to provide feedback on feedback, no way to ask for clarification from anonymous reviewers, and an overall lack (or potential lack) of community and reciprocity.
Using peer review to teach writing – or to review written assignments – presents further unique challenges, as this experientially based report illustrates.
Peer assessment is a key challenge in the delivery of online courses and MOOCs, and it is one that is clearly evolving and benefiting from the experiences and insights of both learners and educators.
Have you taught or been enrolled in a MOOC that has used peer assessment? Please comment and share your experiences.
Featured image courtesy of Michael 1952.