If you suspect that it’s easier for learners to cheat online than in a face-to-face setting, you’re correct. The reason is fairly obvious. 

Face-to-face, instructors can watch learners complete assessments to ensure they’re not using their phone, looking at their neighbors’ paper, or otherwise gaming the system. 

Online, they can’t.

It’s impossible to achieve the same level of certainty online that we can get using a proctored in-person exam.  Prior to an in-person proctored exam, learners are typically required to empty their pockets, roll up their sleeves, display their hands, leave all their personal effects outside the testing room, and sit well apart from their neighbors.  During a proctored exam, one or more proctors monitor the learners at all times from a few feet away, watching for any possible sign of cheating.  In addition, proctor fees (and the clearly telegraphed penalties associated with cheating) help deter cheating in proctored exams.

Applying these practices in full to online exams simply isn’t possible.

Bottom line: if cheating cannot be tolerated—as in medical or legal exams—online assessments aren’t a good fit.

For situations that aren’t quite so life-and-death, however, each of the following strategies can help reduce the incidence of cheating.  Note that none of the strategies is 100% foolproof, and some won’t be appropriate for every instructional situation.

  1. Handwritten assessments.  For work product assessments, requiring handwritten assignments from the beginning of the course forward creates a baseline that can allow instructors to spot assignments written by (or inexpertly forged by) someone other than a given learner.  The downside, of course, is that assigning handwritten assessments significantly increases the instructor’s grading burden throughout the course.

  2. Incremental assessments.  For work products that are produced in incremental stages, such as essays or UX design iterations, requiring the submission of incremental drafts over time makes it more difficult for learners to pass purchased work off as their own.

  3. Unusual, ever-changing assignment requirements.  Assigning an essay or other work product on a common topic gives learners intent on cheating a lot of choices.  Stipulating an unusual topic, or detailed specifications that change with each cohort, may help make it more difficult for learners to lean on Google or chatbots for ready-made answers, or to purchase others’ work to pass off as their own.

  4. Synchronous virtual assessments.  Meeting with learners one-on-one via webcam to view their live performances, ask them questions in person about the work product they submitted, or quiz them orally can help us identify and prevent cheating. See Table 1 below for examples.

  5. Commercial plagiarism detection software targeted (and priced) for institutional use.  When learners submit an essay or other work product to plagiarism detection software such as TurnItIn, the software compares every line of the submitted essay to content available on the internet, and also compares that submitted essay to content previously submitted by anyone at any institution that uses TurnItIn.  The result is a report showing clearly how many contiguous sentences in the submitted essay appear in an earlier work (and, therefore, have likely been plagiarized).  Learners then submit both their essays and their plagiarism reports to their instructors for grading.  While this approach won’t prevent learners from purchasing a written-to-order essay, it does do a good job of identifying cut-and-paste plagiarism.

  6. Online proctoring services targeted (and priced) for institutional use.  Online proctoring services such as ProctorU use a variety of strategies to prevent online cheating, including browser lock-down, screen capturing, live proctors who view learners through a required webcam, and more. The invasive technical requirements these services rely on can make taking assessments difficult for learners, and can’t detect or prevent every possible cheating scenario. However, they can be a solid option for online-only scenarios that require greater-than-average assurance that learners are taking their own exams without study aids.
Table 1. Examples of synchronous virtual assessments

1. Watching a performance virtually in real time, and being able to stop learners during the performance as necessary to ask questions, ensures that instructors know who’s performing and also helps instructors spot cheating in the form of notes (or anything else disallowed).

2. Interviewing learners virtually about the work product they submitted can help instructors differentiate between learners who actually produced a given work product, and those who did not.  (Learners who created a submission will be familiar with the content, the process they used, and the reasons why they chose certain approaches; learners passing off another’s work will not.)

3. Quizzing learners orally, via webcam, is the quickest, surest way to identify whether learners know the material or not. This is especially true if virtual one-on-one oral quizzes take the form of unannounced “pop” quizzes.

Conclusion

It would be wonderful if only those online learners we expected to complete our assessments actually did so, and did so without breaking any of our rules (like phoning a friend or copying-and-pasting AI-generated answers).

But here’s the thing. When we deliver instruction online, we set ourselves up for plagiarism issues.

In effect, online instruction says to learners, “This instruction is super important…. just not important enough for us to be there in person with you to present and discuss it.” Throw in the uneven quality of many online assessments (present company excepted, of course); the fact that our learners already spend most of their waking hours in front of a screen, which drives feelings of isolation and encourages work-arounds; and (the big one) the fact that we’ve chosen to deliver assessments on a medium that enables instant, seamless cheating—and we really do have to feel for our learners. It’s like leaving a steak unattended on a low table and then getting upset when our dog helps himself to it. Yes, advantage was taken…. but who’s really at fault?

Having said all that, sometimes online assessments make sense for logistical reasons—and sometimes those assessments aren’t mission-critical. In such cases, applying one or more of the strategies described above can help set expectations and drive accountability, which benefits everyone involved.

What’s YOUR take?

Is minimizing cheating relevant to the instruction you design and deliver? If so, what strategies have you found effective in reducing plagiarism and driving accountability?  Please consider leaving a comment and sharing your hard-won experience with the learning community.
