Suddenly, educators everywhere are panicking that ChatGPT can draft high-quality work that can also fool anti-plagiarism software. Game over for written assessments? Maybe not. But let’s take this problem one step at a time.
Back in simpler and more innocent times (ahem, 2018), I wrote a piece for THE Student about essay mills. That was yesterday’s problem, though. Who needs a pesky human ghost writer when you can now turn to a faithful and guileless robot helper?
This trumped-up chatbot can put together first-class undergraduate essays and even pass various academic and professional exams. Drat and blast! What do we do now? One mooted solution is to return to good old exams. Try cramming ChatGPT down your sleeve! That has been opposed here in THE on account of cost and equality issues, as well as the fairly obvious next generation of wearable technology, some newly minuscule device that can read an exam question and surreptitiously dictate an answer. (I hope you’re not eating right now, but Google has a patent for a device that you inject into your own eyeball.)
Meanwhile, naming no names, a remarkable number of otherwise clever academics have advised each other to simply look out for obvious errors produced by ChatGPT – a policy that will last a comically short time before those errors are ironed out by OpenAI’s small army of contractors.
So, back to the humble essay. Really, the purpose of the essay is to demonstrate that a student has managed to do at least some reading and at least some thinking, followed by at least some writing. More of each is a bonus. Our veiled villain ChatGPT promises all of these in an instant.
So, what if there was a way to maintain the essay in all its three constituent parts – reading, thinking, writing – while also bringing in a measure of oversight like an exam offers?
I think I have an answer.
And that answer lies in a somewhat obscure feature of Google Drive: the “version history”. Pick any file you’ve worked on in Google Drive, then click on File > Version History > See Version History, and you’ll be presented with a detailed log of all changes made to that file. If it’s been edited by different people, you’ll also see which user made each change. Microsoft Office seems to have a similar feature, too. Click into the details in that log and you’ll see every little sentence you added, every typo you corrected, every awkward passage you rephrased or deleted. It’s really remarkably useful as a writing aid, not to mention a way to recover stuff you actually wish you hadn’t deleted. But it also leaves a very particular – and very human – trail of writing.
If a student actually does the writing themselves, they will similarly write things, move things around, add bits, delete bits; all the usual meandering manoeuvres of human writing. And all of that will appear in the version history, indelibly timestamped and tagged per user.
If, on the other hand, the text is written for them – whether by a machine or by an old-fashioned human ghostwriter – then they receive a completed text, which they can only copy and paste into the document all at once. Even if they go to the trouble of manually typing out the provided text, the version history will simply show them typing it out word by word from start to finish. No human writes that way.
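That contrast between incremental human editing and a single wholesale paste can be made concrete. The sketch below is illustrative only: it assumes you have already extracted a document’s revision history as a chronological list of cumulative character counts (Google Drive exposes revision data through its web interface and API, but that export step is not shown here, and the `looks_pasted` function and its threshold are hypothetical, not part of any Google tool). A history where nearly all content arrives in one revision looks like a paste; steady accumulation looks human.

```python
def looks_pasted(revisions, threshold=0.9):
    """Flag a revision history where nearly all content arrived in one jump.

    revisions: cumulative character counts, one per saved revision, in
    chronological order (a hypothetical export of a version history).
    Returns True if a single revision accounts for more than `threshold`
    of the final document's length.
    """
    if len(revisions) < 2 or revisions[-1] == 0:
        return False  # too little history to judge
    # Per-revision change: the first revision's size, then each difference.
    deltas = [revisions[0]] + [b - a for a, b in zip(revisions, revisions[1:])]
    return max(deltas) / revisions[-1] > threshold

# A human-looking history: content accumulates (and shrinks) over many edits.
human = [120, 340, 310, 650, 900, 1400, 1380, 2100]
# A paste-looking history: an empty document, then everything at once.
pasted = [0, 2050, 2100]

print(looks_pasted(human))   # → False
print(looks_pasted(pasted))  # → True
```

Of course, in practice a quick visual scan of the version history log is usually enough; no automation is required to spot a single giant paste.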
To make this into a useful ChatGPT-proof system, start by asking your students to create a document (or a whole folder) that they share with you. Or you can do it for them to save time. Make sure it’s with their university login, not some random account they could pass on to a ghost writer.
Next, explain to them very, very, very carefully that they are being asked to do all their own writing – absolutely every keystroke – in that document or folder. You can explain all the reasoning above, and it doesn’t have to be couched as a punitive measure or distrusting surveillance technique. It can be described quite honestly as a method to enable them to transparently verify their efforts – to the university and, indeed, to employers who just learned another reason to be sceptical of university graduates.
When the deadline comes, they can easily download their file in another format (Word, Excel, PDF) and submit it to a regular submission point such as Moodle or Blackboard, as well as to conventional plagiarism checkers such as Turnitin. But the version history is still there, demonstrating their own work (or lack thereof). Even if they do all the work in a huge hurry the night before, all those little traits of human writing will still be recorded, just with rather more compressed timestamps (a useful tool for coaching them on time management, but that’s another story).
I have actually required my students to work this way for years, long before ChatGPT hove into view, mostly for group work, where it gives a clear record of exactly who did what – and, in some cases, who did nothing at all. It immediately removes the single biggest bugbear about group work: freeloaders. So it’s a versatile salve for different sores. I’ve never had bad feedback about this policy – and my students have always been comfortable complaining when necessary! Meanwhile, all the previous bad feedback about the unfairness of group work has dried up entirely.
It’s also a wonderful time saver for me: checking an activity log is orders of magnitude simpler than the depressing sleuth work required to uncover ghostwriting (human or machine). More time for me to write better feedback and otherwise do my job without lengthy distraction.
There are some technicalities to ensure that this system is fair. Foremost, as above, is to make this requirement unmissably clear (certainly, do not just bury it inside infamously neglected curriculum documents) and explicitly repeat it again and again so that nobody is tripped up. Interim peer review of assessments can help as well, with students accessing each other’s work in Google Drive and giving formative feedback. That’s pedagogically sensible anyway, but it also helps to further embed understanding of this system. And, as with any assessment requirement, it’s important to accommodate exceptions if a student is unable to use a particular resource for reasons of accessibility or other genuine limitations.
When describing this policy, as I pointed out earlier, it’s worth emphasising that this is about enabling students to demonstrate and authenticate their own hard work, and not so much about paranoid peering at them. If all that is attended to, this can really create a fair and transparent system of accountability.
Of course, you don’t have to rely entirely on this. A further useful measure can be to set a traditional reflective personal learning journal or log (if used for group work, noting how they feel they contributed and how they might improve in future). These things can work well together.
Most of all, as long as it really is clear, students know that this is a fair system, put in place to ensure that their effort is recognised and that nobody could achieve the same grades as them by cheating. And ultimately, that’s in everyone’s interests.
Dave Sayers is senior lecturer in the department of language and communication studies at the University of Jyväskylä, Finland.