Will ChatGPT change our definitions of cheating?

By Tom Muir, 2 November 2023
We can’t yet know if we have a full taxonomy of ChatGPT-enhanced mischief, or whether certain uses should be classed as mischief at all, writes Tom Muir

We are talking again about ChatGPT’s potential for student dishonesty. A report in Times Higher Education alerts us to ChatGPT’s capacity to respond to images, meaning that students could, in theory, simply take a picture of an exam paper containing images or diagrams, and ChatGPT would be able to answer the questions. Earlier this year, a BBC story documented uses of ChatGPT by students at Cardiff University, reporting that one student’s ChatGPT-enhanced essay had been awarded a first – the highest grade that student had ever received.

Something else is worth paying attention to here. We are still talking about spotting quite obvious wrongdoing. There is no intellectual difficulty in working out what has happened in these or similar situations: a student has presented work as their own that they did not in fact do. The challenge lies in detecting the dishonesty – but our definitions of dishonesty, cheating and misconduct remain intact. So far, at least.

But we can’t yet know what kinds of cheating ChatGPT might make possible in the future. We can’t yet know if we have a full taxonomy of ChatGPT-enhanced mischief. Perhaps all our definitions will remain intact, or perhaps something new is rumbling down the tracks towards us.

Right now, we might say, any act of academic misconduct falls along a kind of continuum, in part because all such acts are susceptible to the same types of detection (an alert lecturer noticing a change in writing style, plagiarism-detection software). Along this continuum fall the student whose chaotic note-taking leaves them unable to distinguish their own words from those of a source; the student who grabs a single paragraph from an internet source and doesn’t attribute it; and the student who buys an essay from an online essay mill.

These would all be clear examples of students (accidentally or deliberately) passing off someone else’s ideas as their own. But using text from ChatGPT is perhaps not quite the same thing – because ChatGPT is interactive, generative and creative. An internet source exists before and outside a decision to steal from it and is unchanged by the act of stealing from it. But ChatGPT only produces text in response to a prompt; one must interact with it.

So far, we might think that ChatGPT-produced text is clearly on the continuum I was outlining above – that is, a student including ChatGPT text in (for example) an essay would be passing off another’s words or ideas as their own. But the generative, interactive capacities of ChatGPT take us in another direction. A large language model (LLM) such as ChatGPT producing text in response to a prompt can surprise us – and we might very well want it to.

It’s for this reason that Mike Sharples, in a recent THE piece, says that he intends to use ChatGPT to “augment” his thinking. He means, I think, that its generative capabilities might prompt him to create presentations or papers in ways that he would not have done previously. He wants to be surprised: the prompt you give ChatGPT can always generate text you were not expecting.

What this means is that it’s hard to bind or limit an LLM. We might say to students that they may use it up to a point or that certain limited uses of it are legitimate on certain courses – but its responses could still exceed such limits. It could still surprise us.

Let’s try a thought experiment.

Since tools such as spelling and grammar checkers are legitimate, suppose we decide that we can allow students to use ChatGPT to go slightly further and improve the surface-level features of a text. A student might then prompt ChatGPT to check the coherence of a text or to make sure that paragraphs have topic sentences.

Here is where the student might be surprised.

The resulting reorganisation of text is such that a new thought, or a new line of argument, crystallises in it. If we accept that thinking and language are related (and I think we must), then this is a possibility with an appropriately sophisticated LLM. This student might now respond to ChatGPT’s text as though they are themselves being prompted and refine the text further. And then they might involve ChatGPT once more, producing a new text, and so on. We are now talking about a student and a machine co-writing and responding to one another, creating a complex, multi-stranded text braided together from the work of both student and machine.

This would be something different from splicing “chunks” of text written by ChatGPT into an essay.

It might very well be misconduct – but I don’t think it falls on the continuum of misconduct I described above. It is the generative capabilities of LLMs, their capacity to surprise us, that would make such a collaboration with ChatGPT possible. We will need to think carefully about how misconduct is defined when collaboration like this – which may very well be an authentic, meaningful learning process – is possible.

I don’t think we are yet at a point where this thought experiment can become reality. But then, this time last year the conundrums we now face with LLMs were not reality, either.

So, how might we, as educators, respond to these new circumstances? I can think of a couple of ways.

One is that we might need to design modules in which we tell students that they can use ChatGPT to their hearts’ content – it will not be classed as misconduct, but they must document what they are doing. Such an approach would allow us to understand what students are actually doing and how they are incorporating LLMs into their own work habits. We could then start refining our definitions of misconduct in the light of what we find.

Related to this, we might need to prepare for the idea that the use of LLMs could be encouraged in foundation or first year and that students would be expected to decrease their reliance on them as their expertise increased. We could expect text produced by ChatGPT in assignments to be appropriately labelled as such. This would echo some of the things we know about plagiarism, which can be usefully seen as a “stage” immature scholars need to pass through.

Our starting point, in other words, should be understanding how students might benefit from using LLMs at different points in their degrees. From there, we might consider what uses are legitimate and what might be “too much”.

Tom Muir is associate professor of English for academic purposes at OsloMet – Oslo Metropolitan University, Norway.



Comment

cesaregiulio.ardito, 1 year, 1 month ago:
Great piece! I would struggle to frame co-creation with an LLM as "cheating" or "malpractice", as these tools, while new, are now a permanent feature of the world we live in. I agree that we might want to assess students' LLM-free thinking and writing abilities, but then it would be our responsibility to create the conditions for that to happen. Outside of those conditions, as educators we certainly can't shy away from the challenge of teaching proper, critical use of LLMs and generative AI (and first we have to figure out what that is, exactly...).