How ChatGPT can help disrupt assessment overload

By dene.mullen, 19 April 2023
Advances in AI are not necessarily the enemy – in fact, they should prompt long-overdue consideration of assessment types and frequency, says David Carless

Recent developments in generative AI – not least the notorious ChatGPT – offer opportunities to stimulate long-overdue reforms to assessment practices. But these reforms need careful thought and principled implementation within an already hard-pressed sector. To make space for them, now is an opportune moment to tackle assessment overload: too many assessments and too few opportunities for students to revise work iteratively.

Modularised higher education has led to an assessment arms race in which academics inadvertently compete for student attention, using grades as both carrot and stick. Students often end up jumping through hoops rather than achieving sustainable learning, and an inevitable consequence is that if something is not assessed, students won’t do the work.

Meanwhile, end-of-semester bottlenecks, with students juggling multiple deadlines, lead to rushed work or inadvertently encourage students to take shortcuts through copying, plagiarism or outsourcing to humans or bots.

Reducing the assessment burden could signal trust in students as individuals who want to produce worthwhile, original work. Indeed, students can be enlisted as partners in designing their own assessment tasks, so that they produce something meaningful to them.

A strategic reduction in the quantity of assessment would also facilitate a refocusing of priorities on deep understanding rather than mere performance, and would carry the potential to enhance feedback processes.

If we were to tackle assessment overload in these ways, various possibilities would open up. Most significantly, there is potential to revitalise feedback so that it becomes a core part of the learning cycle rather than an adjunct at its end. End-of-semester, product-oriented feedback, which arrives after grades have already been awarded, fails to encourage the iterative loops and spirals typical of productive learning.

So, what are the implications of generative AI for feedback? Given the capabilities of ChatGPT to generate speedy feedback on work in progress, students could be encouraged to document and evidence the process of their learning. Staged assessments with built-in cycles of feedback are valuable in their own right – while countering the risk of one-shot products generated mainly or entirely by ChatGPT. Staged assessment sequences might involve, for example: a one-minute elevator pitch as stage one; a short, annotated bibliography as stage two; drafts shared with peers or chatbots, then revised further, as stage three onwards.

Staged, process-oriented assessments could involve: students trying out various prompts for ChatGPT; evaluating and building on its outputs; learning to detect AI hallucinations and inaccuracies; relating content to other modules or key readings; or personalising content to real-life events in regional or national contexts. These kinds of processes might enable students to evidence how they have added value by curating and adding to AI-generated content. Acknowledgement of AI inputs to student work is expected, but perhaps not in so much detail that audit trails become overwhelming for lecturers.

Assessing process as well as product could be valuable but time-consuming for lecturers, hence the need to reduce the overall assessment burden. With our ever-decreasing attention spans and the prevalence of short messaging platforms, clear and concise written communication is at a premium. Can a standard 3,000-word essay be profitably shortened or rethought entirely?

AI advances also prompt consideration of assessment types. Do universities tend to privilege written forms of assessment at the expense of oral ones? After all, the workplace needs staff who are effective in both oral and written communication.

Oral forms of digital assessment can be engaging, personalised and authentic in preparing for the world of work. Because of their spoken, interactive nature, they also engender increased student accountability and often lead to worthwhile learning outcomes.

Digital oral assessment includes forms such as video presentations, podcasts or vlogs. These are often more appealing to students than conventional written assignments. Whereas speaking in public can be anxiety-inducing, with these recorded forms of presentation students can practise and refine work in the comfort of their own environment.

Again, digitally enabled oral presentations should be concise. If the Three Minute Thesis competition can become a worldwide phenomenon, undergraduates can be encouraged to produce meaningful contributions of short duration. Less can sometimes be more, because concise communication does not equate with dumbing down. In fact, it is often more challenging to prepare a short presentation than a longer one.

A lifelong learning imperative for all of us is honing the capacity to work productively with generative AI. Lecturers and students need to learn together how to use AI constructively, responsibly and ethically, including how to critique and build on its contributions. Its potential to enable new forms of feedback is promising but needs careful preparation, support and staff development.

Educational specialists have been recommending assessment reform for decades but inertia, workloads and conservatism are perennial barriers. Assessment in the generative AI era is likely to be more challenging and time-consuming for staff. Something has to give, and reducing the quantity and length of assessments would be a good starting point.

David Carless is professor of education at the University of Hong Kong.



Comments (5)

Stella Jones-Devitt, 1 year 6 months ago:
Great article, David. We need to consider living with AI successfully and ethically. Reducing assessment loads and bunching, and having staged, more authentic assessments should happen regardless of responding to this technology. Getting time-pressed lecturers to realise they can reduce their assessment load yet provide a better learning experience for their students is one thing; constructing quality systems to be more flexible in meeting these agendas is another. Thanks for sharing these ideas in one space.

ScepticAcademic, 1 year 6 months ago:
Tackling overassessment and rethinking assessment styles – sure, sign me up. A lot of this is happening already. As for: "Staged assessment sequences might involve, for example: a one-minute elevator pitch as stage one; a short, annotated bibliography as stage two" – good luck with that when you have 200-300+ students on a module and up to 40 in each seminar class.

perry.share, 1 year 6 months ago:
It's there in the first paragraph: 'reforms need careful thought and principled implementation within an already hard-pressed sector'. Add in the need to dedicate the required resources. It will be a major challenge, and what it entails has not yet been fully recognised.

kingston123, 1 year 6 months ago:
Thanks for this interesting and timely article. We certainly have to confront AI head on and not hide from it as it will be a generational disruptor, perhaps even more so than the Internet. The concern I have is that this first iteration (the ZX Spectrum stage!) does provide space for students to engage and edit and improve and contextualise etc but the versions in a few years’ time (or less) will perfect the content beyond our ability and we will simply stand back in amazement. So where does that leave us and our engagement with it? There are already versions that can train to sound like specific individuals and produce videos of that individual speaking, but all fake, which has ramifications for oral assessment. It seems that a return to pen and paper exams and face-to-face oral assessment is the obvious option but that’s just an avoidance strategy and we need students to learn techniques for working with AI as it will be embedded into most work environments. It's hard not to be pessimistic though.

delphiapp426, 1 year 6 months ago:
Great article. AI advances will change assessment, grading, and perhaps even the way we should conceptualize teaching. Check out the work at AutomatED - they recently published a piece on AI and grading: https://automated.beehiiv.com/p/professorial-productivity-vol-2-ai-grading-assessment