Writing already involves AI: predictive text, familiar from our phones and emails, is one example. Humans have collaborated with technology to write ever since sticks were first used to draw in sand or on cave walls.
Ingenuity and creativity mean that this technology is constantly changing. Most recently, the advent of AI writers (software that uses artificial intelligence to generate text) has created a lot of excitement and concern.
How do AI writers work? You simply type what you’d like to write about into an onscreen box, and in moments the AI writer produces almost human-quality text on your topic. AI writers can also take an existing piece of text and “spin” it to produce multiple different versions.
In higher education, worries about academic integrity have clouded exploration of the potentials of AI writing. Yet for people in many careers already, working effectively with AI writers is an essential part of their everyday responsibilities.
Students today need to be prepared for a future in which writing with AI is already becoming essential. Just as word processor functions such as spelling and grammar checks have become accepted and integrated into writing practices, so too will text generators.
Journalists, administrators, content creators, academics, poets and many others are using AI writing in many creative and efficient ways. The problems start when AI writers are not attributed for their input. When students submit AI-authored work as their own, this is a breach of academic integrity.
Yet the lines between human and machine have become blurry. We readily use strings of predictive text in texts and emails, without citing the program or algorithm that produced the words.
So how can students use AI writers in higher education? The following suggestions allow for academic integrity and for the development of higher-level thinking skills.
1. Use AI writers as researchers. They can research a topic exhaustively in seconds and compile text for review, along with references for students to follow up. This material can then inform original and carefully referenced student writing.
2. Use AI writers to produce text on a given topic for critique. Design assessment tasks that involve this efficient use of AI writers, then critical annotation of the text that is produced.
3. Use different AI writers to produce different versions of text on the same topic, to compare and evaluate.
4. Use and attribute AI writers for routine text, for example blog content. Use judgement to work out where and why AI text, human text or hybrid text is appropriate, and give accounts of this thinking.
5. Use and attribute AI writers for creative text, for example poetry. Google’s Verse by Verse requires the user to input a first line, then writes the rest of the poem or provides suggestions based on the work of famous poet muses. This is just one of countless ways that AI can make interventions in creative processes. Students can research the multiple programs and algorithms on offer.
6. Explore and evaluate the different kinds of AI-based content creators that are appropriate for your discipline.
7. Research and establish the specific affordances of AI-based content generators for your discipline. For example, how might it be useful to be able to produce text in multiple languages, in seconds? Or create text optimised for search engines?
8. Explore different ways AI writers and their input can be acknowledged and attributed ethically and appropriately in your discipline. Model effective note-making and record-keeping. Use formative assessment that explicitly involves discussion of the role of AI in given tasks. Discuss how AI could lead to various forms of plagiarism, and how to avoid this.
All the above require specific skills in search optimisation, evaluation and editing. Then there’s speculative work: students can create design documentation for AI text and content generators of the future for their own disciplines. What kind of innovative AI could support a landscape gardener? A user experience designer? An accountant? A doctor?
All the suggestions above demand the development of a critical awareness of the normative and excluding potentials of algorithmic content generation. This is part of the reason why these tools must be acknowledged and, indeed, embraced in higher education. Pretending that they do not exist, or banning them outright, will neither prevent their use nor improve it. Some of the key critical questions to ask about any AI text generator are:
- What was the body of material on which this AI was trained? In other words, what has this AI read and absorbed, to make its “assumptions” of what strings of words make “sense”?
- Who, and what, has been excluded from this body of material, and therefore, potentially, the text generated?
- What assumptions, biases and injustices are embedded in this material, and therefore, potentially, in the text generated?
Socially responsible engagement with AI text generators needs to be both creative and critical. The opportunity to experiment with AI – while always being aware that users are themselves training AI in the process – is a vital component of contemporary higher education.
Lucinda McKnight is a senior lecturer in pedagogy and curriculum at Deakin University and an Australian Research Council DECRA Fellow.