Social media advertisements promoting AI tools seem to be proliferating. Many offer students quick solutions for their academic writing needs; others provide services such as answering multiple-choice quizzes and finding research articles. Although some marketing messages suggest questionable uses of AI, implying that students might pass off AI-generated work as their own, this doesn’t mean all AI use is negative.
For students who struggle with writing or who are still developing their skills, these tools can be valuable scaffolds. Even expert writers can benefit from using AI to handle grunt work such as finding relevant articles, gaining an overview of the field and brainstorming ideas.
The issue of academic integrity is increasingly complex. Like Goldilocks in the fairy story, we must determine when AI use is “too much” versus “just right”.
For a start, AI adds a new dimension to the concept of plagiarism. How do we ensure students can use these tools to support their learning without setting them up for later accusations of academic misconduct? Is using AI the same as students passing off the work of others as their own?
The issue of ownership is at the core of academic integrity and AI
As we navigate the issue of AI and integrity, the solutions to how we safeguard the value of our assessments are not black and white. If I write the essay but use AI to enhance its readability, is it still my work? What if the AI generates ideas that I then integrate with my own thoughts? Does the origin of the ideas matter as long as I’m validating, reworking and adding to the AI output? If there is any consensus among academics on these issues, it is that we are moving towards a world where each assessment might have its own guidelines on the use of AI. One assessment, such as an interactive oral, may require that students use no AI support at all, while another may expect students to use a particular AI toolset in recommended ways. What is appropriate depends on the focus of the assessment.
So, our focus must shift. Just because a student uses a tool to help them doesn’t mean there is no learning. When we fail to define the fine line for AI use in writing, editing and proofreading, we set our students up for allegations of academic misconduct. Ownership in the era of AI instead means learning to leverage, and be supported by, AI.
In my assessments, I allow students to use AI, with the caveat that the work must still be their own. But how can I truly know whether the work is theirs? What does “just right” look like for students when ownership in the era of AI is less and less clear?
The answer to AI use cannot simply be a ban
Some educators argue that the risk of determining what is “just right” is too high – that in the era of AI, we can’t have academic integrity, and to counteract this, we need to ban AI.
However, how can we do this when AI is embedded in nearly every tool students use? We would need to revert to summative assessments in controlled, invigilated environments. Although this may work in some contexts, a pullback to tried-and-true measures raises more issues than it resolves. There are strong pedagogical reasons to adopt approaches that focus on assessment for learning rather than assessment of learning. If we revert to traditional exam conditions, are we protecting integrity or clinging to outdated assessment methods?
Refocusing assessment is also a challenge
So, if banning is not a realistic option, how can we ensure academic integrity and that students’ use of AI is “just right”? However we reshape our assessments, if AI is part of the learning process, we are stuck with the question of where students’ work ends and AI’s begins. This will become a more pressing issue as AI output becomes undetectable and outperforms most students. AI writing detectors can highlight areas of possible AI involvement, but they cannot provide proof of use.
When we acknowledge that AI is now part of the assessment process, we move away from looking for AI writing to the murkier (but potentially more beneficial) focus on learning. I break assessments down into smaller parts, allowing students to receive feedback from me, their peers and even AI. The assessments themselves are more complex, requiring students to incorporate their own context and create more than a standard essay, often involving AI in the process. I also integrate methods to validate students’ understanding, such as presentations in which they must discuss ideas and substantiate their claims. Students may initially find these approaches unfamiliar, but with careful scaffolding they can lead to more meaningful learning.
In my teaching, I try to be deliberate in my inclusion of AI. My assessments are designed to help students use AI where it supports learning, and my marking is focused on the “human” input. For example, in my postgraduate course, although AI is reasonably good at summarising research papers, its outputs often lack nuance and context. So, for their assessment, I guide the students to use these summaries as a starting point for their review; they present the AI summaries, then evaluate and critique them. This requires them to read the papers alongside the summaries. In this context, “just right” is clearly defined and focused on moving them towards deeper engagement with the research.
Will this prevent students from using AI to create convincing assessments with little input of their own? Maybe not entirely, but I believe this approach better prepares them for the realities of learning with, and using, AI responsibly. Without suitable guidance and support, many students use AI covertly. This often leads them to use it inappropriately, undermining their own learning and potentially compromising their data privacy and the intellectual property of others. Only by being open with students and helping them to use AI in their learning can we address these issues.
How this looks from assessment to assessment and year to year will evolve. As these tools themselves change, it will become easier for students to use AI “too much”. We need to consider what is “just right” within the context of our assessment at this moment in time. We must move away from black-and-white notions of AI, and superficial labels of good or bad, towards a more nuanced and continuously evolving process of redesign.
Kathryn MacCallum is an associate professor of digital education futures in the School of Educational Studies and Leadership at the University of Canterbury, New Zealand.