Much of the focus in higher education on generative AI has been the existential threat it raises with respect to assessment. However, large language models (LLMs) also have constructive applications. At the law school at King’s College London, we wanted to focus on the learning potential, specifically using AI “as tutor” and for feedback. We used as our starting point a 2023 paper by Ethan R. Mollick and Lilach Mollick of the Wharton School at the University of Pennsylvania that sets out seven approaches for using LLM tools for learning.
The experience highlighted benefits (such as time management) and limitations (hallucinations) of artificial intelligence in education, and it left us with a list of suggestions for instructors using AI to personalise learning.
AI as tutor
We run a postgraduate course that trains lawyers for practice. In our commercial law module, the first week’s task is to learn about the meaning of, and to draft, the most ubiquitous commercial document: a non-disclosure agreement (NDA).
- THE podcast: how to use generative AI in your teaching and research
- Resources and advice for using GenAI in higher education
- How to make ChatGPT work as a teaching assistant: a case study in law
Our instructions to the students were to use any LLM with an “as tutor” prompt to learn about NDAs. They then use the same LLM and deploy their own prompting skills to draft an NDA for a fictional client. In class, we compared their draft documents with a standard precedent NDA taken from a curated database used by practising lawyers.
Prompting tips
We gave our students a detailed prompt to assist with the task based on the Wharton article. As part of the teaching session, we discussed with students the core technique for effective AI prompting:
- Instruction: a clear directive telling the LLM exactly what action to take or task to perform
- Constraint: specify any limitations or conditions that must be adhered to while following the instruction (for example: “Do not provide immediate answers to problems but help students generate their own answers by asking leading questions.”)
- Context: provide the background and reason for the instruction, helping the LLM to understand the relevance and focus of the task (“You are an upbeat, encouraging tutor who helps students understand concepts by explaining ideas and asking students questions.”).
A richer learning experience
Overall, this was a positive exercise. Students really valued the real-tutor plenary discourse comparing their LLM drafts with the critical comparator. We were able to address in class hallucination issues in the LLM drafts (hallucinations in law are a big issue) and where the LLM drafts did not adequately address legal risks when compared with the curated NDA.
But the learning exercise was much richer than if the tutor had lectured on how to draft NDAs or set a problem exercise to draft. Students experimented with guided prompting and experienced the innate satisfaction of drafting their first commercial document. They learned about both the benefits and the limitations of LLMs while completing the underlying tasks within the subject domain.
AI for feedback
Students always ask for more feedback at an individual level on their written work. The constraint is tutor time. In our first experiment, we looked at feedback on a formative assessment for our Land Law module, which is a compulsory subject, heavy in content, that some students find challenging.
We uploaded a Land Law problem formative question, a student’s answer and marking guidance (as written by the tutor) to GPT-4. We used a prompt to ask both where the student’s answer met the marking guidance and where their answer could be improved.
The feedback from GPT-4 was very similar to the feedback given to the student by the tutor herself.
However, this approach to using an LLM for feedback is not appropriate for two reasons. First, from a pedagogy point of view, releasing “model answers” or detailed marking guidance carries with it the very real risk that students replicate these answers in future assessments, even in completely different contexts. Second, we want to avoid a digital gap between those with access to subscription-based models and those without (acknowledging that the free latest version allows a limited amount of uploading of documents).
To meet best pedagogy, it is possible to use an LLM but keep the model answer “hidden”. This requires using the OpenAI API. The API contains the LLM that interrogates the student’s answer against the model form answer, which is not accessible by the student. We are working with a developer to ensure that this interface works within our student platform – this is not complex.
Advice for using AI to personalised learning
LLMs should be used as an effective aid to personalise learning. They give students wider opportunities to learn in different ways alongside in-person teaching. You should:
- Consider which classes lend themselves to the use of an LLM for the task assigned, which can be used as a comparator for the in-person teaching session
- Provide detailed prompts to students as part of their materials and explore the specific hallucination issues in class
- Engage developer resources to design feedback using an LLM API that keeps model answers hidden. Studies show that students have an even preference for AI-generated or human-generated feedback.
Michael Butler is director of the Professional Law Institute of the Dickson Poon School of Law at King’s College London.
If you would like advice and insight from academics and university staff delivered direct to your inbox each week, sign up for the Campus newsletter.
comment