In an artificially intelligent age, frame higher education around a new kind of thinking

9 May 2024
One of the helpful by-products emerging from the advent of AI is that we are beginning to reflect more critically on the way we think, writes David Holland

Since John Dewey popularised the educational ideal of critical thinking more than a century ago, the concept has become fundamental to the perceived reputation, value and quality of higher education – a core strand of its DNA, if you will. It follows, then, that one of the helpful by-products emerging from the advent of AI is that we are beginning to reflect more critically on the way we think.

Models of critical thinking often allude to well-known hierarchies in which lower levels of processing such as memorisation, description and comprehension give way to supposedly higher faculties such as analysis and synthesis. While it’s inevitable that large language models such as OpenAI’s ChatGPT are more efficient at lower-level processing, how do they fare when it comes to the superior faculties inherent in critical thought? Tentative evidence suggests not that well, which compounds the general impression that large language models are superficially impressive, not least because this is what they are designed to do: give the outward appearance of articulate, learned thought. They merely reflect the most popular language choices in their training data, which is not confined to academic work. That said, there is no reason why machine learning cannot be focused on the construction of argument, as the ARG-tech project based at the University of Dundee shows. It is merely a matter of time and development before artificial approaches begin to infiltrate the higher-level thinking that might be regarded as a human preserve.

All this throws the spotlight back on to the nature of critical thought and, by extension, what higher education is and should be for. There is sustained confusion about criticality, even among those who research it. While there are expected areas of consensus, we are still faced with an incredibly abstract, broad and unwieldy spectrum of higher-level thinkings (plural intended). This should not come as a surprise given the amount of heavy lifting the concept is asked to perform across so many disciplinary contexts and its idealised status as a cypher for quality and value in higher education. Such a perfect storm of complexity, vagueness and high stakes has led to predictable alienation in the teaching space and what has been called a “paradox of importance v use”. On a human, motivational level this makes good sense: we have every incentive to pay extended lip service to critical thinking while avoiding the difficulties of implementing it.

There is a sharp parallel here with the way that marking and external examining practice is so under-researched. It’ll be too late to get the toothpaste back in the tube if it transpires that criticality and, by extension, academic standards are not as uniformly high as we would like to tell graduate employers and open day crowds. It’s also a dilemma that is increasingly being taken out of our hands as the standards and relevance of higher qualifications are subject to wider questioning. What scientist, critical thinker or financier would accept the proposition that standards are high merely because of traditional perceptions, without due falsification or even testing of that proposition? As the Sicilian novelist de Lampedusa foretold: “If we want things to stay the same, things are going to have to change.”

How can we respond to this challenge at a teaching and curriculum level? A good starting point would be the radical and evidence-based intervention of bypassing the term “critical thinking” entirely. Instead, build dimensional frameworks around more unitary, tractable and demonstrable thinking concepts such as “analysis” or “evaluation”. “Toolkits” such as the one we have developed here enable students to evaluate their own criticality as well as their peers’ thinking. There are many ways to apply them, for example by ranking the quality of different arguments that students are presented with. This work can pre-empt and inform the assessment rubrics used in marking, thus raising students’ assessment and feedback literacies. One way to bypass student anxieties around providing critical feedback on their peers’ work is to use AI-generated material as a stimulus for critique. To engage in such activities, students require opportunities, through critical reflection, to develop the kinds of literacies, beliefs, values and expectations that underpin them. This stream of meta-cognitive work can be embedded within the assessment diet, for example in portfolio compilation or the inclusion of reflective components.

To counter the way that differing language and expectations obscure criticality, use mapping exercises to provide students with a roadmap of the critical landscape of their course and to help teaching teams co-ordinate their efforts at programme level, avoiding redundancy and promoting the vertical development of critical skills. This open and constructive discussion of criticality can be as intimidating for teachers as for their students, not least because it is embedded in stark power relationships (who gets to judge what is deemed “critical”?). The sharp end of this practice is giving full rein to the social calibration activities that raise the quality of marking and feedback practice. Departmental and faculty “critical champions” may also provide leadership that engages in cross-campus and inter-institutional dialogue. Redrawing the map of criticality in these ways will help ensure that AI is the handmaiden and catalyst to our thinking rather than its substitute.

David Holland is a lecturer in psychology at the University of East Anglia.

