The AI tech giant OpenAI recently faced a storm of controversy when it released a voice tool that sounded suspiciously like the actor Scarlett Johansson, after she had declined to provide the voice herself. Although records now show this wasn’t a deepfake but rather a voice actor hired for the role, the episode raises critical questions about how our images and voices can be replicated without our knowledge or permission.
What would you say if your university asked to create a digital avatar of you to deliver your lectures? What if this were a condition of your employment? Or what if a student shared a video of you saying something you never said? These are questions we will all face as this technology matures and spreads through society.
In tackling this topic, the first thing to understand is the difference between synthetic media and deepfakes. Synthetic media refers to AI-generated images, audio or video that do not depict any specific person or institution. Deepfakes, by contrast, are fabricated media depicting a real, identifiable individual.
The underlying technology is a form of Generative AI (GenAI) that uses Generative Adversarial Networks (GANs) to mimic voices, facial expressions and lip movements. Deepfakes have been around for a while, but the technology is now maturing fast. Microsoft recently demonstrated an application called VASA-1, which can create a talking head from a single image and a short voice clip, complete with natural head movements and facial expressions, although it has not released it for public use. It won’t be long before other tech companies catch up, and applications that create less sophisticated deepfakes quickly and simply are already available online.
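For readers who want a concrete sense of what "adversarial" means here, the sketch below is a minimal, purely illustrative GAN training loop written in Python with PyTorch on toy one-dimensional data. Real deepfake systems are vastly larger, operate on audio and video, and may use other architectures entirely; every name and number below is an illustrative assumption, not the code behind any particular product.

```python
# Minimal illustrative GAN sketch: a generator and a discriminator trained
# against each other on toy 1-D data. Purely for intuition, not a deepfake tool.
import torch
import torch.nn as nn

# Generator: turns random noise into a fake "sample".
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
# Discriminator: scores how likely a sample is to be real (0 = fake, 1 = real).
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    # "Real" data: samples from a target distribution (a stand-in for genuine footage).
    real = torch.randn(64, 1) * 0.5 + 3.0
    fake = G(torch.randn(64, 8))

    # 1) Train the discriminator to tell real from fake.
    opt_D.zero_grad()
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_D.step()

    # 2) Train the generator to fool the discriminator.
    opt_G.zero_grad()
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_G.step()
```

The key point is the tug of war: the discriminator learns to spot fakes, the generator learns to beat it, and after enough rounds the fakes become hard to distinguish from the real thing.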
Many governments, including those of the UK, South Korea, China and Canada, as well as the EU, are making certain forms of deepfake illegal. However, while legal frameworks are essential for dealing with the risks of this technology, there is currently little focus on what it means for higher education. Deepfakes are sneaking up on us while our sector is preoccupied with other concerns.
What’s more, we are putting online the very material that can be used to create deepfakes of us. We upload videos, images and audio clips of ourselves to social networks and other channels, and this publicly available data may leave us vulnerable to impersonation.
In fact, one app we tested required just two minutes of footage to create a deepfake that fooled even close friends and relatives. This isn’t just on the horizon; it’s already here.
For educational institutions, deepfakes pose a particular set of risks that we need to consider and plan for, both at the level of individual educators and at the policy level. Educators hold crucial responsibilities: preparing students for the future and safeguarding the academic integrity and reputation of our institutions. Deepfakes enable damaging new forms of cyberbullying and academic dishonesty, and threaten our credibility.
The potential effects of deepfakes on higher education
Deepfakes may be used by bullies to create distressing and humiliating content featuring students or staff, including non-consensual pornography. Such cases can severely affect young people in particular, and the realistic nature of deepfakes makes it difficult for victims to prove the content is fake, exacerbating their suffering. This form of abuse has the potential to be far more distressing than traditional cyberbullying.
Deepfakes could cause other problems for schools and universities, too. Fake recommendation videos from high-profile individuals may begin to plague admissions applications, and deepfake technologies may be used to fabricate research data. Imagine a scenario in which a student submits a deepfake video purporting to be a recommendation from a well-known academic or senior professor. The ability to create such convincing fabrications undermines the integrity of educational institutions, devalues the achievements of genuine students and leads to inequity.
Furthermore, the spread of deepfake misinformation can severely damage the reputation of a school or university. A single incident, such as a fabricated video of a university leader making inappropriate remarks, can erode public trust and tarnish the institution's image. In fact, one disgruntled member of staff in the US created a deepfake of a school principal that appeared to show them making racist remarks. Incidents such as these could lead to decreased enrolment, loss of funding and a damaged reputation that takes years to rebuild, as well as sowing divisions among students, staff and stakeholders.
How to take action
There are a few things educational institutions need to do to prepare for the inevitable deepfake incident, which we discuss in more detail in our recent preprint. The first and most straightforward is to update cyberbullying policies to address explicitly the creation and spread of deepfakes. Students and parents need clear definitions, consequences for violations and mechanisms for reporting and addressing incidents. It is crucial that these policies are communicated effectively to all students and staff to ensure widespread awareness and understanding.
Second, raising awareness about deepfakes is crucial. Educational programmes should teach students and staff how to recognise AI-generated content and to approach any video they come across critically. Workshops and training sessions can help build digital literacy and resilience against misinformation. Integrating media literacy into the curriculum, for example, can equip students with the skills needed to analyse digital content critically and identify potential deepfakes.
Schools and universities should also develop comprehensive crisis management plans so that they can respond swiftly to deepfake incidents. These plans should include strategies for ongoing crisis communication with stakeholders and for restoring trust. Proactive measures, such as warning likely targets about deepfake risks, can also be beneficial. A university could, for instance, establish a rapid response team trained to handle deepfake incidents and provide a coordinated, effective response.
Finally, as educators, we need to recognise that while these technologies carry risks, they also offer potential benefits. Deepfake technology could be used to create immersive learning experiences, to let students hold simulated conversations with historical figures, or to generate compelling lectures from nothing more than notes and a photo.
At the same time, we should adopt a critical-first approach and be prepared for the havoc that could arise now that anyone can be shown doing or saying anything with a few clicks. Balancing innovation with vigilance will be key to ensuring that AI and synthetic media enhance rather than undermine the educational experience.
Jasper Roe is the head of the department of English and lecturer in social sciences at James Cook University, Singapore, and Mike Perkins is head of the Centre for Research and Innovation at British University Vietnam.