Developing a GenAI policy for research and innovation

8 October 2024
Establishing a framework to guide AI use in research is vital for ensuring institutions are and remain fully compliant

Research integrity policies, procedures and guidelines should provide a framework that supports the highest standards of personal conduct in research among staff and students. To that end, we created a new policy for the use of GenAI tools in research and innovation at the University of East Anglia (UEA) earlier this year. The policy supports and protects researchers who use GenAI and aims to ensure the university is fully compliant in this developing area.

The policy represents a truly collaborative effort involving many colleagues. Below, we share our experience of developing it, focusing on eight key areas.

Pooling knowledge across the university

When thinking about AI, it is first necessary to identify what is meant by the term and where AI intersects with the university’s activities. To lead this work, we established an AI working group in the summer of 2023 comprising more than 20 specialists from across the university, representing the breadth of roles in areas affected by AI: teaching, research, IT, data protection, governance, research integrity and ethics, authorship and data repositories, intellectual property, student (undergraduate and postgraduate) support, library facilities and training.

The working group began by brainstorming the areas where AI comes into play and articulating the opportunities, challenges, risk level and potential mitigations for each. This resulted in three themes: teaching, research and security, each taken forward by a newly created subgroup, since different contexts may call for different approaches and personnel.

Establishing the university’s position

AI is here to stay and will become increasingly important to manage and monitor. Recognising this, we wanted to establish policies and guidance that would support researchers in their use of AI, but we knew we had to consider carefully how the university could promote awareness (mainly via online and face-to-face training) and support and protect its staff and students without restricting research or academic collaboration.

Agreeing on the terminology

The vast area of AI can feel overwhelming, and there is a lot of ambiguity around its terminology and definitions. We found that many people within the university were using conflicting terms when talking about AI or did not understand the difference between AI and generative AI. Providing clear definitions of the terminology greatly helped our discussions around policy creation.

AI is defined by UKRI as “technologies that aim to reproduce or surpass abilities (in computational systems) that would require intelligence if humans were to perform them”. The UEA focus for this policy is on a subset called generative AI and our chosen definition is “any type of artificial intelligence system that identifies patterns and structures in data/information/material and generates content that resembles human-created content”. This content includes audio, code, images, text, simulations and videos.

Becoming the expert 

The research subgroup was charged with drafting the policy. It began by gathering and assimilating relevant information: consulting researchers, IT and data protection services and external colleagues; reading articles on websites such as those of Cancer Research UK and the Information Commissioner’s Office; and drawing on recommended resources, such as those from the UK Research Integrity Office. Members also attended external workshops with peers, such as “AI and research”, run by the Association of Research Managers and Administrators, and the “AI and ethics review” round-table event for Westley Group Universities.

The group identified the University Research Ethics Committee as the lead academic body to inform the subgroup and direct the recommendations within the policy, for example, on the use of personal data in a GenAI tool and on when a university ethics review would be required. The drafting of the policy was also informed by the university's Generative AI Policy for Teaching and Learning, approved in November 2023.

Populating the policy

Our policy is quite lengthy (16 pages) and some may choose a more light-touch approach, but we felt it was important to be comprehensive. The policy includes background information and identifies the potential risks and the legal, ethical and integrity considerations, such as cybersecurity, data protection, ethics and safeguarding, publishing and data management, intellectual property protection and insurance, so that our researchers can fully understand the concerns around this new high-risk technology and exercise caution. It also signposts staff to further information and advice, enabling researchers to locate everything in one place. In addition, we distilled key information on the security risks, the legal aspects of data protection and intellectual property protection, and other considerations such as biases and inadvertent data linkages when employing GenAI tools into a University Ethics Guidance Note, which provides more detail on the ethical considerations and societal risks of GenAI in research and enhances our suite of more than 30 ethics guidance notes.

Refining the university's current processes

It is important for a new policy to be incorporated into the university’s processes. This was a challenge for us, particularly because of the swift development of the policy. The main processes requiring adaptation were: i) the ethics review process; ii) the information system approval process for an unsupported tool; and iii) the data protection review process.

The research subgroup worked with ethics, cybersecurity and data protection experts to support these process changes in parallel with the policy development. We designed a new internal flow chart clarifying the key governance steps that support the policy and set up a new GenAI review group to support the implementation of these steps.

Finalising and approving the policy

To ensure agreement across the university, the policy underwent an extended review and approval journey through the university’s research culture group, research executive, innovation executive and doctoral college executive, ahead of its endorsement by the Senate at its meeting in February 2024. At each meeting, we presented key points, such as the aims of the policy, the definitions used, a list of potential risks and the proposed governance and ethics framework for approving the use of a GenAI tool, and we signposted further support within UEA. This stimulated good discussion and highlighted areas for attention before the next meeting. We also shared the slides beyond the meetings to promote wider awareness.

Going live

In spring 2024, we launched the policy. Launching at such an early stage in the collective understanding of GenAI means treating the policy as a living document that evolves as we develop knowledge and expertise in how GenAI interacts with research. Accordingly, the new policy will be reviewed regularly during its first 12 months as we work to educate researchers and raise awareness.

Having a robust GenAI policy ensures clear and consistent guidance for the responsible use of the technology. We hope that our experience provides a blueprint for others embarking on a similar journey.

Helen Brownlee is research integrity manager and Tracy Moulton is contracts manager at the University of East Anglia.

