Generative AI Guidance for Researchers
Generative AI has tremendous potential to enhance, transform, and/or disrupt research, and it will become increasingly embedded in our work. The potential for innovation and creativity must be balanced by the kind of careful and critical approach that you would apply to any emerging technology. This guidance seeks to support the responsible, appropriate, and informed use of AI tools, and to uphold academic and research integrity.
There is therefore one overriding principle, which applies to all staff, students, and researchers at the University: any use of generative AI tools must be accompanied by critical analysis and oversight on the part of the user.
Further, the University’s position on the use of AI in writing is clear: work that is not your own effort, or that lacks appropriately transparent acknowledgement of the sources or tools used (e.g. work submitted that is the product of Generative AI), does not meet crucial requirements for assessment or academic/research integrity. Please note that updated guidance in line with this statement is available from UKRI.
This page provides general guidance for researchers at the University of Glasgow on the use of Generative AI in their work and research. Where AI tools form part of your research design or methods, part of the toolkit within your discipline, or a subject of your research, your use of them as a researcher should be covered by relevant ethical approval and data protection processes, and should therefore meet the key principle above.
Implementation date | 24 Jan 2024
Last edited | 3 December 2024
Owner | Research Services Directorate
Date of next review | Jan 2025
Definitions/Glossary of useful terms
AI: Artificial Intelligence
Generative AI: a type of AI technology with the ability to generate new content. You, the user, enter a prompt (this can be text, images, designs, music, etc.) and the technology returns a response. This is often shortened to GenAI.
NB: Generative AI cannot generate ‘novel’ content, but it can generate ‘new’ content. In practice, this means it can find new ways to put existing content together to create something new, but it cannot have a truly novel idea of its own.
Some examples of Generative AI tools are ChatGPT, Google Bard, DALL·E, and Copilot.
GenAI tools work on the basis of probabilities and predictions; they do not actually understand you, the user (see the illustrative sketch after this glossary).
LLM: Large Language Model. Generative AI tools are based on LLMs: statistical models trained on very large bodies of text, which predict likely sequences of words rather than looking facts up in a stored database.
Neural Network: a computational model inspired by the structure and workings of the human brain and nervous system.
GPT: Generative Pre-trained Transformer. This is a type of neural network architecture that is ‘pre-trained’ on large amounts of text (making it an LLM) and is able to ‘generate’ new content.
Algorithm: a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer.
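To illustrate what ‘working on probabilities and predictions’ means in practice, the following minimal Python sketch generates a ‘sentence’ by repeatedly sampling the next word from a hand-made probability table. The vocabulary and probabilities here are invented purely for illustration; real LLMs learn distributions over vast vocabularies from enormous training corpora, but the underlying mechanism of predicting a likely next token, rather than understanding meaning, is the same.

```python
import random

# A toy 'language model': for each word, the probabilities of the word
# that follows it. These words and numbers are invented purely for
# illustration; a real LLM learns such statistics from huge text corpora.
NEXT_WORD_PROBS = {
    "the":  {"cat": 0.5, "dog": 0.3, "data": 0.2},
    "cat":  {"sat": 0.7, "ran": 0.3},
    "dog":  {"ran": 0.6, "sat": 0.4},
    "data": {"shows": 1.0},
    "sat":  {"down": 1.0},
    "ran":  {"away": 1.0},
}

def predict_next(word: str) -> str:
    """Sample the next word according to its probability."""
    options = NEXT_WORD_PROBS[word]
    return random.choices(list(options), weights=list(options.values()), k=1)[0]

# Generate text one predicted word at a time, stopping when the current
# word has no known continuation.
sentence = ["the"]
while sentence[-1] in NEXT_WORD_PROBS:
    sentence.append(predict_next(sentence[-1]))

print(" ".join(sentence))  # e.g. "the dog ran away"
```

Note that nothing in this process checks whether the generated sentence is true: the model only follows the probabilities it was given, which is why generative AI outputs can be fluent yet wrong.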
Research and Academic Integrity
We strongly recommend that you treat AI with appropriate caution: AI tools do not know the meaning of what they produce, cannot be critical and evaluative, and are prone to biases, inaccuracies and mistakes.
The University’s position on plagiarism in the University Regulations is: 'the submission or presentation of work, in any form, which is not one's own, without acknowledgement of the sources' (Plagiarism Statement - see section 32.2)
We discuss below some of the ways in which AI may be beneficial in the research and writing process, but it is your responsibility to ensure that all work submitted is a true reflection of your own effort:
As a student, all your submitted work must be of your own creation, your own critical evaluation process, and your own experience. We expect your work to clearly and transparently acknowledge any sources, including AI, that have contributed to the work.
As a researcher, you should ensure that you are appropriately transparent about your use of AI tools. Where there has been 'substantial' use of AI in your research process, it may be appropriate to reference this in some way. This will vary widely depending on how individuals use AI tools in specific instances and/or the conventions in different disciplines. 'Substantial' cannot be simply defined here, but the EU Living Guidelines on the Use of Generative AI in Research provide a starting point for considering this.
The fundamental rule with AI and academic/research integrity is this:
If you make use of AI at any point in your research or writing process, no matter at what stage, you must appropriately and transparently acknowledge the use of that source/platform as you would any other piece of evidence/material in your submission.
We will cover the citation and acknowledgement of AI in a separate section below.
Generative AI tools: Risks and Limitations
Generative AI can and frequently does get things wrong. If you choose to use these tools, the onus is on you, the user, to ensure that the content they produce is accurate and true, that you are using the tools ethically, and that the privacy and integrity of your work are protected.
Key problems include:
- AI gets things wrong.
  - AI will produce incorrect (and sometimes nonsensical) outputs.
  - AI does not know right from wrong and will present all outputs as if they are equally valid and true.
  - The 'garbage in/garbage out' principle holds here: the quality and clarity of an input into an AI tool (e.g. how you frame a question or prompt) is linked to the quality, clarity, and usefulness of the output.
- AI is biased.
  - AI tools reflect and amplify biases and stereotypes.
  - AI is not rational and does not understand the complexities of the information available on the internet. It therefore cannot recognise inaccurate or offensive statements, or assess validity or accuracy.
- AI makes things up.
  - Some AI tools will invent references to texts that do not exist; these fabrications are known as 'hallucinations'. While such references often look authentic, a quick search will reveal that the supposed reference or data does not exist.
  - AI tools are simply predicting the most likely next word each time; think of it like asking your phone to write a sentence by repeatedly clicking the next autocorrect suggestion.
- AI tools are unreliable.
  - AI tools cannot access all the necessary data and information and are, in most cases, not 'up to date' or 'trained' on the most recent information available.
  - AI tools may provide different answers to the same or similar inputs.
  - AI tools cannot access anything behind a paywall. For your academic work, the University Library subscribes to a huge collection of academic texts; AI cannot access these paid resources, so it cannot reproduce any information from them.
  - Even if an AI tool had a breadth of input and were reasonably up to date, it would be impossible to fully assess the completeness of any output or ensure that the most relevant or reliable sources were used.
- Privacy concerns.
  - Many tools incorporate user inputs into how AI models are 'trained' to respond, so researchers should exercise great care before putting their data or work into these tools.
  - Exposing your data, ideas, or research (or those of others, without permission) to an AI tool may, in effect, put the material into the public domain, compromise confidentiality, or allow the work to be used without attribution, accountability, context, or completeness. While this is a risk in making any research publicly available, the lack of attribution or association with the creator or owner of the work increases the risk of misuse or misunderstanding and potentially complicates intellectual property ownership.
  - Whether you are using a free tool or working under a subscription or licence, please review the terms, conditions, and privacy statements very carefully.
Using AI tools to check your writing
We would strongly discourage the use of AI tools to check your writing. As noted above, generative AI tools frequently get things wrong. Furthermore, the University strongly discourages the use of proofreaders (which includes essay writing companies), as it is difficult to discern the boundary between 'proofreading' and 'writing'. It is important to note that using, or having used, generative AI tools to check your writing can fall under the banner of academic misconduct and plagiarism.
The Avoiding Academic Misconduct policy makes it clear in the first point that you must not use AI tools to prepare your work:
‘Make sure all work you submit – essays, lab reports, presentations, exam answers, etc – is entirely your own work. You must not copy, translate, or lightly edit, someone else’s work, you must not have any other person, service or AI tool prepare your work, and you must not prepare your work with another person (except in specific assignments where it is clearly marked as a group effort).’
This is clarified by a further point in the policy, which states that ‘Getting someone else to do the work for you, whether this is a friend, family member or commercial service, including services offering “proof-reading for a fee”, will be considered misconduct.’ An important distinction to be made here is that what is not permitted is someone or something else contributing substantively to what you present for assessment. Some additional context for this clarification for PGRs (due to the scope and scale of the assessed work) is found in the PGR Code of Practice in Section 10:
- Proof-reading one’s own work is an important writing skill and students are therefore encouraged to do this. However, there may be times that students would consider engaging the services of a proofreader. While the use of a proofreader is broadly permitted, students and supervisors should be clear about what a proofreader can and cannot do.
- Students have sole responsibility for the work they submit and therefore should review very carefully any changes suggested by a proofreader.
- Proofreaders may assist with the identification of typographical, spelling and punctuation errors; formatting and layout errors such as page numbering or line spacing; and/or grammatical and syntactical errors. Proofreaders may not add, edit, re-write, rearrange, or restructure content; alter the content or meaning of the work; undertake fact-checking or data checking or correction; undertake translation of any work into English; and/or edit content so as to comply with word limits.
We do not recommend using generative AI tools to proofread your work, as it is easy for such tools to overstep the boundaries of permitted proofreading set out above. You also need to be careful to protect your work and your ideas.
If you need someone to proofread your work for written English (and not content), the University has a peer proofreading service. Please note that this service is open to all students, so PGRs wishing to engage support for a full thesis need to allow significant time for this.
For further information about what’s available to you as a student with English as a second language (ESL), get in touch with the English for Academic Study team.
PGRs wishing to contract with an external proofreader should ensure that they are engaging an experienced professional who understands the boundaries of what is permitted. Some professional guidance and a contacts directory are available through the Chartered Institute of Editing and Proofreading.
How to cite/acknowledge usage of generative AI tools in your work
Citation
It is important to understand that AI tools cannot be the author of a work. The tool cannot produce original ideas or take any responsibility for the outputs.
However, the current consensus on how to reference any use of AI is to treat it as if it were private correspondence. This consensus continues to evolve, so please check for up-to-date guidance on your preferred citation style (e.g. APA or MLA).
The reasons for this are:
- Like private correspondence, the prompts you enter into AI and the responses you receive from it are unique to you.
- Like private correspondence, AI is a problematic source, as it cannot be easily replicated and verified.
- Like private correspondence, each prompt-and-response session with AI is time-bound, specific, and unique to that moment in time.
The specific rules for many referencing styles are still to be finalised, but the general rules are:
- Name the AI platform used (e.g., OpenAI ChatGPT or Google Bard)
- Include details on the date of use of AI
- Ideally, include details on the prompts input (and, if possible, the responses received)
- Include details of the person who input the prompts
- Keep records of the responses output by AI, even if you do not include these in the submission itself
- Be clear, open and transparent in your use of AI
- Do not present any of the responses from AI as your own. This constitutes academic misconduct, which could lead to disciplinary measures being taken against you.
An example: citing AI in Harvard
The information required for Harvard is:
- Name of AI tool (e.g., OpenAI ChatGPT or Google Bard)
- Date (day, month, and year) when you entered the prompt(s) and received the response(s)
- Receiver of communication (the person who entered the prompt(s) and received the response(s); this would be your name if you used AI)
Your in-text citation would look like this:
'The use of AI in academic writing presents challenges for how to correctly and accurately cite (OpenAI ChatGPT, 2024)'
Your corresponding reference list would look like this:
OpenAI ChatGPT. 2024. ChatGPT Response to Caitlin Diver, 9 January 2024.
For styles other than Harvard referencing, look for 'personal correspondence' as a source type in the relevant guide from the UofG Library list of referencing styles. Please note that these guides may change as the academic consensus evolves around citing this new type of source, and as new AI technologies continue to emerge. It would be advisable to check for updates on a regular basis.
Acknowledgement, rather than citation
It may be more appropriate to acknowledge the use of AI tools rather than to cite them, e.g. depending on the guidance for submitting your assessment or the guidance provided by your publisher.
A basic acknowledgement should include:
- Name and version of the generative AI system used, e.g. ChatGPT-3.5
- Company that made the AI system, e.g. OpenAI
- URL of the AI system
- Brief description of how the tool was used
- Date the content/output was generated
For example:
I acknowledge the use of ChatGPT 3.5 (OpenAI, https://chat.openai.com) as a tool to proofread the final version of this work.
You may also wish, depending on the circumstances, to include the prompts that were used, copies of the outputs that were generated, or a description of how you used or edited the generated content.
Research specific tools
Some tools describe and market themselves specifically as ‘AI Research Assistants’. A non-exhaustive list of these tools includes Elicit, Scite, and Scholarcy. These tools usually emphasise their time-saving benefits for researchers. If you choose to use them, you must understand their limitations and exercise caution. While these tools can carry out searches and superficially summarise their findings, they cannot evaluate studies for you. You must still read each study carefully and come to your own conclusions about its merits. You should also take care to check search results, check the databases used by the tools, and take the time to look at each tool’s own guidelines on proper usage.
Publishing
COPE Position Statement: Authorship and AI tools
International Publishers Association: IPA Work on Artificial Intelligence
Publishers Association: People Plus Machines: The Role of Artificial Intelligence in Publishing
Royal Society: AI and the Future of Scholarly Publishing (part 1 / part 2)
Springer Nature: AI Tools to Protect Research Integrity
Quick guidance for supervisors
Do | Don’t
…ensure that PGRs know they can use supervisory meetings to ask questions about AI tools they encounter and their reliability (and suitability) for research | …immediately close down any discussion of AI. Use of AI might point to specific difficulties (analysing articles, structuring work, writing in a second language) which could then be appropriately discussed and addressed in supervisory meetings
…emphasise that PGRs should question the validity and accuracy of any output, data, results, and information received from AI tools. Make clear that these tools cannot replace their own expertise and insight | …assume that PGRs fully understand the problems in the output, data, results, and information received from these tools. A PGR who is having difficulty in a specific area may not clearly understand why the output from AI tools is poor or unreliable
…remind PGRs that all submitted drafts should be the result of their own thought processes, workings, analysis, and critique. Ensure that they understand what skills they are expected to demonstrate for assessment | …automatically assume that PGRs understand that the way in which they carry out their research is as valuable as the eventual output of the project
…keep up to date with the institution’s guidelines and information around academic integrity and AI: this advice will be updated as appropriate | …forget to remind PGRs to keep up to date with both institutional guidelines and journal regulations, which might differ in subtle but important ways
…be aware of how research AI tools are advertised: they will often promise time-management and efficiency benefits. Open discussion of expectations around time management and work rate should begin early in the supervisory relationship to avoid PGRs resorting to these tools | …forget to remind PGRs that they should not upload any of their work (data, results, discussion, reports, etc.) into any AI tool. AI tools should not be used to conduct research or investigations into a topic
Internal Resources
APG: Generative AI software and implications for learning, teaching and assessment
Coursera: Generative AI for Students: Ethics & Academic Integrity (new MOOC from SLD)
Learning Innovation Support Unit: GenAI@Glasgow
Library: Referencing (guidance on specific referencing styles)
MyGlasgow / Learning & Teaching / AI Guidance
Student Learning Development - AI (sections for staff and students)
Student Learning Development - Digital Literacy Resources
External Resources
Cancer Research UK Guidance for Researchers on the Use of Generative AI
Cancer Research UK: Research with integrity - what you need to know about generative AI
EU Guidance on the Responsible Use of Generative AI in Research
Funders joint statement: use of generative AI tools in funding applications and assessment
Information Commissioner's Office (ICO): AI and Data Protection
JISC: AI in tertiary education
Policy on the use of generative AI tools in CRUK funding applications
QAA: Quality Compass - Navigating the complexities of generative AI in HE
Royal Society: Why and how to embrace AI such as ChatGPT in your academic life
Royal Society: Science in the Age of AI
Russell Group: Principles on the Use of Generative AI Tools in Education
UKCORI: Research Integrity in the UK: Annual Statement 2024 (with specific content on AI)
UKRI: Use of GenAI in application preparation and assessment (please note the update to this guidance on 3 December 2024)
UKRIO: AI in Research Resources
UNESCO: Guidance for generative AI in education and research
Updates to Guidance
15 April 2024
- addition of 'resources' section with links
- addition of this 'updates' section
- updated 'intro' section to clarify that this guidance should not impede research which specifically encompasses AI as a subject, tool, or method
- updated section on 'Using AI tools to check your writing' to clarify the use of proof-readers for PGRs
- updated section on 'Generative AI tools and their limitations' to 'Generative AI tools: Risks and Limitations' and updated text to add clarity about possible risks
- updated section on citation to include reference to acknowledgement of use
20 June 2024 - additional resources added to 'resources' section.
30 July 2024 - creation of 'publishing' accordion, moved relevant resources, added new resources
24 September 2024
- updated resources to include new UKRI guidance on using GenAI in application preparation and assessment; checked and updated links in section
- split resources into 'internal' and 'external' sections, additional internal links added to reflect work on L&T practices in APG/SLD/LISU
3 December 2024
- guidance updated to reflect updated UKRI guidance; this is noted in the Intro section and in the External Resources section.