This guide offers support on the responsible use of generative AI tools. It introduces key considerations such as accuracy, bias, and critical evaluation, and highlights the importance of academic integrity, independent judgement, and adherence to relevant university policies.
While generative AI can assist with certain stages of the library search process, such as generating ideas or creating and refining search strategies, it should not be used as a substitute for library databases when locating quality peer-reviewed, scholarly sources.
Students should always check any specific module advice or guidelines on the use of generative AI tools.
Artificial Intelligence (AI) refers to systems that perform tasks simulating human intelligence, such as analysing data, optimising tasks, and making predictions. Many current AI tools are powered by machine learning models, including sophisticated large language models (LLMs), that mimic aspects of the learning and decision-making processes of the human brain.
Generative AI (Gen AI) is a subset of AI that creates new content, rather than only analysing or classifying existing data.

Runco (2023) argues that a more accurate view is that generative AI has only a 'pseudo-creativity': it lacks essential qualities of human creativity, as it does not originate new ideas but instead draws upon and reconfigures existing information.
Understanding how generative AI behaves is crucial for thinking critically about its outputs and recognising its limitations.
Many countries are actively working to regulate AI to prevent its current and potential future harms. Such dangers include deepfakes, bias, misinformation, surveillance, copyright infringement, and loss of privacy.
Stryker, C. and Kavlakoglu, E. (n.d.) 'What is artificial intelligence (AI)?', IBM. Available at: https://www.ibm.com/think/topics/artificial-intelligence. (Accessed: 19 September 2025).
Generative AI, sometimes called Gen AI, is artificial intelligence (AI) that can create original content such as text, images, video, audio or software code in response to a user’s prompt or request.
Stryker, C. and Scapicchio, M. (2024) 'What is generative AI?', IBM, 22 March. Available at: https://www.ibm.com/think/topics/generative-ai. (Accessed: 19 September 2025).
Stryker, C. (2025) 'What are large language models (LLMs)?', IBM, 10 September. Available at: https://www.ibm.com/think/topics/large-language-models. (Accessed: 19 September 2025).
IBM (n.d.) 'What is machine learning?', IBM. Available at: https://www.ibm.com/think/topics/machine-learning. (Accessed: 19 September 2025).
IBM (2023) 'What are AI hallucinations?', IBM, 1 September. Available at: https://www.ibm.com/think/topics/generative-ai. (Accessed: 19 September 2025).
Material Cited in this Subject Guide:
Hern, A. (2023) 'Fresh concerns raised over sources of training material for AI systems', The Guardian, 20 April. Available at: https://www.theguardian.com/technology/2023/apr/20/fresh-concerns-training-material-ai-systems-facist-pirated-malicious. (Accessed: 19 September 2025).
IBM (2025) Available at: https://www.ibm.com. (Accessed: 19 September 2025).
Kalai, A.T., Nachum, O., Vempala, S.S. and Zhang, E. (2025) 'Why language models hallucinate', arXiv. Available at: https://arxiv.org/abs/2509.04664. As referenced in: OpenAI (2025) 'Why language models hallucinate', OpenAI blog, 5 September. Available at: https://openai.com/index/why-language-models-hallucinate/. (Accessed: 19 September 2025).
Lo, L. (2023) 'The CLEAR path: A framework for enhancing information literacy through prompt engineering', The Journal of Academic Librarianship, 49(4). Available at: https://doi-org.dcu.idm.oclc.org/10.1016/j.acalib.2023.102720.
Runco, M. (2023) 'AI can only produce artificial creativity', Journal of Creativity, 33(3). Available at: https://doi.org/10.1016/j.yjoc.2023.100063.
Other recommended material:
Bengio, Y. et al. (2024) 'Managing extreme AI risks amid rapid progress', Science, 384(6698), pp. 842-845. Available at: doi:10.1126/science.adn0117. (Accessed: 19 September 2025).
Miao, F., Shiohira, K. and Lao, N. (2024) 'AI competency framework for students', UNESCO, p. 19. Available at: https://unesdoc.unesco.org/ark:/48223/pf0000391105.