Gen AI: A Library Guide

A guide for students at Dublin City University

How does AI work?

Artificial Intelligence (AI) refers to systems that perform tasks simulating human intelligence by analysing data, optimising tasks, and making predictions. Many current AI systems are built on machine learning models, including large language models (LLMs), whose learning and decision-making processes are loosely modelled on the human brain.

Meanwhile, Generative AI (Gen AI) is a subset of AI that creates new content, rather than only analysing or classifying existing data.

What generative AI can create:

  • Text
  • Design and art
  • Images and video
  • Sound, speech and music
  • Software code
  • Simulations and synthetic data

Runco (2023) argues that a more accurate view is that generative AI has only a ‘pseudo-creativity’: it lacks essential qualities of human creativity, not originating new ideas but rather drawing upon and reconfiguring existing information.

Understanding how generative AI behaves is crucial for critical thinking and recognising its limitations.

Many countries are actively working to regulate AI to prevent its current and potential future harms. Such dangers include deepfakes, bias, misinformation, surveillance, copyright infringement, and loss of privacy.

Useful Definitions

  • Artificial Intelligence (AI) is technology that enables computers and machines to simulate human learning, comprehension, problem solving, decision making, creativity and autonomy.

Stryker, C. and Kavlakoglu, E. (n.d.). 'What is artificial intelligence (AI)?', IBM. Available at: https://www.ibm.com/think/topics/artificial-intelligence. (Accessed: 19 September 2025).

  • Generative AI, sometimes called Gen AI, is artificial intelligence (AI) that can create original content such as text, images, video, audio or software code in response to a user’s prompt or request.

Stryker, C. and Scapicchio, M. (2024). 'What is generative AI?', IBM, 22 March. Available at: https://www.ibm.com/think/topics/generative-ai. (Accessed: 19 September 2025).

  • A large language model (LLM) is a machine learning model trained on vast amounts of text data to understand and generate human-like language.

Stryker, C. (2025). 'What are large language models (LLMs)?', IBM, 10 September. Available at: https://www.ibm.com/think/topics/large-language-models. (Accessed: 19 September 2025).

  • Machine learning is the subset of artificial intelligence (AI) focused on algorithms that can “learn” the patterns of training data and, subsequently, make accurate inferences about new data. This pattern recognition ability enables machine learning models to make decisions or predictions without explicit, hard-coded instructions.

IBM (n.d.). 'What is machine learning?', IBM. Available at: https://www.ibm.com/think/topics/machine-learning. (Accessed: 19 September 2025).

  • AI hallucination is a phenomenon in which a large language model (LLM), often a generative AI chatbot or computer vision tool, perceives patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate.

IBM (2023). 'What are AI hallucinations?', IBM, 1 September. Available at: https://www.ibm.com/think/topics/ai-hallucinations. (Accessed: 19 September 2025).
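The machine learning definition above — models that "learn" patterns from training data rather than following hard-coded rules — can be illustrated with a deliberately tiny sketch. The example below is a one-nearest-neighbour classifier on made-up data; real AI systems use vastly larger models and datasets, but the principle of labelling new data by its similarity to learned examples is the same.

```python
# A toy "machine learning" model: one-nearest-neighbour classification.
# There are no hard-coded rules for what makes a "cat" or a "dog";
# the model only memorises training examples and labels a new point
# by the closest pattern in the training data.

def nearest_neighbour(train, new_point):
    """Return the label of the training example closest to new_point."""
    def distance(a, b):
        # Squared Euclidean distance between two feature tuples.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    closest = min(train, key=lambda example: distance(example[0], new_point))
    return closest[1]

# Illustrative training data: (features, label) pairs.
# The pattern: small feature values cluster as "cat", large as "dog".
training_data = [
    ((1.0, 1.2), "cat"),
    ((0.8, 1.0), "cat"),
    ((4.0, 4.5), "dog"),
    ((4.2, 3.9), "dog"),
]

print(nearest_neighbour(training_data, (0.9, 1.1)))  # near the "cat" cluster
print(nearest_neighbour(training_data, (4.1, 4.2)))  # near the "dog" cluster
```

Note that the model's predictions are only as good as its training data — a point that also underlies the discussions of bias and hallucination elsewhere in this guide.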

Useful Links

  • The EU Artificial Intelligence Act provides a legal framework for the deployment of AI within Europe. 
  • The DCU Position Statement on the Use of Artificial Intelligence Tools states that AI provides significant opportunities, and that the university has a leadership role and a duty to protect the academic integrity of its research and to prepare for the future by providing up-to-date, clear guidance on the appropriate use of AI.
  • DTS Resource Page includes AI resources, an outline of the DCU-approved AI tools, and some of the key policies and applications that DCU students and staff should be aware of.
  • Staff Introduction to AI Literacy - DCU's Learning & Organisational team (DCU People) developed a staff AI course on Loop. It introduces the basic aspects of AI and generative AI through a series of video presentations and exercises that explain generative AI: how it works, its capabilities and its limitations.
  • DCU's Academic Integrity Policy - The University is responsible for upholding academic integrity through this policy, which is underpinned by procedures and practices that all staff and students must familiarise themselves with.
  • DCU's Research Integrity Policy - This policy clarifies and defines what is meant by research integrity to prevent research misconduct.

Material Cited in this Subject Guide:

Hern, A. (2023) 'Fresh concerns raised over sources of training material for AI systems', The Guardian, 20 April. Available at: https://www.theguardian.com/technology/2023/apr/20/fresh-concerns-training-material-ai-systems-facist-pirated-malicious. (Accessed: 19 September 2025).

IBM (2025). Available at: https://www.ibm.com. (Accessed: 19 September 2025).

Kalai, A.T., Nachum, O., Vempala, S.S. and Zhang, E. (2025) 'Why language models hallucinate', arXiv. Available at: https://arxiv.org/abs/2509.04664. As referenced in: OpenAI (2025) 'Why language models hallucinate', OpenAI blog, 5 September. Available at: https://openai.com/index/why-language-models-hallucinate/. (Accessed: 19 September 2025).

Lo, L. (2023) 'The CLEAR path: A framework for enhancing information literacy through prompt engineering', The Journal of Academic Librarianship, 49(4). Available at: https://doi-org.dcu.idm.oclc.org/10.1016/j.acalib.2023.102720.

Runco, M. (2023) 'AI can only produce artificial creativity', Journal of Creativity, 33(3). Available at: https://doi.org/10.1016/j.yjoc.2023.100063.

Other recommended material: 

Bengio, Y. et al. (2024) 'Managing extreme AI risks amid rapid progress', Science, 384(6698), pp. 842-845. Available at: doi:10.1126/science.adn0117. (Accessed: 19 September 2025).

Miao, F., Shiohira, K. and Lao, N. (2024) 'AI competency framework for students', UNESCO, p. 19. Available at: https://unesdoc.unesco.org/ark:/48223/pf0000391105.