Artificial intelligence, or AI for short, involves using computers to do tasks which mimic human intelligence in some way. It's something that's getting talked about a lot at the moment, with several high-profile generative AI tools having been opened up for public use. Chatbot tools like ChatGPT, Google Gemini and Microsoft Copilot have become particularly popular as a way of finding out information or generating answers to specific queries. Just as we might turn to a Librarian for an answer to something, so we might turn to an AI chatbot service.
On this page we'll be concentrating on AI tools as a reference source — as a way of finding out information. But you should also take a look at our Digital Creativity page on AI generation tools:
At the University of York, we have access to the chat function of Google Gemini (through the Education version), which is the University's preferred generative AI tool. We also have the free version of Microsoft Copilot.
If you log in with your University username and password, neither Gemini nor Copilot will retain your prompts or responses, and that data will not be used to train the underlying model.
For more information about accessing and using Gemini, see the IT Services page below. You can also see information about Copilot, which can be used as an alternative or to compare outputs (these tools are very similar, but will give different results from each other as they are based on different large language models and training data):
It is important to consider what information you are putting into a generative AI tool, whether as part of your prompt or in any additional files you use with it. The University has guidance on privacy and data considerations for generative AI which should help you to determine what is appropriate.
You'll need to think carefully about your use of AI, and make sure you're acting responsibly and appropriately. The University has put together some explicit guidance for particular groups:
There are also things to consider if you are looking to use generative AI outside of a University context. Careers and Placements have put together some guidance for students around using AI as an applicant that explores attitudes towards using AI as part of applying for jobs.
It's worth pointing out that this sort of technology is not actually new. Chatbot programs like ELIZA have been around for as long as the University has. Most modern chatbots work by a process of machine learning, whereby they're exposed to passages of text which are analysed to identify patterns of structure that can then be mimicked. It's like the computer has got a pile of books, a pair of scissors and a tub of glue. You ask it a question and then it flicks through the books to cut and paste together something that looks like it could plausibly be an answer. The more books you give it, the greater the probability of it putting together an accurate and intelligible response.
Put in very basic terms, imagine you've asked the bot to tell you a story. Well, stories often begin with "Once upon a time..." so the bot's algorithm determines that we need to start with that. As for what comes next, well often the word time can be followed by the word machine, especially in stories. Let's try it. And if the bot's creators keep talking to it about the concept of machine learning, then learning is going to be a pretty strong candidate, probabilistically, to come after machine. All the while it needs to check that what's being written makes 'sense' (at least with respect to what it's seen before), so maybe the next word needs to be was: Once upon a time, machine learning was... and so on until they all live happily ever after.
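If you're curious what that 'guess the next word' process looks like in practice, here's a minimal sketch in Python. It's purely illustrative and nothing like the scale or sophistication of a real chatbot, and the tiny 'training text' is made up just for the example: the program counts which words tend to follow which, then strings together a 'story' by repeatedly picking a likely next word.

```python
import random
from collections import Counter, defaultdict

# A tiny, made-up 'training' corpus; real models learn from billions of words.
corpus = (
    "once upon a time machine learning was a curiosity . "
    "once upon a time a machine learning model told a story . "
    "the machine learning model was trained on a pile of books ."
).split()

# Count which word follows which (a simple 'bigram' model).
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def next_word(word):
    """Pick a following word, weighted by how often it appeared after `word`."""
    candidates = follows[word]
    choices, weights = zip(*candidates.items())
    return random.choices(choices, weights=weights)[0]

# Build a 'story' by repeatedly guessing a plausible next word.
word, story = "once", ["once"]
for _ in range(12):
    word = next_word(word)
    story.append(word)
print(" ".join(story))
```

Run it a few times and you'll get slightly different, vaguely story-shaped output each time. Real large language models work with vastly more text and far richer patterns of context, but the underlying idea of predicting the next word is the same.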
In the past, chatbots like this didn't have the storage or the speed to be able to respond especially effectively (as the above example goes some way to illustrating). But computing power has improved dramatically, and the internet has given the bots something huge to work with: countless webpages, social media posts, electronic correspondence and digitised publications; a library of unimaginable scale that can be cross-referenced at immense speed.
Still, these 'AI' systems are not very intelligent in real terms: they're just collaging bits of other people's sentences together based on patterns and trends. And predictive text is doing pretty much exactly the same thing as programs like ChatGPT, albeit on a smaller dataset: it's working out what you're likely to say next based on previous things you've written, just like the early chatbots were. If all you ever do is type nonsense, the predictions will be nonsense too. And that's something important to remember about AI technology like this: it's only ever as good as the information that's been fed into it.
Suppose you asked an AI "What is the capital of Peru?" but most of the data the AI had to work with was people making bad jokes and naff puns... You might be more likely to get back "P" than "Lima". It's crowd-sourced data, and so you're very reliant on your crowd being right!
But then the internet's a bit like that, too, isn't it? And just as we have to think critically about the websites we encounter and the content within them, or even the academic sources we might find in a database, so we have to be vigilant about the 'answers' we get from AI tools…
Like a daft dog with a wagging tail, these bots are desperate to please us, but like a cat who's just left us a dead bird as a present, they don't always get it right.
Working with an AI chat tool is a lot like working with other existing sources we might use. And that can help us understand the ways in which AI tools can be helpful while also appreciating their potential shortcomings...
AI chatbots are, in a way, glorified search engines. They're searching across millions of records and using algorithms to piece together what other algorithms suggest are the required results.
In fact, Google has been using machine learning AI as part of its search ranking algorithms since 2015. Obviously the AI Overview at the top is AI-generated, but even the actual results you get from a Google Search are, in part, being determined behind the scenes by AI technology. The answer we get in a chatbot AI, then, is not all that far removed from just going to the Google homepage and clicking on the "I'm Feeling Lucky" button — something you've probably seldom (if ever) done, and for good reason!
The difference is in the way that that answer is presented and constructed. Rather than us being taken to a source that might, somewhere within it, give us our answer, that information is being presented to us by the chatbot, reframed in its 'own words' (words it's found in various places and glued together in a way it 'suspects' will make sense). As with any source of information, the question we have to ask ourselves is 'do we trust it?'
This way of presenting information is, again, far from new: encyclopædias are a more established way of condensing and collating large amounts of information for easy reference. Wikipedia is a well-established online, crowd-sourced reference tool that's so useful that we've already linked to it several times on this page alone. Since chatbots are also collating information from multiple sources, might they be as reliable a reference source?
The difference is in the way the information is being published: an encyclopædia will go through some form of editorial scrutiny, even if, as is the case with Wikipedia, that scrutiny is just a huge audience of potential editors. A chatbot's reply isn't vetted in quite the same way: the answer you're getting has an audience of one; it's been generated for you, there and then, with no real editorial control. Sure, it's getting its information from correlations within a huge database, but that's not quite the same thing.
And there's an additional problem: chatbots have an unfortunate habit of just making stuff up. They lie. Or rather, in the act of assembling a sentence based on probabilities, sometimes those probabilities take their replies to all sorts of strange places. It's almost as if, in an effort to please you, they sycophantically tell you what you want to hear.
That's where citation comes in useful. Look at a Wikipedia article and it's full of citations linking off to other, hopefully more reputable sources of information. And while we might read Wikipedia to get an understanding of a topic, if we're doing a piece of academic work then it's those more scholarly sources that we, like Wikipedia, will be needing to work with.
Some chatbots also give citations, and if a tool you're using doesn't, why not explicitly ask it for some? Of course, you'll need to explore these to confirm that what the bot is telling you is accurate, just as you need to pay attention to the citations in an encyclopædia. Occasionally you'll find on Wikipedia a citation that isn't really supporting what's being stated, and the same is very much true with chatbots.
When it comes to finding information, then, chatbots and encyclopædias perform very similar 'summary' roles, and in both cases we really need to pay attention to (and critically assess) the sources they claim to be using.
One of the potential benefits of working with an AI tool is that you're able to use natural language: you're constructing your search requests in meaningful English sentences. So a conversation with a chatbot is a lot like a conversation with a human.
But some humans are more reliable than others. You will need to be cautious about what the chatbot tells you, just like you might be cautious of the information you got from a conversation in a pub or at a bus stop. You should treat everything you're told with a degree of healthy scepticism, and that's even more true when the gatekeeper of information is a robot. Who knows what rubbish it's been reading?!
Just as with a search engine, it will very often take more than one attempt to get the information you need. With a web or database search you'll need to tweak the search terms you're using based on the results you get, and the same is true with a generative AI tool. The difference is in the nature of that query: rather than fragmentary keywords and operators, you actually need to 'talk' to the chatbot in meaningful sentences. This has the benefit of making any query reformulations you make much more natural — a conversation. But the drawback is that you have to be even more precise about what you're asking (sometimes even devious!).
What we 'ask' of a chatbot determines what it gives us back. But what it gives back in one moment may be different to what it gives back a moment later. The algorithm's weightings can shift over time; the underlying dataset may grow as people use the tool, giving the model more information to work with and 'learn' from. The programmed guiding parameters of the model might also be updated by the AI's creators. Consequently there are no guarantees about what makes a good prompt — just some general principles to consider. And even those principles may shift as AI systems develop.
My dissertation title is "Explore the precarious relationship between privacy, freedom of expression and social media." Can you help me find academic articles that could be helpful for me?
By being explicit that we're looking for academic articles to help with a dissertation, we can prompt the AI to give us more relevant results. We could be even more explicit if necessary.
Don't be afraid of writing out your query as a list of points rather than as a long sentence. What works for human understanding can sometimes be useful for the computer too!
It can be useful to get the AI to spell out how it reached an answer by breaking down its process and showing its working. You could prompt this by adding something like "Let's think about this step-by-step" to your question, or asking it afterwards if it missed anything in its previous response.
Here's another model for approaching AI prompts:
Direct the AI towards the desired content by being specific about your precise needs. Be clear about the intended audience or level (for example, "higher education"), and perhaps provide some background on why it's being used. You might even give the AI a particular role ("imagine you're a...").
List the requirements you have and define specific goals you need the AI to meet.
Look at what the AI gives you, then ask it for more: get it to expand on areas of interest or to give specific examples. You might even prompt it to go beyond the more obvious sources and focus in on particular points of interest.
If at first you don't succeed, how can you change the instructions to get what you want? Modify your prompts based on your evolving needs as the conversation develops. If you asked the AI to "Detail the impacts of AI in modern industries" and the results were too vague, you could get it to "Focus on AI impacts in the education sector."
Check and review everything the AI gives you. What are you missing? What more could it provide you with? Keep chatting until you get what you need. You could even get it to re-draft and review its own work!
Design a 10-day campervan road trip itinerary through Wales.
Ensure the route includes iconic natural landmarks, recommended campervan parks, and local eateries. Prioritise safety and accessibility.
Provide a general overview of popular destinations in Wales for campervan travellers.
Redraft to focus the trip on child friendly activities for a 7-year-old.
Incorporate more coastal stops, like scenic beaches or seaside towns, ensuring they're campervan-friendly.
Academic publishing is a business, and while more and more academic literature is published in a way that is freely available online, there's still a significant amount which exists behind a paywall. The University spends a lot of money to access academic literature so that it's available for its members to read. And it also pays for specialist databases that can search that literature in a systematic and controlled way.
We don't know what data our AI tools are trained on, but it is very likely that they do not have access to everything that is available. They will not be able to search behind every paywall, and they cannot search all of the Library's collections. It's also possible that the data powering the AI model is not as up to date as you might need it to be.
It's also difficult to repeat a search with an AI. The model that powers it may give one result today and another tomorrow. This isn't necessarily a problem for quick searches scoping out the general picture of available literature, but a lot of academic research is about doing things that can be replicated and validated by others, and that's not really possible with an AI tool.
The lack of transparency over the coverage of what's being searched is a concern. And partial coverage may lead to biases within the responses you receive. Libraries have been doing a lot to 'decolonise' their collections — to try to overcome potential biases. The overwhelming majority of academic literature has historically been written by white western men of a certain age. And AI tools are averaging machines — they play the probabilities and return the 'obvious' answers, which means they may well end up amplifying the same voices.
The certainty with which these chatbots present their results is also something to guard against. They talk like experts but they're just cutting and pasting without any real understanding. A set of search results is relatively neutral and easy to skim-read. An AI response may be loaded with additional values you'll need to critically peel away.
Crafting good prompts for image generation is its own special skill. Here are a few things to consider:
Once you've got an image, you can always redefine its appearance with a list of format details, like we have done with this academic duck:
Throughout a lot of the above we've kind of found ourselves writing as if these chatbot services are actually intelligent — that they're autonomous beings that are really reading lots of texts and thinking about what they contain. It's easy to fall into this trap of personifying the thing beyond the screen, especially when it's been programmed to respond in a lifelike manner. But it's important to remember that this is not actually an intelligent, conscious, thinking creature: it's lines of code sifting through tables of probabilities in order to assemble fragments of sentences in a naturalistic way. The choices it 'makes' are being governed by the algorithms written by its programmers, and the raw materials being used will also have been selected by those programmers.
We can approach this information in two ways: on the one hand, we could get all philosophical and question to what extent our own human consciousness is actually just metaphorical cogs and code. On the other, we should think about the role of the programmer and the influence of their choices when writing the software and when setting the criteria of what's included in the data.
Forthcoming sessions on:
There are more training events at: