Generative artificial intelligence, or generative AI for short, has brought a range of opportunities, ethical questions, and issues. We'll explore the world of generative AI tools and some of the considerations you need to bear in mind when using these tools.
This page is one of our generative AI guide pages, designed to help you navigate the world of generative AI. See the main Generative AI guide for our other pages and the University of York's generative AI guidance and policies:
Artificial intelligence, or AI for short, involves using computers to do tasks which mimic human intelligence in some way. There are a huge range of methods, tools, and techniques that fall under the term artificial intelligence.
One area in which AI has been used is for generating content that is similar to other content, known as generative artificial intelligence or generative AI. This is often done using a branch of artificial intelligence known as machine learning, which focuses on getting computers to appear to learn how to do something, like recognise or generate certain kinds of content. Models are created using training data, meaning that the generative AI model is trained using examples of content, such as text and images, and then that model can be used to generate new content.
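To make this concrete, here is a deliberately tiny, toy sketch in Python of the same train-then-generate pattern. Real generative AI models are vastly more complex, but the idea of learning patterns from example content and then sampling new content from those patterns is the same:

```python
# A toy, character-level 'generative model': learn which character tends to
# follow which in the training text, then sample new text from those patterns.
# (Illustrative only; real generative AI models are far more sophisticated.)
import random
from collections import defaultdict

training_text = "the cat sat on the mat and the cat saw the rat"

# 'Training': record every character that follows each character
followers = defaultdict(list)
for current, nxt in zip(training_text, training_text[1:]):
    followers[current].append(nxt)

# 'Generation': start somewhere and repeatedly sample a plausible next character
def generate(start="t", length=30):
    output = [start]
    for _ in range(length):
        output.append(random.choice(followers.get(output[-1], [" "])))
    return "".join(output)

print(generate())  # new text that resembles, but isn't copied from, the training data
```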
Generative AI tools are applications, usually web-based, which allow you to harness the power of this AI generation without needing to know how to create AI or machine learning apps yourself. These tools can do a range of things (you've probably seen some of them in action, maybe Google Gemini, Microsoft Copilot or ChatGPT), but also come with many caveats and restrictions. Typically, you give these tools some kind of prompt, maybe using text and/or images, and they 'generate' content in return.
One important thing to be aware of with AI generation tools is the fact that they are all based on datasets. The data that the AI tool has been 'trained' on impacts what results you will get, in terms of quality, accuracy, and bias. You will see mistakes in content generated by AI and you need to critically evaluate anything it generates. The data that these tools are trained on might be copyrighted or someone's intellectual property (IP), which introduces other issues about what people do with the outputs of generative AI tools.
As we'll explore further on this page, think about what you use generative AI tools for. You should also read any relevant University policies and guidance before using generative AI. There is University guidance for students and postgraduate researchers around the use of generative AI in assessed work, and also guidance for researchers around the use of generative AI in research. There's also general guidance from IT Services around which tools are recommended and what you need to know about data and privacy with generative AI. All of this guidance is linked from our main Generative AI page.
Whenever you use a generative AI tool, you should be critical of anything it generates, including text and image content. You should cross-reference information with other sources if you're not sure.
We've talked about AI generation tools broadly, but what actual tools and apps are out there? The answer to this is that there are a huge number of tools that include AI generation and we couldn't list them all here!
One thing to be aware of is that some tools are just for AI generation - like text generation tools such as Google Gemini and ChatGPT - and others are tools with a range of features that include AI generation. Many applications you already use may have added generative AI features in recent years (for example, at the University of York some of the learning technology tools we have available have AI tools and features available). These features are often specific to the software/application, so are useful when you're using that software, but aren't a general generative AI tool like many of the chatbot-style AI generation tools are.
Text and image generation are the most common kinds of generative AI tools, so on this page we'll be looking in more depth at these two areas. Regardless of the tool you're using, many of the considerations on this page will apply.
Large language models (LLMs) are generative AI models that can take text prompts and generate new text in response, often in the form of a chatbot, where you appear to hold a conversation with the computer as it replies in a conversational style. ChatGPT was an early big name in the area, but there are lots of others now, including Google Gemini and Microsoft Copilot.
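Under the hood, chatbot tools like these send your text prompt to an LLM and display the generated response. Here's a minimal sketch of that same prompt-and-response loop done programmatically, assuming Google's google-generativeai Python SDK (the exact package, model name, and method names are assumptions and vary between SDK versions):

```python
# A minimal sketch of the prompt-and-response loop, assuming Google's
# google-generativeai Python SDK. The model name and placeholder key are
# assumptions; chatbot interfaces like the Gemini website do something
# like this behind the scenes.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder; use your own key
model = genai.GenerativeModel("gemini-1.5-flash")  # model name is an assumption

# Send a text prompt; the LLM returns generated text in response
response = model.generate_content(
    "Explain in one sentence what a large language model is."
)
print(response.text)
```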
At the University of York, we have access to the chat function of Google Gemini (through the Education version), which is the University's preferred generative AI tool, and also the free version of Microsoft Copilot. If you log in with your University username and password, neither Gemini nor Copilot will retain any prompts or responses and that data will not be used to train the underlying model. These are only chat tools and do not integrate with other apps like Gmail or Docs.
We also have access to Google's NotebookLM, which uses Gemini's text generation but works with sources that you give it. It can summarise documents or allow you to ask questions about chosen sources of information. It can even generate podcast-style summaries of the material you provide.
We've mentioned that when using Gemini, NotebookLM, or Copilot with a University of York account, your data will not be used to further train the model. For any other generative AI tool, unless you know otherwise, you should assume they are using your prompts, files, and responses to train the model, and that this data is not necessarily secure or private.
For more information about accessing Gemini, NotebookLM, and Copilot, see the IT Services pages below. Elsewhere on this page we'll explore guidance on getting the most out of using them.
The other kind of generative AI tools that quickly became popular were those which can create images from prompts, often applying particular criteria or particular design styles. These have been used to make stock images without needing to pay for photos or find copyright-free pictures, but also to make fun and often silly images (taking advantage of the mistakes these tools often make to test the limits of AI). There are a huge number of image-focused generative AI tools out there, and lots of other generative AI tools (and even image and design tools in general) now have image generation built in in some way. Lots of image generation tools cost money, and others give you a limited number of 'credits' to use to make images.
At the University of York, we don't recommend any specific image generation tools, but you can use Google Gemini to generate images as well as text. One thing to consider is whether you need images to be of specific things, like the University itself, or York, or similar. Generative AI is not good at creating images of specific things, so even if you write "University of York campus" you'll probably get a generic university building that isn't one we really have! In these cases, it is much better to use real photographs that you have permission to use.
Tools powered by artificial intelligence are everywhere, and they are not just generative AI - for example, everyday applications like Gmail use a range of AI techniques to highlight important emails or suggest how you might want to reply to an email.
Generative AI tools are being used in a huge range of ways. Text generation, for example, is being used to search for information, to create outlines and plans, to proofread text, to explore ideas, and many other things. People are constantly finding new ways to use (and abuse) these tools, so we couldn't ever write an exhaustive list of how you can use any type of generative AI! Always check any University, departmental, funding, and/or publisher/journal guidelines around what uses of generative AI are acceptable for work that you might be submitting.
Be aware of the limitations of these tools. Always test what you want to do with the tool and the outputs you get before investing too much time in it. These tools are changing rapidly, so keep an eye out for changes in functionality. Many of the things generative AI can do are also possible with other applications: you can explore elsewhere on our guides or on IT Services' list of tools and software to see what else might be more useful for your task.
You can use generative AI to help you to develop ideas for things like creative projects (like writing out what your idea is so far and asking it for questions or related ideas) or standard documents (like asking what things people typically include in a cover letter, a lesson plan, etc), rather than using it to try and create actual outputs. You then take these suggestions and write/create your own content, critically evaluating which of the suggestions you got might be useful and which aren't. Particularly for creative projects, this can help you to go beyond your initial idea. For example, if you were creating a public-facing resource educating someone about generative AI, you could ask a text generation AI tool for a list of ideas of different kinds of resources you might make, or you could use an image generation AI tool to suggest possible design features.
To explore more about what AI tools can do in terms of finding information and being used as a reference source, see our Searching for Information guide page on AI tools:
Text-based generative AI tools can help you to troubleshoot problems with digital tools or computer code, as they can search through many sources of help. Sometimes this can help you to spot and correct errors, though you'll need knowledge of the coding language or software to be able to know if the suggestion the generative AI tool gives is a good one or not!
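For example, here's the kind of (hypothetical) bug a text-based AI tool might help you spot, and why you still need your own judgement to evaluate the suggested fix:

```python
# A hypothetical buggy function of the kind you might paste into an AI chatbot
def average(numbers):
    total = 0
    for n in numbers:
        total += n
    return total / len(numbers)  # crashes with ZeroDivisionError on an empty list

# A typical AI-suggested fix: guard against the empty case
def average_fixed(numbers):
    if not numbers:
        return 0  # ...but is 0 right here? Perhaps None, or an exception, is better
    return sum(numbers) / len(numbers)

# Only someone who knows the surrounding code can judge which behaviour is
# correct, which is why you need knowledge of the language to evaluate
# the AI's suggestion.
print(average_fixed([1, 2, 3]))  # 2.0
print(average_fixed([]))         # 0
```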
As we've already discussed, Google Gemini is the University's recommended generative AI tool and it works as a chatbot-style tool into which you can write prompts and get responses. NotebookLM is another Google tool powered by Gemini which works differently, as you give it sources of information and then anything generated only uses the content from those sources as its references.
As part of Gemini, we can also use Gems, which allow you to create customised versions of Gemini that you can return to, so it always uses the same tailored information you've given it for that task. See Google's guidance on getting started with Gems for more information.
Let's compare Gemini, NotebookLM, and the Gemini feature Gems.
| Gemini | NotebookLM | Gems |
|---|---|---|
| Access at gemini.google.com | Access at notebooklm.google.com | Access at gemini.google.com > Expand menu on the left > Explore Gems |
| Chatbot interface for writing prompts | Generates summaries (written and audio) and specific formats (timeline, FAQs, etc) but also has a chatbot interface for writing prompts | Chatbot interface for writing prompts (initial set-up by filling out the details of your Gem so you don't have to write the same parts of a prompt each time) |
| Uses any sources it can access, but you can give it specific files. You can now also connect it to Google Workspace so it can use your Gmail, Calendar, Drive, etc as a source. | Uses only the sources you add to it. | Uses any sources it can access, but you can give it specific files and tell it to use those in particular ways. If you connect it to Google Workspace, it can use your Gmail, Calendar, Drive, etc as a source. |
| Recent chats appear on the left-hand side. To save anything permanently, use the Share and export or Copy response options under a response, copy and paste the chat, or ask Gemini to make a Doc. If you connect to your Google Workspace you can also save to a Keep note. | Notebooks with their specific sources are saved, and you can save anything you generate as a 'Note' to access in the future (but chats aren't otherwise saved). | Gems are saved and can be accessed from the left-hand menu in Gemini. Chats work as in Gemini: recent ones appear on the left and must be exported or copied and pasted to be kept permanently. |
With Gemini, NotebookLM, and Gems used via a University of York Google account, any prompts, files uploaded, or other data will not be used to train the underlying model.
Later on this page we'll look at general guidance for writing prompts for generative AI, but in terms of using Google's tools - Gemini and NotebookLM - here are some of our tips for getting the most out of them:
When writing prompts for use with generative AI tools, you often need to think carefully about the words you use to get an output that matches what you're hoping for.
There are a range of methods for trying to get better results from generative AI tools. Typically, you would start with an initial prompt and evaluate the usefulness of the generative AI's response. Then you'd apply strategies to refine your prompt, like rewording it, adding more context or detail, specifying the format or length you want, or asking the tool to respond from a particular perspective or role, as in the example below.
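Here's a hypothetical refinement sequence (these prompts are invented for illustration), where each version adds detail based on what was missing from the previous response:

```
Prompt 1: Suggest some icebreaker activities.

Prompt 2: Suggest five icebreaker activities for a seminar group of 20
undergraduate students meeting for the first time.

Prompt 3: Suggest five icebreaker activities for a seminar group of 20
undergraduate students meeting for the first time. Each activity should
take under 10 minutes and need no equipment. Present them as a numbered
list with a one-sentence description of each.
```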
Writing prompts for each generative AI model and the tools based on it will differ, because these models and tools have different datasets and are designed in different ways, with different parameters. When using a generative AI tool you will start to learn what kinds of prompts work best with it, but when you use a different tool, this might change. For example, if you've used ChatGPT before, you might need to phrase your prompts for Copilot or Gemini differently to get the most effective outputs.
In Google's Get Started with Google AI in Higher Education course, they suggest that you can use the acronym PARTS to remember the parts of an effective prompt. PARTS stands for:
For more in-depth guidance on writing prompts for generative AI, see our section on "interrogating the AI" from our page on using AI tools to search for information:
One area relating to generative AI tools is how we might acknowledge our use of these tools. We might consider questions like:
To know more about how you might reference generative AI tools in an academic context, see our referencing guide:
More generally, acknowledging when and how we have used generative AI can help to make it clear when these tools are and aren't used for different tasks. For example, if you credit any images you generate with an acknowledgement of the tool you used, it makes it clear that the image was created by AI, rather than being a photograph of something real. Particularly for generated content that people might assume is real, this acknowledgement ensures you aren't unintentionally spreading misinformation or misleading people about the source of something like an image.
In an educational context, stating when and how you have used generative AI tools is important for ensuring you avoid academic misconduct, as well as being transparent about your learning or research process and which tools, if any, you've used as part of that process.
As there is currently a lot of debate around the ethical issues surrounding generative AI, being transparent about when you have used generative AI also allows people to make informed decisions around how they engage with generated content or AI tools. For example, people might want to know if something was made by a human, an AI tool, or a combination/collaboration between the two.
Generative AI comes with a whole range of ethical considerations and concerns. Many of these you may have already heard about, but it is important to try and keep up to date with the latest conversations around generative AI as this will inform how and if you use it. This could be an entire guide or course in itself, so here we will outline some of the main areas of generative AI ethics and let you explore the areas that interest you further.
One major ethical question in the world of artificial intelligence (and this isn't specific to generative AI) is the biases that can be found in AI models, often due to the training data used for them. Generative AI models are based on existing datasets, with the computer 'taught' from this existing data. What data is chosen as the training data is crucial: using datasets that contain inequality and bias will replicate those inequalities and biases in the AI tool.
Generative AI also uses various sources, such as information openly available on the internet, when generating its responses. There is a lot of information out there and a lot of it contains biases, so you need to be critical of assertions that any generative AI tool makes. These tools are designed to give you an output that you want, which does mean they can sound very certain about things that might not be so easy to be sure of! When these confident assertions contain biases and inequalities, they can become part of your work or thinking too, if you aren't critical about these statements.
Any use of artificial intelligence, and especially easy to use generative AI tools, should come with some reflection about the role of AI in our lives. Do we see them as 'magic' applications that can create something out of nothing, or complex code that has been designed and written by humans making choices? Does this make a difference to how we use the outputs of these tools? The 'people' side of generative AI is important: not just the people who create the tools, and the people whose work might have been used when the AI tool was trained, but also people whose work is more hidden, like those who label the huge datasets that generative AI is trained on. People run and work at the technology companies who build and sell these AI tools. Basically, there's a lot going on behind the scenes, and responsible use of AI means being aware of this!
It can be good to check who owns the AI tool you're using and consider whether they might have any particular motivations or aims. For example, most tools are owned or were funded by big technology companies such as Google and Microsoft, which means that these companies might want the tools to make them money. This is true for all technology, and to be critical consumers of technology we need to remember that these tools have been made with purposes in mind and goals for the companies that create them.
One major area of debate around generative AI relates to its climate impact and the huge amount of computing power needed to train generative AI models. Any technology comes with a price, not just financially but in terms of electricity, water, and other resources. We recommend that you look into current research in this area so you can make an informed decision about whether you use generative AI and what you use it for. You can search for articles and research on this topic, but bear in mind that, like all of generative AI, it is fast-moving, with new information and research coming out all the time (that's why we've not linked to anything here, as it would go out of date very quickly).
Whether it is the data that you put into a generative AI tool, like your prompts or files, or the data you put into other services or onto the internet, these days data is a commodity for training generative AI models. This means you need to be aware of your data, whether you want to opt out of data sharing, and what data you put into digital tools. There are lots of articles out there about how to stop certain technology companies from training their models on your data, so it is worth having a look online if you've not already (again, we can't really link to any because they change all the time).
For advice around privacy and information handling for anyone at the University of York, see IT Services' Generative AI page, which discusses which kinds of information can be used with generative AI and what you need to consider if you are processing personal data.
Another area being discussed is the impact of generative AI on people's jobs, copyright, and intellectual property. This ranges from whether certain types of jobs and work are being devalued due to people claiming that they could be done by generative AI to artists wanting to protect their own style from being 'ripped off' by a generative AI tool.
The ability to generate text with AI has brought up questions around writing work. Is writing copy, captions, video scripts, and the like such a routine task that AI could do it more efficiently? How might this impact copywriters' jobs? These tools often claim to be making copywriters' jobs easier rather than replacing them, but there's a long history of how technology, work, and human skills interact. The use of generative AI does not happen in a vacuum and we should consider the broader societal implications of any digital tools that we use.
For tools that create images in certain art styles, watch out for where these tools could be using the art styles of real, living artists. Use these tools critically and think about where they might be imitating living, working artists in ways they might not want. For example, they might be good for inspiration, but not something you would use on a final output.
On the other hand, people have used generative AI to speed up or automate repetitive or unnecessary parts of their work. This might allow people to have more time to dedicate to more in-depth, analytical, or creative work.
We cannot know the overall impact that generative AI might have on the current and future world of work. If you want to explore this topic further, here's a video from York's Festival of Ideas on AI and the future of work.
Each time you use a generative AI tool, you should consider whether generative AI is the right way to get what you're looking for. This might be because you need to be careful not to commit false authorship or do anything else that is against the University's guidelines on using AI tools (for example, the student guidance on using AI and translation tools) or any other guidance you've been given for the task. It might be because you could actually do that task more easily with a different digital tool or application that doesn't use generative AI. And it might be because what you want to create shouldn't be something that has been generated.
You should always consider if using generative AI will actually save you time, especially as its output needs to be checked for accuracy, relevance, and bias. If you are doing something that is very high stakes or must be accurate, generative AI is often not the best option, but you might use it for small elements like thinking up alternative phrases or ways of wording sentences.
If it isn't appropriate to use generative AI for what you are doing, you might want to explore elsewhere on this guide or on IT Services' list of tools and software to see what else might be suitable for your task.
At the extreme end of this are deepfake videos, which use AI to create fake video content that appears real, often because it impersonates real people or spreads misinformation by showing something that didn't really happen. You shouldn't generate these, and should also question videos that you watch, looking at where they come from and whether they show any signs of being a deepfake video. For more information, see the University of York page on deepfake videos:
If you use generative AI tools to create images or videos of people, these may fall into the 'uncanny valley': when humanoid objects look just imperfect enough to a human observer that they provoke feelings of uneasiness. Have you ever seen a computer-generated person (e.g. in a video game) who doesn't quite look right? Or felt unnerved by dolls or humanoid puppets or robots? This can be the uncanny valley effect.
When using generative AI tools, bear this idea in mind if you're creating images of people. You might make your imagery worse if you try to include a human and they don't look real enough. Consider generating more abstract images (though AI generators can often make even inanimate objects look a bit "wrong" and unnerving) or finding copyright-free stock images from sites like Pixabay and Unsplash instead.
Similarly, if you're looking to represent a real-life issue, it might not be appropriate to generate an image or a written account of this issue, because this might misrepresent the issue (like adding something entirely unlikely or incorrect into the image or text) or appear fake or uncanny. You don't want to take something serious and have people with extra fingers or missing arms, for example, as that would distract the audience from the purpose of the image. Think critically about whether what you need should be accurate or not before using a generative AI tool to create it (and remember you can always choose not to use what you generate if you think it might not best represent what you need).