This guide explores how generative AI tools work, discusses their strengths and limitations, and considers the pedagogical, security and ethical factors to help you decide if and how you will use those tools in your teaching.
The ways we teach and learn are constantly being shaped and reshaped by new technologies, practices and innovations. Generative Artificial Intelligence (AI) platforms exploded in popularity in 2023. ChatGPT was quickly followed by a large number of text-generating tools, alongside software that creates original images, presentations or computer code.
So how do we, as university educators, position generative AI? Is it an opportunity or a threat?
As with many new technologies, the answer is somewhere in the middle. Generative AI platforms can offer an avenue to cheat, but they can also be valuable tools that we can incorporate into assessment and use to teach higher-order thinking.
Generative AI platforms are large language models, or natural language processors, that draw on huge datasets to respond to questions or prompts. You might like to think of them as highly developed versions of predictive text programs - they respond to prompts by trying to predict what a human responder would most likely say next.
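To make the predictive text analogy concrete, here is a minimal, illustrative sketch in Python. It is a toy word-frequency model, not how ChatGPT or any real large language model is built - real systems use neural networks trained on vast datasets - but it shows the same underlying idea of generating text by repeatedly predicting a likely next word:

    # Toy illustration of next-word prediction (not a real language model).
    from collections import Counter, defaultdict

    # A tiny "training dataset"; real models train on billions of words.
    corpus = "the cat sat on the mat the cat ate the fish".split()

    # Count which word follows which in the training text.
    following = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        following[current_word][next_word] += 1

    def predict_next(word):
        """Return the word most often seen after `word`, if any."""
        counts = following.get(word)
        return counts.most_common(1)[0][0] if counts else None

    # Generate a short continuation from a one-word prompt.
    word = "the"
    generated = [word]
    for _ in range(5):
        word = predict_next(word)
        if word is None:
            break
        generated.append(word)
    print(" ".join(generated))  # e.g. "the cat sat on the cat"

Note that the toy model can only echo the statistics of its training text - which is also why a real model's output mirrors the strengths, biases and gaps of the data it was trained on, as discussed below.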
Knowing how these technologies work can help us understand some of their strengths and limitations. Because they draw on such large datasets, platforms like ChatGPT generate reasonably accurate, plausible-sounding responses to prompts on a wide range of topics. However, these responses tend to be vague and general, with little real detail or personalisation. They may also be inaccurate – citations, for instance, are likely to be irrelevant or even made up.
These platforms can only make use of the information they’ve been given, so they may be less helpful with very current topics. Their responses are also influenced by the same biases and misinformation that colour the internet. Without knowing which sources the AI has used to respond to a prompt, it can be difficult for users, particularly novices in the topic, to tell how accurate, credible and reliable the response is. Generative AI can thus reinforce existing biases and limit ways of thinking.
Generative AI platforms pose a risk to assessment integrity. They are readily available, user-friendly and can produce essays and other academic content, although whether this output is passable is another matter. Because the generated text is largely original, it can often evade our existing safeguards and detection methods, such as originality checkers. But outright bans are unenforceable and likely to be counterproductive - after all, these tools are increasingly used in work and life, so we need to consider how we and our students learn to navigate them.
Universities are required by the Office for Students to ensure that assessments are valid, reliable and ‘designed in a way that minimises the opportunities for academic misconduct and facilitates the detection of such misconduct where it does occur’ (Office for Students, 24 Nov 2022). So we do need to ensure that assessments are robust and provide opportunities for students to demonstrate their achievement of the stated learning outcomes through their original work. But we shouldn’t be so focused on academic integrity concerns that we miss the opportunities AI offers to rethink and extend our teaching, learning and assessment practices. Below, drawing on the latest literature in the sector (including European Universities Association 2023, QAA 2023), we offer guidance on how we might critique and adapt our existing practices to make the most of what AI offers, while reducing the potential for academic misconduct.
Discuss within your School or Institute the role generative AI may play in assessment, and whether it is appropriate to include it in your assessment approach.
Most students are aware of these tools, but they may not understand how they work, their affordances and limitations, the issues of privacy, ownership and security they raise, or their ethical and moral implications. Consider including discussions about ChatGPT as part of your approach to developing students’ assessment and AI literacy. You may wish to refer to the Student Guide to Generative AI to support these discussions.
Be clear with students about whether the use of ChatGPT and other such platforms is acceptable in your module and for what purposes. For example, can students use AI to help them generate a basic structure that they can then build an essay or other piece of work around? If students can or will use these tools in your module, you must tell them how to acknowledge this use.
AI tools such as ChatGPT present a good opportunity to critique and review our assessment design. Some types of assessment are more vulnerable to generative AI platforms than others – traditional essays or short-answer questions on key concepts, for example, are easily generated by AI, as is computing code. However, there are many ways in which we can discourage and minimise inappropriate use of AI.
The Queen Mary Academy Assessment Toolkit provides guidance on what to consider when designing assessment for integrity. You may also wish to attend a Queen Mary Academy workshop on ‘Assessment Design for Academic Integrity’. Search for course code QMAADAI1 in the CPD booking system.
You may decide to integrate tools like ChatGPT into your module as part of learning activities. Consider how your students might use these tools in their professional practice and future careers – how can you help them learn to use them effectively and ethically?
For example, you might:
This guidance will be updated to reflect the latest developments, so please check this page regularly.