Queen Mary Academy

Generative AI and ChatGPT - a guide for Queen Mary staff

This guide explores how generative AI tools work, discusses their strengths and limitations, and considers the pedagogical, security and ethical factors to help you decide if and how you will use tools like ChatGPT in your teaching.

The ways we teach and learn are constantly being shaped and reshaped by new technologies, practices and innovations. The latest disruption everyone is talking about is the generative AI tool, ChatGPT. No doubt you’ve seen headlines about ChatGPT passing exams (NBC, 23 Jan 2023) and being banned by schools and academic journals (Guardian, 26 Jan 2023). On the other hand, you might also have seen people discussing how they use ChatGPT in their professional practice, for writing computer code or marketing copy (Times Higher Education, 14 Oct 2022).

So how do we, as university educators, position generative AI? Is it an opportunity or a threat?

As with many new technologies, the answer is somewhere in the middle. Generative AI platforms can offer an avenue to cheat, but they can also be valuable tools that we can incorporate into assessment and use to teach higher-order thinking.

In this guide we will look at how generative AI tools work, discuss their strengths and limitations, and consider pedagogical, security and ethical factors to help you decide if and how you will use tools like ChatGPT in your teaching.


What is generative AI and how does it work?

Generative AI platforms are large language models, or natural language processors, that draw on huge datasets to respond to questions or prompts. You might like to think of them as highly developed versions of predictive text programs - they respond to prompts by trying to predict what a human responder would most likely say next.
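To make the predictive-text analogy concrete, here is a deliberately simplified sketch in Python. It is for illustration only: it predicts the next word by counting word pairs in a tiny sample text, whereas real generative AI models use neural networks trained on vast corpora. The sample sentence and the `predict_next` helper are invented for this example.

```python
from collections import Counter, defaultdict

# A tiny "training corpus" - real models are trained on billions of words.
corpus = (
    "generative ai tools respond to prompts by predicting "
    "the next word a human writer would most likely use next"
).split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most frequent next word, or None if the word is unseen."""
    counts = following.get(word)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

print(predict_next("most"))  # prints "likely" - the word that followed "most"
```

Even this toy version shows why such systems sound plausible yet can be wrong: they produce whatever is statistically likely given their training data, with no notion of truth.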

Knowing the way in which these technologies work can help us understand some of their strengths and limitations. Because they draw on such big datasets, platforms like ChatGPT generate reasonably accurate, plausible-sounding responses to prompts on a wide range of topics. However, these responses tend to be vague and general, with little real detail or personalisation. They also may not be accurate – citations are likely to be irrelevant or even made-up.

These platforms can only make use of the information they’ve been given, so they might not be as helpful with very current topics. And their responses are influenced by the same biases and misinformation that colour the internet. Without knowing which sources the AI has used to respond to a prompt, it can be difficult for users, particularly novices in the topic, to tell how accurate, credible, and reliable the response is. Thus, generative AI can reinforce existing biases and limit ways of thinking.

Why is this important?

Generative AI platforms pose a risk to assessment integrity. They are readily available, user-friendly and can produce essays and other academic content, although whether this output is passable is another story. The generated text is mostly original so it can often get around our existing safeguards and detection methods such as originality checkers. But outright bans are unenforceable and likely to be counterproductive - after all, these tools are also increasingly used in work and life, so we need to consider how we and our students learn to navigate them.

Universities are required by the Office for Students to ensure that assessments are valid, reliable and ‘designed in a way that minimises the opportunities for academic misconduct and facilitates the detection of such misconduct where it does occur’ (Office for Students, 24 Nov 2022). So we do need to ensure that assessments are robust and provide opportunities for students to demonstrate their achievements against the stated learning outcomes through their original work. But we shouldn’t be so focused on academic integrity concerns that we miss out on the opportunities AI offers to rethink and extend our teaching, learning and assessment practices. Below, drawing on the latest literature in the sector (including European Universities Association 2023, QAA 2023), we offer guidance on how we might critique and adapt our existing practices to make the most of what AI offers, while reducing the potential for academic misconduct.


What can you do?

1. Discuss in your School/Institute

Discuss within your School or Institute the role generative AI may play in assessment, and whether it may be appropriate to include it in your assessment approach.

2. Talk about it with your students

Most students are aware of these tools. But they may not be aware of how they work, their affordances and limitations, the issues of privacy, ownership, and security they raise, or their ethical and moral implications. Consider including discussions about ChatGPT as part of your approach to develop students’ assessment and AI literacy. You may wish to refer to the Student Guide to Chat GPT (under development) to support your discussions with students.

3. Tell students whether or not it is acceptable to use in your context

Be clear with students about whether the use of ChatGPT and other such platforms is acceptable in your module and for what purposes. For example, can students use AI to help them generate a basic structure that they can then build an essay or other piece of work around? If students can or will use these tools in your module, you must tell them how to acknowledge this use.

4. Consider future assessment design

The emergence of AI tools like ChatGPT is a good opportunity to critique and review our assessment design. Some types of assessments are more likely to be vulnerable to generative AI platforms – traditional essay or short-answer questions on key concepts, for example, are easily generated by AI, as is computing code. However, there are lots of ways in which we can discourage and minimise inappropriate use of AI.

  • Start by exploring how assessment is organised and scaffolded across the module and programme: Do students get a chance to break down and practise parts of the task, and receive timely, formative feedback? Do they have sufficient time between deadlines to prepare their assessments? Too much high-stakes assessment and deadline bunching may make your assessment more liable to misconduct, whether via ChatGPT or other forms of contract cheating. On the other hand, using staged assessments, or those that build upon students’ previous work or personal experiences, can mitigate the risk.
  • Think about what you are asking students to do or create: What kinds of questions or tasks are you setting? General essay questions are more liable to be answered in a passable way by tools like ChatGPT. By contrast, these tools will be less successful at responding to more complex or specific assessments which require critical thinking, evaluation, or reflection.
  • Consider whether your assessment design could focus on the process rather than the end-product. In other words, can you integrate an element of reflectivity?

The Queen Mary Academy Assessment Toolkit provides guidance on what to consider when designing assessment for integrity. You may also wish to attend a Queen Mary Academy workshop on ‘Assessment Design for Academic Integrity’. Search for course code QMAADAI1 in the CPD booking system.

5. Use AI in your teaching

You may decide to integrate tools like ChatGPT into your module, as part of learning activities. Consider how your students might use these tools in their professional practice and future careers – how can you help them learn how to use them effectively and ethically?

For example, you might:

  • explore with students how generative AI tools might be useful within your subject area, what opportunities this offers for the discipline and what the limitations or downsides may be.
  • incorporate formative activities in which students use AI to support their learning and help them create work. Guide them to critically evaluate the output, and learn to use AI as a starting point, rather than an end point. Discuss the prompts they used and the quality or type of output different prompts generated.
  • talk about how to develop effective prompts - what is a good prompt? What can you ask ChatGPT to get the type of answer you want?

Where can I find out more?

Further reading

This guidance will be updated to reflect the latest developments, so please check this page regularly.

Latest update: 3 August 2023
