
AI

Generative AI is a tool that can generate large amounts of text and images based on a prompt you give it. If you ask it a question, it will try to give a sensible answer; if you ask it to draw a picture, it will attempt to create an image matching your description.

Like all AI tools, it is fed enormous quantities of data in a process called training, the goal of which is for the model to learn from that data how to complete a task. In the case of generative AI, the goal is to be able to create new content based on the data it has already seen.

However, there are a number of security issues to be aware of when using generative AI tools.

The most widely used AI models are typically run and trained by companies such as OpenAI or Microsoft. The default behaviour of many of these models is that your data becomes part of the dataset used to train a future model. This means that when you query the public version of ChatGPT in the future, the model may already contain your data, effectively making your data and intellectual property publicly available. For this reason, it is imperative that no sensitive QMUL information, personal data or research data is used with AI tools.
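
If you do interact with these services programmatically, one simple safeguard is to redact obvious personal data before a prompt leaves your machine. The sketch below is purely illustrative and not QMUL guidance: it assumes a hypothetical redact helper built on two regular expressions, and pattern matching of this kind will never catch every category of sensitive or personal data.

import re

# Illustrative only: mask email addresses and UK phone numbers before
# text is sent to an external generative AI service. Regular expressions
# are a weak safeguard; they cannot recognise names, research data, or
# other context-dependent sensitive information.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"(?:\+44[\s-]?|0)\d(?:[\s-]?\d){8,9}")

def redact(text: str) -> str:
    """Replace email addresses and phone numbers with placeholders."""
    return PHONE.sub("[PHONE]", EMAIL.sub("[EMAIL]", text))

prompt = "Summarise: contact jane.doe@qmul.ac.uk or call 020 7882 5555."
print(redact(prompt))
# Output: Summarise: contact [EMAIL] or call [PHONE].

Even with a filter like this in place, the safest approach remains the one above: do not put sensitive university information into public AI tools at all.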

In terms of external threats, generative AI also enables cyber-criminals to carry out new and more sophisticated forms of social engineering and phishing attacks. This has included using generative AI to create fake voice messages that appear to come from senior leaders, asking employees to share or expose sensitive information. It is therefore important that staff and students are vigilant when receiving any unexpected messages or requests for sensitive information. Any such requests should be reported to the IT Service Desk.
