Digital Education Studio

Developing guardrails for ethical and effective AI use at Queen Mary

An interview with Cathie Jayakumar-Hazra, Cybersecurity Awareness Training and Policy Manager

As generative AI tools rapidly evolve, universities face growing pressure to integrate these technologies in a way that upholds academic integrity, protects data, and supports inclusive learning. At Queen Mary University of London, a new set of AI guardrails aims to do just that—offering a clear policy framework for the ethical and effective use of AI in teaching, learning, research, and operations.

The Digital Education Studio caught up with Cathie Jayakumar-Hazra, Cybersecurity Awareness Training and Policy Manager, to learn more about the work behind the AI Guardrails policy, the principles that shape it, and what comes next for AI governance at Queen Mary. The policy will be published in June 2025 and will be circulated when available.

Could you give us an overview of where we currently are at Queen Mary in terms of policy and guidance around the ethical and effective use of generative AI in teaching, learning and assessment?


The AI Guardrails I have drafted form the basis for Queen Mary’s forthcoming general-purpose AI policy. They cover responsible AI use in teaching, learning, assessment, research, and operational activity. They complement the existing guidance, including the Queen Mary Academy’s guidance on generative AI for educators and the Library’s student guide to generative AI, which focus on ethical submission of work and good academic practice.

This policy outlines Queen Mary’s AI governance framework; without it, we risk exposure to cyber threats and reputational harm. As a cybersecurity professional, I am particularly concerned about new forms of ransomware, such as DragonForce, which was used to target M&S, Co-op and Harrods, and about deepfakes, which can allow a cybercriminal to impersonate a senior executive or the Principal to extort money and gain access to sensitive information. These pose real threats to universities. Intensive use of AI can also exacerbate existing social prejudices, generating algorithmic bias that impacts marginalised groups.

The policy is in its final review and will soon be published. It has taken time because it has been reviewed for both wording and format by our Directorate of Governance and Legal Services. It has been a lengthy process, but essential, especially as so many academics and students are already using these tools.

 

What were the key principles, opportunities or risks that guided the recent updates to our AI guardrails, and how do they respond to current concerns such as academic integrity, bias, and responsible student use?


The guardrails are designed to support ethical, transparent use of generative AI in ways that enhance learning without compromising academic integrity, fairness, privacy, or inclusivity. For example, AI can support staff in generating slides, quizzes, or lesson plans. Students might use it to understand difficult concepts, visualise ideas or generate mind maps. That is fine—so long as it is done transparently and critically.

I have included rules to help staff review AI-generated materials for accuracy and bias. Transparency is a key requirement: if AI is used to create teaching or assessment materials, staff should clearly disclose how and where it was used. Students also need to be encouraged to think critically about what AI produces—otherwise we risk misinformation, plagiarism, or dependence on AI for thinking and writing.

The guardrails explicitly state that AI must never be used for summative marking—academic staff are the sole assessors of student work. The rules instead guide how to use AI in formative or learning-support contexts and will be supplemented by expanded guidance.

We also discourage the use of AI detection tools because of security and reliability concerns. Instead, we emphasise good practice and clear communication about what data can and cannot be entered into AI tools: personally identifiable information (PII), financial details, and research data must not be entered.


What are the core principles underpinning the AI Guardrails policy, and how do they help address concerns like academic integrity and algorithmic bias?


I structured the policy around four pillars borrowed and adapted from IBM: fairness, transparency, data protection, and robustness. These help staff and students use AI responsibly and build trust in how it is applied.

  • Fairness means treating everyone equally—AI systems must not marginalise any group.
  • Transparency means staff should understand and be able to explain how the AI tool works and disclose how they use it.
  • Data protection is about ensuring AI tools respect privacy and safeguard student, staff, and research data.
  • Robustness means the tool must be resistant to attacks and vulnerabilities—particularly as cyber threats rise.

These pillars support not only academic integrity but also tackle algorithmic bias. I have drawn on the work of scholar Kimberlé Crenshaw to highlight how intersecting identities—like being disabled, trans, and a woman—can create what we call compound marginalisation. AI can deepen these inequalities. That is why we are now developing a fairness audit system to help staff report AI misuse and assess algorithmic fairness.

 

Looking ahead, what do you see as the emerging priorities for Queen Mary in continuing to adapt to the rapidly evolving AI landscape—especially in terms of staff development, student support, or governance?


We’re putting several things in place. Staff can already use the Ideas Forum to submit queries about using AI tools—especially when ethical considerations are involved.

In the Guardrails, we have included a full section on how to get advice, whether it is about privacy, reporting plagiarism, or knowing which tools are allowed at Queen Mary. One of the most urgent challenges we face is protecting our research data, intellectual property, and sensitive information. That is why we are developing Nebula One, our own large language model, built on Microsoft Azure within Queen Mary’s own tenancy. Unlike ChatGPT, Nebula One will process data safely within that tenancy, keeping it secure and compliant with data protection requirements.

We are also launching a three-level AI literacy course through our CPD platform:

  • Level 1: How to use AI responsibly (for general users)
  • Level 2: How to use high-risk AI tools in decision-making (for example, in HR)
  • Level 3: How to build AI models that are secure and compliant with regulations (for developers and solution designers)

AI is changing extremely fast. These measures are about giving people the tools and knowledge to use it responsibly while protecting what matters most: our students, our staff, our data, and our academic values.

Find out more

Cathie and her team also host a Queen Mary podcast on cybersecurity and AI, titled "Just Sayin' IT". Listen on Spotify: Just Sayin' IT: the QMUL Cybersecurity Podcast.

 
