This is a regular series in which we share highlights and insights about digital learning from conferences, webinars and workshops we’ve attended each month.
Here are our professional development highlights from April.
Digital accessibility in online education: Centre for Online and Distance Education (CODE) webinar
This online panel discussion, chaired by Dr Margaret Korosec, Dean of Online and Digital Learning at the University of Leeds, focused on the collaborative design of an online Master's programme in Disability Studies, Rights and Inclusion. The panellists were Dr Hannah Morgan, Associate Professor of Disability Studies and the programme lead; Tahiya Brewin and Emma Dibb, Learning Designers on the project; and Claire Ashdown, one of the first cohort of students on the programme.
Hannah, Tahiya and Emma shared how they employed a collaborative approach to designing the programme – much like the co-creation approach we take on the CARE Agenda programmes – with a focus on ensuring accessibility. In addition to adhering to WCAG 2 AA standards, they adopted a Universal Design for Learning approach. This is most evident in their approach to assessment, which gives students a choice of submission format for each assessment, although they acknowledged that this created some challenges in ensuring academic integrity and anonymous marking.
As this is a social sciences programme, it was important to the team that the design supported both reading and in-depth discussion of topics, in an accessible way. After trialling a podcast in the second module, they found that the programme team really embraced this approach – not only are podcasts less resource-intensive to produce than videos, and provide greater flexibility for students, but they also enable long-form discussions that support students’ learning and exploration of topics.
As a student on the programme, Claire highlighted the responsiveness of, and care from, the teaching team. She also noted the value of the flexibility in assessment formats, explaining that as a staff member at another university, she now recommends not specifying assessment format unless it is essential for the validity of the assessment.
The webinar recording is available on YouTube.
Digitally Enhanced Education webinar, University of Kent
The University of Kent hosts regular Digitally Enhanced Education webinars. The latest, held on 26 March and 2 April, featured speakers on a range of topics, including AI and assessment, academic integrity, authentic learning and students’ perspectives on AI use. Highlights included presentations from the University of Sydney, Brunel University and Imperial College London.
In response to the growing presence of generative AI in education, the University of Sydney has developed a two-lane approach to assessment. This model – presented by Danny Liu – aims to protect the integrity of qualifications while preparing students to engage critically and responsibly with AI tools that are increasingly embedded in professional practice.
Lane 1 assessments are secure and controlled, designed to confirm that students have met key program-level outcomes. These assessments, often in the form of vivas or in-person tasks, allow educators to verify student knowledge without relying on surveillance-heavy tools. This lane offers a reliable way to assess the person behind the work, especially when AI use in open contexts can blur authorship.
Lane 2 assessments, on the other hand, are open, formative, and often scaffolded at the unit level. These tasks embrace the reality that students will use AI – much like they already use spreadsheets or writing tools – and focus on helping students develop judgement and discipline-specific skills with the support of AI. Rather than attempting to ban or detect AI use (which is often impractical), Lane 2 assessments guide students in using AI productively and transparently.
Danny also critiqued common sector responses such as "traffic light" systems (red/orange/green use of AI) as too simplistic. Instead, he advocated a “menu-based” model, in which educators help students select and justify the appropriate use of AI tools based on the purpose of the task.
This dual-lane strategy may help educators balance academic integrity and real-world relevance. It also has the potential to enable universities to confidently assure learning outcomes while supporting the development of students’ contemporary capabilities – particularly in navigating the ethical and effective use of AI. The webinar recording is available on YouTube.
Another recurring theme across the two webinars was the need to redesign assessments to be AI-resilient and to support authentic learning, rather than attempting to create AI-proof assessments – widely considered impossible given the rapid development of AI and generative AI over the past two years. Instead, assessments should evaluate students’ abilities in line with graduate learning outcomes that emphasise employability and career-related skills. This approach aligns with the Queen Mary Graduate Attribute Framework, which now includes “Be AI and digitally literate” as a key attribute.
Dr Pauldy Otermans and Dr Stephanie Baines from Brunel University presented AI-Proofing Assessments: RAG-Rating the Future of Education. They shared their experience of restructuring undergraduate psychology assessments using basic AI prompting. For example, their assessments incorporate AI by asking students to critique ChatGPT outputs, use AI tools for qualitative research (e.g., creating interview schedules and transcribing interviews), and design posters with AI-generated visuals. One assessment even encourages students to use AI tools to write CVs or cover letters in preparation for a job interview, which serves as the actual assessment.
Another presentation, Assessing what matters: authentic learning in the age of AI by Dr Caroline Clewley from Imperial College London, emphasised the importance of constructively aligning intended learning outcomes, teaching activities, and assessments. This alignment ensures that assessments remain meaningful for graduates and valuable to their future employers. Dr Clewley illustrated her approach using her Virtual Reality module, where she created custom GPTs (personalised chatbots with a pre-set prompt and module documents as a knowledge base) to scaffold learning and help students stay focused on the module’s objectives.
As we transition from the assessment of learning to assessment for learning, it may be time to further advance to the concept of assessment as learning, where students become active agents in their own education by using AI appropriately. This approach can help them develop sustainable learning skills that will support them long after graduation.