CogSci seminar series: “In Models we Trust: Reasoning about Truth with NLP”
Speaker: Michael Schlichtkrull
Date/time: Wednesday, 25 June 2025, 11:00 AM - 12:00 PM
Location: Peter Landin Room 4.24, 4th floor
Title: In Models we Trust: Reasoning about Truth with NLP
Abstract: From retrieval-augmented generation to Deep Research agents, models that retrieve, process, and reason about external documents have become a key research direction. Unfortunately, not everything on the internet is true. In this talk, I discuss how malicious actors can misinform models, making them unwitting accomplices against their users. Human experts overcome this challenge by gathering signals about the veracity, context, reliability, and tendency of documents - that is, they fact-check, and they perform source criticism. I argue that models should do the same. I present findings from our recent AVeriTeC shared task, showcasing the state of automated fact-checking. I also introduce the novel NLP task of finding and summarising indicators of reliability, rather than truthfulness: automated source criticism. I argue that these are critical defence mechanisms for knowledge-seeking AI agents.
This will be followed by an informal lunch at the Curve, which everyone is welcome to join. This semester the CogSci seminar series will be held in person only and will not be broadcast electronically.