
How to open an algorithmic black box? The multiple ways of making transparency: from source code to counterfactual examples

When: Wednesday, February 24, 2021, 5:00 PM - 6:30 PM
Where: Online

About this Event

Katja de Vries is an assistant professor in public law at Uppsala University (Sweden). She is also affiliated with the Swedish Law and Informatics Research Institute (Stockholm), the Center for Law, Science, Technology and Society (Brussels), and the Department of Sociology of Law (Lund). Her current research focuses on the challenges that AI-generated content ('deepfakes' or 'synthetic data') poses to data protection, intellectual property and other fields of law.

During the 2010s, the growing ubiquity of algorithmic decision making (ADM) was accompanied by an increasing importance attached to the ideals of fairness, accountability and transparency (FAT). The last two ideals in this triad are often conflated: transparency is seen as a way to operationalise accountability. The underlying assumption is that transparent insight into the workings of an ADM system acts as a cleansing ray of light that stimulates the creation of responsible, fair and sustainable systems and makes it possible to hold their creators accountable for faults and biases. This assumption is problematic on many levels.

Firstly, it is highly equivocal which part of an ADM system has to be unveiled to realise “true” transparency. Secondly, perception cannot be equated with understanding. Thirdly, neither perception nor understanding automatically results in the empowerment of the subjects of ADM systems. In fact, interpretability and transparency tools can give a false sense of reliability to ADM systems, resulting in disempowerment. Consequently, accountability does not automatically follow from transparency. Transparency is an empty shell if existing power structures prevent individuals from acting on it. Opening an algorithmic black box remains an empty gesture if the necessity of the ADM system and its consequences are not questioned.

In this seminar I argue that actionable and empowering transparency, instead of aiming to show underlying ADM mechanisms as they “really” are, can benefit from AI-generated data and machine imagination. Empowerment and accountability are deeply connected to the ability to imagine things differently and to come up with parallel histories and realities. Transparency is thus conceptualised not as factual (“How does this really work?”) but as counterfactual (Wachter et al., 2018): “What would happen if…?” and “How could it work differently?”.
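To make the counterfactual framing concrete, here is a minimal sketch in Python. The linear credit-scoring model, its weights and the feature names are illustrative assumptions, not taken from the talk or from Wachter et al. (2018); the sketch only shows the general idea of searching for the smallest change to an input that flips a model's decision.

```python
# A minimal sketch of a counterfactual explanation in the sense of
# Wachter et al. (2018): the smallest change to an input that flips a
# model's decision. The linear credit-scoring model, its weights and
# the feature names below are illustrative assumptions.
import numpy as np

w = np.array([0.6, 0.3, -0.4])           # weights for income, tenure, debt
b = -1.0
features = ["income", "tenure", "debt"]

def decision(x):
    return w @ x + b > 0                 # True = loan approved

def counterfactual(x, step=0.01, max_iter=10_000):
    """Nudge x along the score gradient until the outcome flips."""
    original = decision(x)
    # Move toward the boundary: up the score if denied, down if approved.
    direction = (w if not original else -w) / np.linalg.norm(w)
    cf = x.astype(float).copy()
    for _ in range(max_iter):
        if decision(cf) != original:
            return cf                    # for a linear model, this is the
        cf += step * direction           # closest (smallest L2) counterfactual
    return None

x = np.array([1.0, 0.5, 1.5])            # an applicant who is denied
cf = counterfactual(x)
print("counterfactual approved:", bool(decision(cf)))
for name, old, new in zip(features, x, cf):
    print(f"  {name}: {old:.2f} -> {new:.2f}")
```

Run on the denied applicant above, the search returns a nearby input that would have been approved; the difference between the two answers the question “What would have had to be different?” without opening the model itself.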

Facts can fall short of reality, and fabricated realities can be more representative of reality than so-called ‘real’ realities. I explore how counterfactual synthetic data can play a role both in empowering the subjects of ADM systems and in an enlightened, aspirational training of those systems. The use of synthetic data is, however, not a panacea for realising actionable and empowering transparency. In particular, I will look more closely at the Rashomon problem: the multiplicity of counterfactuals that all tell convincing yet different stories of how a certain classification or decision was made.
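The Rashomon problem can likewise be illustrated with a toy example. For the same hypothetical loan denial as above, several equally valid counterfactuals exist, each changing a different feature; the model and feature names are again illustrative assumptions.

```python
# Toy illustration of the Rashomon problem: several distinct counterfactuals,
# each changing only one feature, all flip the same hypothetical loan denial.
# The linear model and the feature names are illustrative assumptions.
import numpy as np

w = np.array([0.6, 0.3, -0.4])           # weights for income, tenure, debt
b = -1.0
features = ["income", "tenure", "debt"]
x = np.array([1.0, 0.5, 1.5])            # a denied applicant: w @ x + b < 0

def decision(x):
    return w @ x + b > 0                 # True = loan approved

score = w @ x + b                        # how far below the boundary x sits
for i, name in enumerate(features):
    cf = x.astype(float).copy()
    cf[i] -= score / w[i] * 1.01         # just cross the boundary on axis i only
    print(f"change only {name}: {np.round(cf, 2)} -> approved = {bool(decision(cf))}")
```

Each of these counterfactuals is a true answer to “what would have changed the outcome?”, yet they point to different features and suggest different remedies — precisely the multiplicity of convincing but divergent stories that the Rashomon problem names.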

Join Katja for her presentation, which will be chaired by Chris Reed, Professor of Electronic Commerce Law at Queen Mary University of London, with comments from Keri Grieman.

Please register above; a Zoom link to access the event will be sent to you the day before it takes place.
