Speaker: Bertram Malle, Brown University
Title: From Explanation to Justification to Trust in Human-Machine Interaction
Abstract:
Explanations are important to people’s understanding of other agents (human or machine). But when an agent commits a norm-violating act, explanations are often insufficient. Justifications are needed. I will examine the cognitive and social nature of explanations, the cognitive, social, and normative nature of justifications, and probe what implications each of them has for trust in other agents (human or machine).
Speaker: Serena Villata, CNRS
Title: Towards Natural Language Explanatory Argument Generation: Achieved Results and Open Challenges
Abstract:
Providing high-quality explanations for AI predictions is a challenging task. It requires, among other elements, selecting a proper level of generality/specificity for the explanation, referring to the specific elements that contributed to the decision, and providing evidence supporting negative hypotheses. In this talk, I will present some results achieved in the area of Argument Mining and Argument Generation, and discuss how these results can be exploited – in general and in legal applications – to generate high-quality explanatory dialogues crucially based on argumentation mechanisms.
2021
Speaker: Julie Shah, MIT
Title: Social and Ethical Responsibilities of Computing and the Role of Explainability and Transparency
Abstract:
This talk presents frameworks for considering social and ethical implications in technology conception, implementation, and deployment. Specifically, I discuss frameworks for identifying sources of bias in machine learning systems, recent methods for transparency and interpretability of ML models and their limitations, and the role of problem and value framing as a key leverage point for shaping technology for human benefit.
Speaker: Dov Gabbay, King's College London
Title: Explainable Reasoning in Face of Contradictions: From Humans to Machines
Abstract:
A well-studied trait of human reasoning and decision-making is the ability not only to make decisions in the presence of contradictions, but also to explain why a decision was made, in particular when the decision deviates from what is expected by an inquirer who requests the explanation.
In this talk, I examine this phenomenon, which has been extensively explored by behavioral economics research, from the perspective of symbolic artificial intelligence.
2020
Speaker: Prof. Tim Miller, University of Melbourne
Title: Explainable artificial intelligence: beware the inmates running the asylum (or How I learnt to stop worrying and love the social and behavioural sciences)
Abstract:
In his seminal book The Inmates are Running the Asylum: Why High-Tech Products Drive Us Crazy and How to Restore the Sanity, Alan Cooper argues that a major reason why software is often poorly designed (from a user perspective) is that programmers are in charge. As a result, programmers design software that works for themselves, rather than for their target audience; a phenomenon he refers to as the ‘inmates running the asylum’. In this talk, I argue that explainable AI risks a similar fate if AI researchers and practitioners do not take a multi-disciplinary approach to explainable AI. I further assert that to do this, we must understand, adopt, implement, and improve models from the vast and valuable bodies of research in philosophy, psychology, and cognitive science; and focus evaluation on people instead of just technology. I paint a picture of what I think the future of explainable AI will look like if we go down this path.
Short Bio: Tim is an associate professor of computer science in the School of Computing and Information Systems at The University of Melbourne, and Co-Director of the Centre for AI and Digital Ethics. His primary area of expertise is artificial intelligence, with particular emphasis on human-AI interaction and collaboration and on Explainable Artificial Intelligence (XAI). His work sits at the intersection of artificial intelligence, interaction design, and cognitive science/psychology.
2019
Speaker: Kary Främling, Umeå University / Aalto University
Title: Explainable AI - history, present and the future
Abstract:
The need for explainability in AI systems has been recognized as a prerequisite for acceptance almost since the beginning of AI. Research activity around explainability has come in waves, following trends in the popularity of AI. Despite the emergence of the new name Explainable AI (XAI), most of the challenges identified for XAI remain the same as before. The talk will give an overview of the history of XAI, the current trends, and the main challenges, along with some guesses about what the future of XAI may look like. It will also outline how the XAI workshop is expected to help the XAI domain progress.
Speaker: Michael Winikoff, University of Otago
Title: Explaining Cognitive Autonomous Agents: Directions and Challenges
Abstract:
It is important that autonomous agents are able to explain their selected course of action. Such explanation can help humans develop an appropriate level of trust in the agent, and can improve the transparency and understanding of the agent, its capabilities, and its limitations. In this talk I review some recent work in the area, focussing on cognitive agents, i.e., agents structured in terms of folk-psychological constructs such as goals and plans. I pose some questions and challenges, including the issue of bridging between cognitive (symbolic) systems and non-symbolic systems.
Presentations
(slides and videos)
2022
EXPLAINABLE ML/DL
Evaluation of importance estimators in deep learning classifiers for Computed Tomography Authors: Lennart Brocki, Wistan Marchadour, Jonas Maison, Bogdan Badic, Panagiotis Papadimitroulas, Mathieu Hatt, Franck Vermet and Neo Christopher Chung
Recent Neural-Symbolic Approaches to ILP Based on Templates Authors: Davide Beretta, Stefania Monica and Federico Bergenti.
On the Design of PSyKI: a Platform for Symbolic Knowledge Injection into Sub-Symbolic Predictors Authors: Matteo Magnini, Giovanni Ciatto and Andrea Omicini.
Integration of local and global features explanation via CIU and explainable layers for improving global rules generation in ECLAIRE Authors: Victor Hugo Contreras Ordoñez, Michael Schumacher and Davide Calvaresi.
ReCCoVER: Detecting Causal Confusion for Explainable Reinforcement Learning Authors: Jasmina Gajcin and Ivana Dusparic.
Smartphone-based grape leaf disease diagnosis and remedial system assisted with explanations Authors: Avleen Malhi, Vlad Apopei, Manik Madhikermi, Kary Främling and Mandeep K.
Explainability Metrics and Properties for Counterfactual Explanation Methods Authors: Vandita Singh, Kristijonas Cyras and Rafia Inam.
The Mirror Agent Model: a Bayesian Architecture for Interpretable Agent Behavior Authors: Michele Persiani and Thomas Hellström.
Case-based reasoning via comparing the strength order of features Authors: Liuwen Yu and Dov Gabbay.
The use of partial order relations and measure theory in developing objective measures of explainability Authors: Wim De Mulder.
Semantic Web-based Interoperability for Intelligent Agents with PSyKE Authors: Federico Sabbatini, Giovanni Ciatto and Andrea Omicini.
(X)AI and Law
Risk and Exposure of XAI in Persuasion and Argumentation: The case of Manipulation Authors: Rachele Carli, Amro Najjar and Davide Calvaresi.
An Evaluation of Methodologies for Legal Formalization Authors: Tereza Novotná and Tomer Libal.
Requirements for Tax XAI under Constitutional Principles and Human Rights Authors: Błażej Kuźniacki, Marco Almada, Kamil Tyliński and Łukasz Górski.
2021
XAI & ML
To Pay or Not to Pay Attention: Classifying and Interpreting Visual Selective Attention using Frequency Features Authors: Lora Fanda, Yashin Dicente Cid, Pawel Matusz and Davide Calvaresi
GridEx: An Algorithm for Knowledge Extraction from Black-Box Regressors Authors: Federico Sabbatini, Giovanni Ciatto and Andrea Omicini
Comparison of Contextual Importance and Utility with LIME and Shapley Values / ciu.image: an R package for Explaining Image Classification with Contextual Importance and Utility Authors: Kary Främling, Samanta Knapic, Marcus Westberg, Martin Jullum, Manik Madhikermi and Avleen Malhi
Shallow2Deep: Restraining Neural Networks Opacity through Neural Architecture Search Authors: Andrea Agiollo, Giovanni Ciatto and Andrea Omicini
Towards Explainable Recommendations of Resource Allocation Mechanisms in On-Demand Transport Fleets Authors: Alaa Daoud, Hiba Alqasir, Yazan Mualla, Amro Najjar, Gauthier Picard and Flavien Balbo
A Two-Dimensional Explanation Framework to Classify AI as Incomprehensible, Interpretable or Understandable Authors: Ruben Verhagen, Mark Neerincx and Myrthe Tielman
Towards Explainable Visionary Agents: License to Dare and Imagine Authors: Giovanni Ciatto, Amro Najjar, Jean-Paul Calbimonte and Davide Calvaresi
Towards an XAI-assisted third-party evaluation of AI systems Authors: Yongxin Zhou, Matthieu Boussard and Agnes Delaborde
What does it cost to deploy an XAI system? A case study in legacy systems Authors: Sviatlana Höhn and Niko Faradouris
Industry Panel
Panelists: Johanna Björklund (CTO, Adlede), Tathagata Chakraborti (Researcher, IBM Research), Kristijonas Cyras (Researcher, Ericsson Research), Elizabeth Sklar (Research Director, Lincoln Agri-Robotics)
XAI Applications
Explainable AI (XAI) models applied to the multi agents environment of financial markets
Authors: Jean Jacques Ohana, Steve Ohana, Eric Benhamou, David Saltiel and Beatrice Guez
XAI & Human Synergies to Explain the History of Art
Authors: Egberdien van der Peijl, Yazan Mualla, Thiago Jorge Bourscheid, Sana Nouzri, Yolanda Spinola Elias, Amro Najjar, and Daniel Karpati
Assessing Explainability in Reinforcement Learning
Authors: Amber Zelvelder, Marcus Westberg and Kary Främling
Visual Explanations for DNNs with Contextual Importance
Authors: Sule Anjomshoae, Lili Jiang and Kary Främling
XAI Logic and Argumentation
Schedule Explainer: An Argumentation-supported Tool for Interactive Explanations in Makespan Scheduling Authors: Kristijonas Cyras, Myles Lee and Dimitrios Letsios
Towards Explainable Practical Agency: A Logical Perspective Authors: Nourhan Ehab and Haythem Ismail
Towards Transparent Legal Formalization Authors: Tomer Libal and Tereza Novotná
Game-based Argumentation Framework for Explanation Authors: You Cheng, Beishui Liao and Jieting Luo
Panel: Distributed intelligent systems and XAI
Panelists: Andrea Omicini, University of Bologna; Reyhan Aydogan, Özyeğin University; Leon Van der Torre, University of Luxembourg.
2020
Explainable Agents
Agent-Based Explanations in AI: Towards an Abstract Framework Authors: Giovanni Ciatto, Michael I. Schumacher, Andrea Omicini and Davide Calvaresi
Agent EXPRI: Licence to Explain Authors: Francesca Mosca, Stefan Sarkadi, Jose M. Such and Peter McBurney
In-time Explainability in Multi-Agent Systems: Challenges, Opportunities, and Roadmap Authors: Francesco Alzetta, Paolo Giorgini, Amro Najjar, Michael Schumacher and Davide Calvaresi
Cross Disciplinary XAI
Decision Theory meets Explainable AI Authors: Kary Främling
Towards the Role of Theory of Mind in Explanation Authors: Maayan Shvo, Toryn Q. Klassen and Sheila A. McIlraith
A Situation Awareness-Based Framework for Design and Evaluation of Explainable AI Authors: Lindsay Sanneman and Julie Shah
Explainable Machine Learning
Demystifying Subliminal Persuasiveness - Using XAI-Techniques to Highlight Persuasive Markers of Public Speeches Authors: Klaus Weber, Lukas Tinnes, Tobias Huber, Alexander Heimerl, Eva Pohlen, Marc-Leon Reinecker and Elisabeth André
Explainable Agents for less Bias in Human-Agent Decision Making Authors: Avleen Malhi, Samanta Knapic and Kary Främling
Demos
Explainable Agents as Static Web Pages: A UAV Simulation Example Authors: Yazan Mualla, Timotheus Kampik, Igor H. Tchappi, Amro Najjar, Stéphane Galland and Christophe Nicolle
2019
Session 1: Explainable Agents
Toward Robust Summarization of Agent Policies Authors: Isaac Lage, Daphna Lifschitz, Finale Doshi-Velez and Ofra Amir
How Cognitive Science Impacts AI and What We Can Learn From It Authors: Marcus Westberg, Amber Zelvelder and Amro Najjar
Session 2: Explainable Robots
Explainable Multi-Agent Systems through Blockchain Technology
Authors: Davide Calvaresi, Yazan Mualla, Amro Najjar, Stéphane Galland and Michael Schumacher
Explaining Sympathetic Actions of Rational Agents
Authors: Timotheus Kampik, Juan Carlos Nieves and Helena Lindgren
Conversational Interfaces for Explainable AI: A Human-Centered Approach
Authors: Sophie F. Jentzsch, Sviatlana Höhn and Nico Hochgeschwender
Intent Classification in Maritime Domains with Multinomial HMMs
Authors: Logan Carlson, Dalton Navalta, Monica Nicolescu, Mircea Nicolescu and Gail Woodward
Temporal Multiagent Plan Execution: Explaining what Happened
Authors: Gianluca Torta, Roberto Micalizio and Samuele Sormano
Session 3: Explainable AI: Overview
Explainability in Human-Agent Systems Authors: Avi Rosenfeld and Ariella Richardson
Session 4: Explanation & Transparency
Beyond obscurantism and illegitimate curiosity: how to be transparent only with a restricted set of trusted agents
Authors: Nicolas Cointe, Amineh Ghorbani and Caspar Chorus
Effects of Agents' Transparency on Teamwork Authors: Silvia Tulli, Filipa Correia, Samuel Mascarenhas, Samuel Gomes and Ana Paiva
Session 5: Argumentation & Explainability
Towards a transparent deep ensemble method based on multiagent argumentation Authors: Naziha Sendi, Nadia Abchiche-Mimouni and Farida Zehraoui
Explainable Argumentation for Wellness Consultation Authors: Isabel Sassoon, Elizabeth Sklar, Nadin Kökciyan and Simon Parsons
Session 6: Opening the Black Box
Explanations of Black-Box Model Predictions by Contextual Importance and Utility Authors: Sule Anjomshoae, Kary Främling and Amro Najjar
Explainable Artificial Intelligence based Heat Recycler Fault Detection in Air Handling Unit Authors: Manik Madhikermi, Avleen Malhi and Kary Främling
Session 7: Explainable Agent Simulations
Explaining Aggregate Behaviour in Cognitive Agent Simulations using Explanation Authors: Tobias Ahlbrecht and Michael Winikoff
BEN: An Agent Architecture for Explainable and Expressive Behavior in Social Simulation Authors: Mathieu Bourgais, Patrick Taillandier and Laurent Vercouter
Organizations
2022
Program Chairs
Davide Calvaresi, Amro Najjar, Kary Främling, Michael Winikoff
Special Tracks Chairs
Réka Markovich and Giovanni Ciatto
Publicity Chairs
Yazan Mualla, Rachele Carli, Benoit Alcaraz
Program Committee
Natasha Alechina, Amr Alyafi, Cleber Jorge Amaral, Kim Baraka, Suna Bensch, Olivier Boissier, Grégory Bonnet, Joost Broekens, Jean-Paul Calbimonte, Tathagata Chakraborti, Nicolas Cointe, Kristijonas Cyras, Jérémie Dauphin, Alaa Daoud, Lora Fanda, Michael W. Floyd, Stephane Galland, Maike Harbers, Brent Harrison, Salima Hassas, Helen Hastie, Thomas Hellström, Koen Hindriks, Sviatlana Höhn, Isaac Lage, Beishui Liao, Tomer Libal, Brian Lim, Daniele Magazzeni, Jean-Guy Mailly, Avleen Malhi, Réka Markovich, Viviana Mascardi, Laëtitia Matignon, Juan Carlos Nieves Sanchez, Sana Nouzri, Andrea Omicini, Marina Paolanti, Gauthier Picard, Patrick Reignier, Francisco Javier Rodríguez Lera, Stefan Sarkadi, Giovanni Sartor, Sarath Sreedharan, Cesar A. Tacla, Paolo Sernani, Silvia Tulli, Rob Wortham, Deshraj Yadav, Jessie Yang, Jamal Barafi, Andrea Agiollo, Igor Tchappi, Monica Palmirani, Victor Contreras, Christopher Leturc, Federico Sabbatini, Matteo Magnini, Rachele Carli, Remy Chaput, Katie Atkinson, Timotheus Kampik, Giovanni Sileno, Bart Verheij, and Michal Araszkiewicz.
2021
Program Chairs
Davide Calvaresi, Amro Najjar, Kary Främling, Michael Winikoff
Special Tracks Chairs
Timotheus Kampik
Publicity Chairs
Sviatlana Höhn, Giovanni Ciatto
Program Committee
Natasha Alechina, Amr Alyafi, Cleber Jorge Amaral, Kim Baraka, Suna Bensch, Grégory Bonnet, Joost Broekens, Jean-Paul Calbimonte, Tathagata Chakraborti, Nicolas Cointe, Kristijonas Cyras, Jérémie Dauphin, Alaa Daoud, Dustin Dannenhauer, Lora Fanda, Michael W. Floyd, Stephane Galland, Önder Gürcan, Maike Harbers, Brent Harrison, Salima Hassas, Helen Hastie, Thomas Hellström, Koen Hindriks, Sviatlana Höhn, Isaac Lage, Beishui Liao, Tomer Libal, Brian Lim, Avleen Malhi, Niccolò Maltoni, Réka Markovich, Viviana Mascardi, Laëtitia Matignon, Yazan Mualla, Juan Carlos Nieves Sanchez, Sana Nouzri, Andrea Omicini, Marina Paolanti, Gauthier Picard, Patrick Reignier, Francisco Javier Rodríguez Lera, Stefan Sarkadi, Giovanni Sartor, Sarath Sreedharan, Cesar A. Tacla, Silvia Tulli, Deshraj Yadav, Jessie Yang.
2020
Program Chairs
Davide Calvaresi, Amro Najjar, Kary Främling, Michael Winikoff
Publicity Chairs
Yazan Mualla, Timotheus Kampik, Giovanni Ciatto
Program Committee
Andrea Omicini, Ofra Amir, Olivier Boissier, J. Carlos N. Sanchez, Tathagata Chakraborti, Salima Hassas, Gauthier Picard, Jean-Guy Mailly, Aldo F. Dragoni, Patrick Reignier, Stephane Galland, Grégory Bonnet, Jean-Paul Calbimonte, Sarath Sreedharan, Laëtitia Matignon, Daniele Magazzeni, Cesar A. Tacla, Avleen Malhi, Stefano Bromuri, Rob Wortham, Suna Bensch, Timotheus Kampik, Yazan Mualla, Nicola Falcionelli, Önder Gürcan, Stefan Sarkadi, Silvia Tulli, Jérémie Dauphin, Francisco J. Rodríguez Lera, Julian Alfredo Mendez, Prashan Mathugama Babun Apuhamilage, Sviatlana Höhn, Isaac Lage.
2019
Program Chairs
Davide Calvaresi, Amro Najjar, Kary Främling, Michael Schumacher
Publicity Chairs
Yazan Mualla, Timotheus Kampik, Amber Zelvelder
Program Committee
Andrea Omicini, Ofra Amir, Joost Broekens, Olivier Boissier, J. Carlos N. Sanchez, Tathagata Chakraborti, Salima Hassas, Gauthier Picard, Jean-Guy Mailly, Aldo F. Dragoni, Patrick Reignier, Stephane Galland, Laurent Vercouter, Helena Lindgren, Grégory Bonnet, Jean-Paul Calbimonte, Sarath Sreedharan, Brent Harrison, Koen Hindriks, Laëtitia Matignon, Simone Stumpf, Michael W. Floyd, Kim Baraka, Dustin Dannenhauer, Daniele Magazzeni, Cesar A. Tacla, Mor Vered, Ronal Singh, Divya Pandove, Husanbir Singh Pannu, Avleen Malhi, Stefano Bromuri, Rob Wortham, Fabien Dubosson, Giuseppe Albanese, Suna Bensch, Timotheus Kampik, Flavien Balbo, Yazan Mualla, Nicola Falcionelli, A. El Fallah Seghrouchni, Önder Gürcan, Stefan Sarkadi