Venue: ExCeL Exhibition Centre, London, UK
Room: South Gallery 9+10
[8.30 - 9.00] - Logistics and room check
[9.00 - 9.13] - Opening Speech
[9.13 - 9.15] - Keynote speaker introduction
[9.15 - 9.50] - Keynote by Jeremy Pitt
Title: Untrustworthy AI
Abstract: The Digital Transformation to the Digital Society entails the increasing use of digital tools and technologies in the digitalisation of social and organisational processes and structures. Artificial Intelligence (AI) has a major role to play in this transformative process and the resulting systems. However, despite the plethora of ethical guidelines, design methodologies, and international standards, AI is too often used as a tool for abstracted power and abnegated responsibility. This talk will consider a range of serious threats in the form of "Untrustworthy AI", which can potentially bring about a kind of "digital feudalism" or "techno-feudalism" and diminish the very essence of "being human" during and after the digital transformation. It will be argued that to restore accountability, what is required is not just "explainable AI", but "justifiable AI".
[9.50 - 10.00] - Q&A
[10.00 - 10.45] - Coffee Break
Session 1 - Explainable Agents: 10.45 - 12.30 BST
Session Chairs: Dr. Reyhan Aydoğan & Prof. Kary Främling
[10.45 - 11.00] - Ahmad Alelaimat, Aditya Ghose and Hoa Khanh Dam.
Mining and Validating Belief-based Agent Explanations
[11.00 - 11.15] - Michael Winikoff and Galina Sidorenko.
Evaluating a Mechanism for Explaining BDI Agent Behaviour
[11.15 - 11.30] - Balint Gyevnar, Cheng Wang, Christopher G. Lucas, Shay B. Cohen and Stefano V. Albrecht.
Causal Social Explanations for Stochastic Sequential Multi-Agent Decision-Making
[11.30 - 11.45] - Giovanni Ciatto, Matteo Magnini, Berk Buzcu, Reyhan Aydoğan and Andrea Omicini.
A General-Purpose Protocol for Multi-Agent Based Explanations
[11.45 - 12.00] - Joris Hulstijn, Igor Tchappi, Amro Najjar and Reyhan Aydoğan.
Metrics for Evaluating Explainable Recommender Systems
[12.00 - 12.15] - Yifan Xu, Joe Collenette, Louise Dennis and Clare Dixon.
Dialogue Explanations for Rules-based AI Systems
[12.15 - 12.30] - Saaduddin Mahmud, Samer Nashed, Claudia Goldman and Shlomo Zilberstein.
Estimating Causal Responsibility for Explaining Autonomous Behavior
[12.30 - 14.00] - Lunch Break
Session 2 - Explainable Machine Learning: 14.00 - 16.45 BST
Session Chairs: Dr. Giovanni Ciatto & Dr. Amro Najjar
[14.00 - 14.15] - Andrea Agiollo, Pradeep Kumar Murukannaiah, Luciano Cavalcante Siebert and Andrea Omicini.
The Quarrel of Local Post-hoc Explainers for Moral Values Classification in Natural Language Processing
[14.15 - 14.30] - Kary Främling.
Counterfactual, Contrastive and Hierarchical Explanations with Contextual Importance and Utility
[14.30 - 14.45] - Federico Sabbatini and Roberta Calegari.
Bottom-Up and Top-Down Workflows for Hypercube- and Clustering-based Knowledge Extractors
[14.45 - 15.00] - Victor Contreras, Andrea Bagante, Niccolò Marini, Michael Schumacher, Vincent Andrearczyk and Davide Calvaresi.
Explanation Generation via Decompositional Rules Extraction for Head and Neck Cancer Classification
[15.00 - 15.15] - Sumanta Dey, Sharat Bhat, Pallab Dasgupta and Soumyajit Dey.
Imperative Action Masking for Safe Exploration in Reinforcement Learning
[15.15 - 15.30] - Fumito Uwano and Keiki Takadama.
Reinforcement Learning in Cyclic Environmental Change for Non-Communicative Agents: A Theoretical Approach
[15.30 - 15.45] - Ahmad Alelaimat, Aditya Ghose and Hoa Khanh Dam.
Leveraging Imperfect Explanations for Plan Recognition Problems
[15.45 - 16.15] - Coffee Break
[16.15 - 16.30] - Matija Franklin, David Lagnado, Chulhong Min, Akhil Mathur and Fahim Kawsar.
Using Cognitive Models and Wearables to Diagnose and Predict Dementia Patient Behaviour
[16.30 - 16.45] - Andreas Kontogiannis and George Vouros.
Inherently Interpretable Deep Reinforcement Learning through Online Mimicking
Session 3 - Explainable AI and Law: 16.45 - 17.15 BST
Session Chairs: Dr. Amro Najjar & Rachele Carli
[16.45 - 17.00] - Balint Gyevnar and Nick Ferguson.
Aligning Explainable AI and the Law: The European Perspective
[17.00 - 17.15] - Rachele Carli and Davide Calvaresi.
Reinterpreting Vulnerability to Tackle Deception in Principles-Based XAI for Human-Computer Interaction
[17.15 - 17.30] - Closing Session
...