Detroit, US, 19-20 May 2025
The International Workshop on EXplainable, Trustworthy, and Responsible AI and Multi-Agent Systems (EXTRAAMAS) has run since 2019 and is a well-established forum for discussing and disseminating research on explainable artificial intelligence, with a particular focus on intra/inter-agent explainability and cross-disciplinary perspectives. For its 7th edition, EXTRAAMAS 2025 identifies four focus topics, with the ultimate goal of strengthening cutting-edge foundational and applied research.
1. XAI Fundamentals.
EXTRAAMAS encourages the submission of seminal and visionary research papers.
2. XAI in Action: Applied Perspectives.
EXTRAAMAS explicitly encourages the submission of applied research and demo papers.
3. Cross-disciplinary Perspectives: XAI and Law, Dialogs, GenAI and Prompting, ... .
Title: "Substantial fairness in AI ethics: the path forward"
Speaker: Simona Tiribelli, Professor at
Abstract: Prominent debates in AI ethics contend that biases are one of the main problems to be eradicated in order to design fair AI systems, leading the community to develop many technical and non-technical methods focused on eliminating them to create fairer AI tools. However, such methods very often prove inadequate or counterproductive for the design of fairer AI systems, failing to foster social justice as fairness through AI, especially in the domain of healthcare. As I will argue in this talk, one of the main problems is a procedural and merely mathematical understanding of fairness, which leads to a passive understanding of biases as mere errors and technical bugs in AI systems, and of the related solutions as mainly aimed at their simple removal.
To address this problem, this talk will go beyond the mathematical and procedural conception of fairness, drawing conceptual insights from moral philosophy, to show what substantial fairness means and how to achieve it by leveraging responsible multi-agent architectures and explanations. In particular, through a case study on gender and ethnic biases in healthcare AI, this talk will show how a substantial account of fairness helps harness bias in developing multi-agent-architecture-based AI systems to decode patterns of unfairness and, drawing on ethical theories of affirmative action, to generate novel compensatory design actions for the development of truly fair healthcare AI systems, capable of addressing longstanding unfair inequalities in healthcare and thereby promoting, through AI, better and more just healthcare ecosystems.
Title: "Explanations in Responsible Autonomy: Trustworthy AI, Norm Deviation, and
Consent"
Speaker: Munindar P. Singh, Professor of Computer Science, North Carolina State University
Abstract: This talk will summarize some of our recent conceptual work on responsible autonomy, highlighting the opportunities and challenges for explanations. This work steps back from computational models of morality to apply insights from philosophy and social psychology to responsible autonomy. One challenge is to understand when responsible autonomy involves respecting or deviating from norms. We show how Habermas's notion of objective, subjective, and practical validity criteria can be used to understand norm deviation. We apply case law from the US, UK, and Canada as a source of empirical knowledge about when norm deviations are legitimate. We apply the Habermasian framework as a basis for conceiving of consent. We adapt a model of trust based on the components of ability, benevolence, and integrity to show what trustworthiness involves, again applying case law as a source of empirical intuitions about how to evaluate AI agents. These models can provide a basis for effective and meaningful explanations as integral to responsible autonomy.
This, of course, comes in addition to the main theme of the workshop, focused as usual on XAI fundamentals; the four tracks for this year are listed above.
Workshop CfP
All accepted papers are eligible for publication in Springer's Lecture Notes in Artificial Intelligence (LNAI) proceedings (after revisions have been applied).