London, UK, 29 May 2023
The 5th International Workshop on EXplainable and TRAnsparent AI and Multi-Agent Systems (EXTRAAMAS) has been running since 2019 and is a well-established workshop and forum. It aims to discuss and disseminate research on explainable artificial intelligence, with a particular focus on intra/inter-agent explainability and cross-disciplinary perspectives. For its 5th edition, EXTRAAMAS 2023 identifies four focus topics, with the ultimate goal of strengthening cutting-edge foundational and applied research.
1. XAI Fundamentals.
EXTRAAMAS encourages the submission of seminal and visionary research papers.
2. XAI in Action: Applied perspectives.
EXTRAAMAS explicitly encourages the submission of applied research and demo papers.
3. XAI and Law: Cross-disciplinary perspectives.
All accepted papers are eligible for publication in the Springer Lecture Notes in Artificial Intelligence (LNAI) conference proceedings (after revisions have been applied).
Track 1: XAI in symbolic and subsymbolic AI: the “AI dichotomy” separating symbolic (a.k.a. classical) AI from connectionist AI has persisted for more than seven decades. Nevertheless, the advent of explainable AI has accelerated and intensified efforts to bridge this gap, since providing faithful explanations of black-box machine learning techniques necessarily means combining symbolic and subsymbolic AI. This track aims to discuss recent work on this hot topic in AI.
Track 1 chair: Dr. Giovanni Ciatto.
XAI for machine learning
Explainable neural networks
Symbolic knowledge injection or extraction
Neuro-symbolic computation
Computational logic for XAI
Multi-agent architectures for XAI
Surrogate models for sub-symbolic predictors
Explainable planning (XAIP)
XAI evaluation
Track 2: XAI in negotiation and conflict resolution: Conflict resolution (e.g., agent-based negotiation, voting, argumentation) has been a flourishing domain within the MAS community since its foundation. However, as agents and the problems they tackle become more complex, incorporating explainability becomes vital for assessing the usefulness of supposedly conflict-free solutions. This is the main topic of this track, with a special focus on MAS negotiation and explainability.
Track 2 chair: Dr. Reyhan Aydogan.
Explainable conflict resolution techniques/frameworks
Explainable negotiation protocols and strategies
Explainable recommendation systems
Trustworthy voting mechanisms
Argumentation for explaining the process itself
Argumentation for explaining and supporting the potential outcomes
Explainable user/agent profiling (e.g., learning users' preferences or strategies)
User studies and assessment of the aforementioned approaches
Applications (virtual coaches, robots, IoT)
Track 3: Explainable robots and practical applications: Explainable robots have been one of the main topics of XAI for several years. This track's main interest is publishing the latest work focusing notably on (i) the impact of embodiment on explanation, (ii) explainability for remote robots, (iii) how humans receive and perceive explanations given by robots, and (iv) practical XAI applications and simulations.
Track 3 chair: Dr. Yazan Mualla.
Explainable remote robots
Explainability and embodiment
Practical XAI applications
Emotions in XAI
Perception in XAI
Human-Computer Interaction (HCI) studies
Explanation communication and reception
Agent simulations and XAI
Track 4: (X)AI in Law and Ethics: complying with regulation (e.g., the GDPR) is among the main objectives for XAI. The right to explanation is key to ensuring the transparency of ever more complex AI systems dealing with a multitude of sensitive applications. This track discusses work related to explainability in AI ethics, machine ethics, and AI & law.
Track 4 chair: Rachele Carli.
XAI in AI & Law
Fair (X)AI
XAI & Machine Ethics
Bias reduction
Deception and XAI
Nudging and XAI
Legal issues of XAI
Liability and XAI
XAI, Transparency, and the Law
Enforceability and XAI
Culture-aware systems and XAI
The JAAMAS Special Issue is out!
The tentative program is out! (01.04.2023)
Submission deadline extended (10.03.2023)
Keynote: Jeremy Pitt, “Untrustworthy AI” (29.05.2023)
Website is updated! (21.12.2022)