Speaker: Prof. Tim Miller, University of Melbourne
Title: Explainable artificial intelligence: beware the inmates running the asylum (or How I learnt to stop worrying and love the social and behavioural sciences)
Abstract: In his seminal book The Inmates are Running the Asylum: Why High-Tech Products Drive Us Crazy And How To Restore The Sanity, Alan Cooper argues that a major reason why software is often poorly designed (from a user perspective) is that programmers are in charge. As a result, programmers design software that works for themselves rather than for their target audience; a phenomenon he refers to as the ‘inmates running the asylum’. In this talk, I argue that explainable AI risks a similar fate if AI researchers and practitioners do not take a multi-disciplinary approach to explainable AI. I further assert that to do this, we must understand, adopt, implement, and improve models from the vast and valuable bodies of research in philosophy, psychology, and cognitive science; and focus evaluation on people instead of just technology. I paint a picture of what I think the future of explainable AI would look like if we went down this path.
Short Bio: Tim is an associate professor of computer science in the School of Computing and Information Systems at The University of Melbourne, and Co-Director of the Centre for AI and Digital Ethics. His primary area of expertise is artificial intelligence, with particular emphasis on human-AI interaction and collaboration and Explainable Artificial Intelligence (XAI). His work is at the intersection of artificial intelligence, interaction design, and cognitive science/psychology.
Speaker: Kary Främling, Umeå University / Aalto University
Title: Explainable AI - history, present and the future
Abstract: The need for explainability in AI systems has been identified as a necessity for acceptance almost since the beginning of AI. Research activity around explainability has come in waves, following the popularity and trends in AI. Despite the emergence of the new name Explainable AI (XAI), most of the challenges identified for XAI remain the same as before. The talk will give an overview of the history of XAI, the current trends, and the main challenges, along with some guesses as to what the future of XAI might look like. There will also be an overview of how the XAI workshop is expected to advance the XAI domain.
Speaker: Michael Winikoff, University of Otago
Title: Explaining Cognitive Autonomous Agents: Directions and Challenges
Abstract: It is important that autonomous agents are able to explain their selected course of action. Such explanation can help humans develop an appropriate level of trust in the agent, and can improve the transparency and understanding of the agent, its capabilities, and its limitations. In this talk I review some recent work in the area, focussing on cognitive agents, i.e. agents structured in terms of folk-psychological constructs such as goals and plans. I pose some questions and challenges, including the issue of bridging between cognitive (symbolic) systems and non-symbolic systems.