EXTRAAMAS 2020: 2nd International Workshop on EXplainable TRansparent Autonomous Agents and Multi-Agent Systems
Link: https://extraamas.ehealth.hevs.ch/

Call For Papers
2nd International Workshop on
EXplainable TRansparent Autonomous Agents and Multi-Agent Systems (EXTRAAMAS 2020)

#Important Dates
Deadline for submissions: 29 February 2020
Notification of acceptance: 10 March 2020
Camera-ready: 1 April 2020
Workshop days: 13-14 May 2020

#Call for Papers
Human decisions increasingly rely on Artificial Intelligence (AI) techniques implementing autonomous decision-making and distributed problem-solving. However, the reasoning and dynamics powering such systems are becoming increasingly opaque. This has raised ethical concerns about the lack of transparency and the need for explainability, and new legal constraints have consequently been defined to enforce transparency and explainability in IT systems. Emphasizing the need for transparency in AI systems, recent studies have pointed out that equipping intelligent systems with explanation abilities has a positive impact on users (e.g., helping to overcome the discomfort, confusion, and self-deception caused by a lack of understanding). Users who can comprehend AI systems form a better mapping between expectation and understanding, thereby increasing their trust in the decisions and behaviors those systems display. Conversely, the absence of explanation may lead users to construct an erroneous Theory of Mind (ToM) of the system, which causes confusion, misunderstanding, and uneasy collaboration. For all these reasons, Explainable Artificial Intelligence (XAI) has recently re-emerged as a crucial topic in AI, attracting research from domains such as machine learning, robot planning, and multi-agent systems.

Agents and Multi-Agent Systems (MAS) can make two core contributions to XAI. The first is in the context of personal intelligent systems providing tailored and personalized feedback (e.g., recommendation and coaching systems).
Autonomous agent and multi-agent approaches have recently achieved notable results and scientific relevance in different research domains (e.g., e-health, UAVs, and smart environments). However, even when correct, the outcomes of such agent-based systems, as well as their impact and effect on users, can be undermined by the lack of clarity and explainability of their dynamics and rationale. If explainable, by contrast, their comprehensibility, reliability, and acceptance can be enhanced. In particular, personal user features (e.g., context, expertise, age, and cognitive abilities), which are already used to compute the outcome, can also be employed in the explanation process, providing a user-tailored solution. The second contribution concerns agent/robot teams and mixed human-agent teams. In this context, successful collaboration requires a mutual understanding of the status of the other agents and users, including their capacities and limitations; this ensures efficient teamwork and avoids potential dangers caused by misunderstandings. In such a scenario, explainability goes beyond single human-agent settings to agent-agent or even mixed human-agent team explainability. Based on the evidence highlighted in the first edition of EXTRAAMAS, new objectives and domains demand attention. For example, there is an emerging need to address the synergy between XAI and ethics, pivoting on explainable cognitive agents (e.g., BDI agents).
Therefore, the purpose of this second "International Workshop on Explainable Intelligence in Autonomous Agents and Multi-Agent Systems" (EXTRAAMAS) is seven-fold:
- to strengthen the common ground among the explainable agents and robots communities,
- to explore the ethical implications among XAI and non-XAI systems and within XAI itself,
- to investigate the potential of agent-based systems in personalized, user-aware XAI,
- to explore the generation of symbolic knowledge from subsymbolic representations,
- to assess the impact of transparent and explained solutions on user/agent behaviors,
- to discuss and motivate concrete applications and contributions overcoming the lack of explainability, and
- to assess and discuss the first solutions paving the way for next-generation systems.

#Topics
Participants are invited to submit papers on all research and application aspects of explainable and transparent intelligence in agents and multi-agent systems in relevant domains (e.g., e-health, smart environments, driving companions, recommender systems, coaching agents, etc.), including, but not limited to:

##Explainable Agents & Robots
- Explainable agent architectures
- Personalized XAI
- Explainable & expressive robots
- Explainable planning
- Explainable human-robot collaboration
- Reinforcement learning agents
- Multi-modal explanation presentation

##XAI & Ethics
- Social XAI
- AI ethics and explainability
- XAI vs AI

##XAI & MAS
- Multi-actor interaction in XAI
- XAI for agent/robot teams
- Simulations for XAI

##Interdisciplinary Aspects
- Cognitive and social sciences perspectives on explanations
- Legal aspects of explainable agents
- Explanation visualization
- HCI for XAI

##XAI, Machine Learning, and Knowledge Representation
- Bridging symbolic and subsymbolic XAI
- Knowledge generation from interpretations
- XAI and argumentation
- Explainable knowledge generation

#Workshop Chairs
Dr. Davide Calvaresi, HES-SO, Switzerland
Dr. Amro Najjar, University of Luxembourg, Luxembourg
Prof. Kary Främling, Umeå University, Sweden, and Aalto University, Finland
Prof. Michael Winikoff, Victoria University of Wellington

#Advisory Board
Dr. Tim Miller, School of Computing and Information Systems, The University of Melbourne
Prof. Leon van der Torre, University of Luxembourg, Luxembourg
Prof. Virginia Dignum, Umeå University, Sweden
Prof. Michael Ignaz Schumacher, HES-SO, Switzerland