SICSA XAI Workshop 2021 : SICSA eXplainable Artificial Intelligence Workshop
Link: https://sites.google.com/view/sicsa-xai-workshop/
Call For Papers | |||||||||||||||
The use of AI and ML systems is becoming increasingly commonplace in everyday life. From recommender systems for media streaming services to machine vision for clinical decision support, intelligent systems support both the personal and professional spheres of our society. However, explaining the outcomes and decision-making of these systems remains a challenge. As the prevalence of AI grows in our society, so too do the complexity of autonomous models and the expectation that they can explain their actions.
Regulations increasingly support users' rights to fair and transparent processing in automated decision-making systems. This can be difficult to achieve when the latest trends in data-driven ML systems, such as deep learning architectures, tend to be black boxes with opaque decision-making processes. Furthermore, the need for accountability means that pipeline, ensemble and multi-agent systems may require complex combinations of explanations before they are understandable to their target audience. Beyond the models themselves, designing explainer algorithms for users remains a challenge due to the highly subjective nature of explanation itself.

The SICSA XAI workshop will provide a forum to share exciting research on methods targeting the explanation of AI and ML systems. Our goal is to foster connections among SICSA researchers interested in Explainable AI by highlighting and documenting promising approaches and encouraging further work. We expect to draw interest from AI researchers working in a number of related areas, including NLP, ML, reasoning systems, intelligent user interfaces, conversational AI and adaptive user interfaces, causal modelling, computational analogy, constraint reasoning, and cognitive theories of explanation and transparency.

The SICSA XAI Workshop Organisation Committee invites submissions of novel theoretical and applied research targeting the explainability of AI and ML systems. Example submission areas include (but are not limited to):
• Design and implementation of new methods of explainability for intelligent systems of all types, particularly highlighting complex systems that combine multiple AI components.
• Evaluation of explainers or explanations using automated metrics, novel methods of user-centred evaluation, or evaluation of explainers with users in a real-world setting.
• Ethical considerations surrounding the explanation of intelligent systems, including accountability, accessibility, confidentiality and privacy.
Paper Submission Instructions
Paper submissions should be formatted according to the Springer instructions (see https://www.springer.com/gp/computer-science/lncs/conference-proceedings-guidelines). You have the following submission options:
• Short paper: 5-7 pages describing preliminary work, presenting an overview of existing work, or accompanying a demonstration.
• Position paper: 2-4 pages presenting an idea, discussing challenges, or identifying the landscape in this area of research.
Workshop submissions will be reviewed by at least two members of the Programme Committee. Researchers who submit demo systems will be required to provide access to the software in advance to facilitate evaluation. For further instructions, please check the workshop website: https://sites.google.com/view/sicsa-xai-workshop/