IEEE CS Computer: Special Issue on Explainable AI and Machine Learning
Link: https://www.computer.org/digital-library/magazines/co/call-for-papers-special-issue-on-explainable-ai-and-machine-learning/?source=wiki
Call For Papers
We are observing a rapid increase in artificial intelligence and machine learning (AI/ML) algorithms and their applications all around us. In addition to everyday applications such as speech and image recognition, these algorithms are increasingly used in safety-critical software, such as autonomous driving and robotics. Most AI/ML algorithms now typically equal or surpass human performance. However, applications built on these algorithms are highly opaque: it is difficult to decipher the reasoning behind a particular classification or decision produced by an AI/ML application.
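To make the notion of opacity concrete, the following minimal sketch (ours, not part of the call) probes a black-box classifier with permutation feature importance from scikit-learn, one common way to estimate which input factors a model's decisions depend on. The dataset, model, and parameter choices are illustrative assumptions.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Toy data and model: stand-ins for any opaque AI/ML classifier.
    X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                               random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Permutation importance: shuffle one feature column at a time and
    # measure the drop in held-out accuracy. A large drop indicates the
    # model's decisions depend heavily on that feature.
    result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                    random_state=0)
    for i in range(X.shape[1]):
        print(f"feature {i}: importance {result.importances_mean[i]:.3f}"
              f" +/- {result.importances_std[i]:.3f}")

Attribution scores of this kind answer only part of the question of why a model acted as it did; the broader approaches to explainability are the subject of this special issue.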
Although the accuracy level is usually high, AI/ML applications are not foolproof. Deadly accidents involving autonomous vehicles are one example of the risks of relying completely on these programs. For these applications to be accepted in our lives, there must ultimately be some responsibility and accountability for the outcomes they produce. Knowing how these applications reach their determinations, and being able to justify an AI system's action or decision, are essential, particularly to address the following questions in appropriate scenarios: How do we know the system is working correctly? What combinations of factors support the decision? Why was another action not taken? This information constitutes explainability, which should be an integral part of verification and validation for AI/ML software.

For this special issue, Computer seeks articles that describe different approaches and efforts toward AI/ML explainability.

Topics of Interest
- Examples of failures due to lack of explainability
- Performance of learning algorithms
- Appropriate levels of trust in learning algorithms
- Approaches to AI/ML explainability
- Causality and inference in AI/ML applications
- Human factors in explainability
- Psychological acceptability of AI/ML systems

Key Dates
- Articles due for review: October 30, 2020
- First notification to authors: February 26, 2021
- Second revisions submission deadline: March 15, 2021
- Second notification to authors: April 17, 2021
- Camera-ready paper deadline: July 1, 2021
- Publication: October 2021

Submission Guidelines
For manuscript submission guidelines, visit www.computer.org/publications/author-resources/peer-review/magazines. When you are ready to submit, visit https://mc.manuscriptcentral.com/com-cs.

Questions? Please contact the guest editors at co10-21@computer.org.

Guest Editors
- M S Raunak, Loyola University Maryland / NIST
- Rick Kuhn, NIST