
XAIE 2024 : 3rd Workshop on Explainable and Ethical AI jointly with ICPR’2024


Link: https://xaie.sciencesconf.org/
 
When Dec 1, 2024 - Dec 1, 2024
Where Kolkata
Submission Deadline Jul 14, 2024
Notification Due Sep 20, 2024
Final Version Due Sep 27, 2024
Categories    XAI   ethics   AI   pattern recognition
 

Call For Papers


***********************Call for papers******************************
3rd Workshop on Explainable and Ethical AI jointly with ICPR’2024 https://xaie.sciencesconf.org/
*****************************************************************
The third edition of the XAI-E workshop follows two successful editions at ICPR’2020 ( https://edl-ai-icpr.labri.fr/ ) and ICPR’2022 ( https://xaie-icpr.labri.fr/ ).
The workshop will be held on December 1st, 2024 in Kolkata, India, jointly with the ICPR’2024 conference ( https://icpr2024.org/ ).

**The topics covered by the workshop are:
- Naturally explainable AI methods,
- Post-hoc explanation methods for deep neural networks, including transformers and generative AI,
- Evaluation metrics for explanation methods,
- Hybrid XAI,
- XAI in generative AI,
- Visualization of explanations and user interfaces,
- Image-to-text explanations,
- Concept-based explanations,
- Use of explanation methods for deep NN models in training and generalization,
- Ethical considerations when using pattern recognition models,
- Real-world applications of XAI methods.

Methodology in explainability covers the creation of explanations, their representation, and the quantification of their confidence, while methodology in AI ethics includes automated audits, detection of bias in data and models, the ability to control AI systems to prevent harm, and other methods to improve the explainability of AI in general and trust in AI.

We are witnessing the emergence of an “AI economy and society” in which AI technologies increasingly impact many aspects of business as well as everyday life. We read with great interest about recent advances in AI medical diagnostic systems, self-driving cars, and the ability of AI technology to automate many aspects of business decisions such as loan approvals, hiring, and policing. In recent years, generative AI has emerged as a major topic, promising great benefits but also raising well-founded fears of significant disruption to all aspects of society; its problems, such as “hallucinations” and bias, are also well known. However, as evidenced by recent experience, AI systems may produce errors, can exhibit overt or subtle bias, may be sensitive to noise in the data, and often lack technical and judicial transparency and explainability. These shortcomings have been reported in the scientific press but also, importantly, in the general press (accidents with self-driving cars, biases in AI-based policing, hiring, and loan systems, biases in face recognition, seemingly correct medical diagnoses later found to have been made for the wrong reasons, etc.). They raise many ethical and policy concerns not only in technological and research communities but also among policymakers and the general public, and will inevitably impede the wider adoption of AI in society.

The problems related to ethical AI are complex and broad. They encompass not only technical issues but also legal, political, and ethical ones. One of the key components of an ethical AI system is explainability or transparency, but other issues, such as detecting bias, the ability to control outcomes, and the ability to objectively audit AI systems for ethics, are also critical for the successful application and adoption of AI in society. Consequently, explainable and ethical AI are urgent and popular topics in IT as well as in the business, legal, and philosophy communities, and many workshops in this field are held at top conferences.
The third workshop on explainable AI at ICPR aims to address methodological aspects of explainable and ethical AI in general, and includes related applications and case studies, with the aim of addressing these very important problems from a broad research perspective.

** Organizing committee:
Prof. J. Benois-Pineau, University of Bordeaux, jenny.benois-pineau@u-bordeaux.fr
Dr. R. Bourqui, University of Bordeaux, romain.bourqui@u-bordeaux.fr
Dr. R. Giot, University of Bordeaux, romain.giot@u-bordeaux.fr
Prof. D. Petkovic, CS Department, San Francisco State University, petkovic@sfsu.edu

**Important dates:
- July 14, 2024: Paper submission
- September 20, 2024: Notification to authors
- September 27, 2024: Camera-ready versions


The workshop papers will be published in the proceedings of ICPR’2024.

Romain Giot, Jenny Benois-Pineau, Romain Bourqui, Dragutin Petkovic
Workshop organizers
