WAISE 2023 : 5th International Workshop on Artificial Intelligence Safety Engineering

Link: https://www.waise.org/
 
When Sep 19, 2023 - Sep 19, 2023
Where Toulouse, France
Submission Deadline May 5, 2023
Notification Due May 31, 2023
Final Version Due Jun 7, 2023
Categories artificial intelligence, safety
 

Call For Papers

Research, engineering and regulatory frameworks are needed to realize the full potential of Artificial Intelligence (AI): they must guarantee a standard level of safety and settle issues such as compliance with ethical standards and liability for accidents involving, for example, autonomous cars. Designing AI-based systems to operate in proximity to, or in collaboration with, humans means that current safety engineering and legal mechanisms must be revisited to ensure that individuals and their property are not harmed, and that the desired benefits outweigh the potential unintended consequences.

Approaches to AI safety range from the purely theoretical (moral philosophy, ethics) to the purely practical (engineering). Combining philosophy and theoretical science with applied science and engineering is essential to building safe machines. This calls for an interdisciplinary approach covering the technical (engineering) aspects of how to create, test, deploy, operate and evolve safe AI-based systems, as well as broader strategic, ethical and policy issues.

Increasing levels of AI in “smart” sensory-motor loops allow intelligent systems to operate in increasingly dynamic, uncertain and complex environments with increasing degrees of autonomy, with humans progressively removed from the control loop. Adaptation to the environment is achieved by Machine Learning (ML) methods rather than by more traditional engineering approaches, such as system modelling and programming. Certain ML methods, such as deep learning, reinforcement learning and their combination, have recently proven very promising. However, the inscrutability, or opaqueness, of the statistical models for perception and decision-making that they produce poses yet another challenge. Moreover, the combination of autonomy and inscrutability in these AI-based systems is particularly challenging in safety-critical applications, such as autonomous vehicles, personal care or assistive robots, and collaborative industrial robots.
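
As one illustration of how engineering constraints can be placed around an opaque learned component, the minimal Python sketch below wraps a black-box policy in a runtime safety monitor that overrides unsafe actions with a conservative fallback. It is purely illustrative: all names (SafetyShield, min_gap_m, the one-step gap prediction) are hypothetical assumptions, not a method prescribed by the workshop.

from dataclasses import dataclass
from typing import Callable, Sequence

State = Sequence[float]    # e.g. [speed_mps, gap_to_obstacle_m]
Action = Sequence[float]   # e.g. [acceleration_mps2, steering]

@dataclass
class SafetyShield:
    """Check each proposed action against an engineered safety rule and
    substitute a conservative fallback when the rule would be violated."""
    policy: Callable[[State], Action]   # the opaque ML component (black box)
    fallback: Action                    # e.g. brake hard, hand over control
    min_gap_m: float = 5.0              # hypothetical safety threshold

    def violates(self, state: State, action: Action) -> bool:
        speed, gap = state
        accel = action[0]
        # Crude one-step prediction over a 1 s horizon: would the gap
        # to the obstacle shrink below the engineered threshold?
        predicted_gap = gap - (speed + accel) * 1.0
        return predicted_gap < self.min_gap_m

    def act(self, state: State) -> Action:
        proposed = self.policy(state)
        if self.violates(state, proposed):
            return self.fallback        # override the learned policy
        return proposed

# Usage: wrap any black-box policy; the shield, not the model, owns safety.
shield = SafetyShield(policy=lambda s: [1.0, 0.0], fallback=[-3.0, 0.0])
print(shield.act([10.0, 12.0]))  # predicted gap 1 m < 5 m -> fallback braking

The point of the pattern is that the safety argument rests on the small, auditable monitor rather than on the inscrutable model it wraps.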

The WAISE workshop explores new ideas in safety engineering, ethically aligned design, and regulation and standards for AI-based systems. In particular, WAISE provides a forum for thematic presentations and in-depth discussions of safe AI architectures, bounded morality, ML safety, safe human-machine interaction and safety considerations in automated decision-making systems, with the goal of making AI-based systems more trustworthy, accountable and ethically aligned.

WAISE aims to bring together experts, researchers and practitioners from diverse communities, such as AI, safety engineering, ethics, standardization and certification, robotics, cyber-physical systems and safety-critical systems, as well as application domains such as automotive, healthcare, manufacturing, agriculture, aerospace, critical infrastructures and retail.

Contributions are sought in (but are not limited to) the following topics:
* Regulating AI-based systems: safety standards and certification
* Safety in AI-based system architectures: safety by design
* Runtime AI safety monitoring and adaptation
* Safe machine learning and meta-learning
* Safety constraints and rules in decision-making systems
* AI-based system predictability
* Continuous verification and validation of safety properties
* Avoiding negative side effects
* Algorithmic bias and AI discrimination
* Model-based engineering approaches to AI safety
* Ethically aligned design of AI-based systems
* Machine-readable representations of ethical principles and rules
* Uncertainty in AI
* Accountability, responsibility and liability of AI-based systems
* AI safety risk assessment and reduction
* Confidence, self-esteem and the distributional shift problem
* Reward hacking and training corruption
* Self-explanation, self-criticism and the transparency problem
* Safety in the exploration vs exploitation dilemma
* Simulation for safe exploration and training
* Human-machine interaction safety
* AI applied to safety engineering
* AI safety education and awareness
* Shared autonomy and human-autonomy teaming
* AI safety regulation and education
* Safety testing, verification and validation
* Experiences in AI-based safety-critical systems, including industrial processes, health, automotive systems, robotics, critical infrastructures, among others
