Objectives and scope: Neural networks are increasingly deployed in medical, industrial, and space systems, where performance, power, and hardware constraints must be balanced against strong dependability requirements. This workshop focuses on the design, verification, and certification of AI-enabled embedded and cyber-physical systems, ensuring safety, reliability, availability, and security under real-time and resource constraints. It highlights the joint role of functional safety, cybersecurity, and sustainability in maintaining dependable operation, particularly in safety-critical domains and in the presence of faults and adversarial threats.

Topics of interest (non-exhaustive):
• Safe architectures for deploying AI in embedded and cyber-physical systems (monitoring, redundancy, graceful degradation).
• Verification, validation, and testing of AI in real-time and resource-constrained environments.
• Fault tolerance and resilience in hardware and software for AI-enabled embedded systems.
• Compliance with safety and certification standards (ISO, IEC, DO-178C, etc.).
• Efficient AI model design under resource, timing, and performance constraints.
• Security and threat mitigation, including modeling risks and defending against attacks (e.g., adversarial examples, data poisoning).
• Lifecycle management of AI models in operation (monitoring, updates, re-certification).
• Anomaly and out-of-distribution (OOD) detection with safe fallback mechanisms for unexpected conditions.
• Real-world case studies in safety-critical domains (automotive, aerospace, healthcare, industry, etc.).
• Tools and benchmarks for evaluating safety, reliability, and sustainability.