posted by system || 1100 views || tracked by 1 users: [display]

ATRACC 2024 : AAAI Fall Symposium: AI Trustworthiness and Risk Assessment for Challenged Contexts

FacebookTwitterLinkedInGoogle

Link: https://sites.google.com/view/aaai-atracc
 
When Nov 7, 2024 - Nov 9, 2024
Where Arlington
Submission Deadline Aug 9, 2024
Notification Due Aug 16, 2024
Final Version Due Aug 30, 2024
 

Call For Papers

Artificial intelligence (AI) has already become a transformative technology that is having revolutionary impact in nearly every domain from business operations to more challenging contexts such as civil infrastructure, healthcare and military defense. AI systems built on large language and foundational/multi-modal models (LLFMs) have proven their value in all aspects of human society, rapidly transforming traditional robotics and computational systems into intelligent systems with emergent, beneficial and even unanticipated behaviors. However, the rapid embrace of AI-based critical systems introduces new dimensions of errors that induce increased levels of risk, limiting trustworthiness. Furthermore, the design of AI-based critical systems requires proving their trustworthiness. Thus, AI-based critical systems must be assessed across many dimensions by different parties (researchers, developers, regulators, customers, insurance companies, end-users, etc.) for different reasons.

We can call it AI testing, validation, monitoring, assurance, or auditing, but the fundamental concept in all cases is to make sure the AI is performing well within its operational design and avoids unanticipated behaviors and unintended consequences. Such assessment begins from the early stages of research, development, analysis, design, and deployment. Thus, trustworthy AI systems and methods for their assessment should address full system-level functions as well as individual AI-models and require a systematic design both during training and development phases, ultimately providing assurance guarantees. At the theoretical and foundational level, such methods must go beyond explainability to deliver uncertainty estimations and formalisms that can bound the limits of the AI; find blind spots and edge-cases; and incorporate testing for unintended use-cases, such as adversarial testing and red teaming in order to provide traceability, and quantify risk. This level of performance is critically important to contexts that have highly risk-averse mandates such as, healthcare, essential civil systems including power and communications, military defense, and robotics that interface directly with the physical world.

The Symposium track aims to create a platform for discussions and explorations that are expected to ultimately contribute to the development of innovative solutions for quantitatively trustworthy AI. The symposium track will last 2 1/2 days and will feature keynote and invited talks from accomplished experts in the field of Trustworthy AI, panel sessions, the presentation of selected papers, student papers and a poster session. Potential topics of interest include, but are not limited to :

- Assessment of non-functional requirements such as explainability, including transparency, accountability, and privacy
- Methods that use data and knowledge to support system reliability requirements, quantify uncertainty, or balance over-generalizability
- Approaches for verification and validation (V&V) of AI systems and quantitative AI and system performance indicators
- Methods and approaches for enhancing reasoning in LLFMs, e.g. causal reasoning techniques and outcome verification approaches
- Links between performance, and trustworthiness and trust leveraged by AI sciences, system and software engineering, metrology, and Social Sciences and Humanities methods
- Research on and architectures/frameworks for Mixture-Of-Experts (MoE) and Multi-Agent systems with an emphasis on robustness, reliability, and emergent behaviors in risk-averse contexts
- Evaluation of AI systems vulnerabilities, risks and impact; including adversarial (prompt injection, data poisoning, etc.) and red-teaming approaches targeting LLFMs or multi-agent behaviors.

Important Dates:
- August 2:  Paper submission deadline, submitted in easyChair
- August 16: Notification of paper status sent to authors
- August 30: Final accepted paper revisions due
- October 4: Deadline for Registration Refund Requests – Late Registration Rate Begins

Useful Links:
- ATRAAC 2024 Paper Submission: https://easychair.org/my/conference?conf=fss24
- ATRAAC 2024 Home Page: https://sites.google.com/view/aaai-atracc
- 2024 AAAI Fall Symposium Series: https://aaai.org/conference/fall-symposia/fss24/

Related Resources

AAAI 2024   The 38th Annual AAAI Conference on Artificial Intelligence
AAAI 2025   The 39th Annual AAAI Conference on Artificial Intelligence
NDSS 2025   Network and Distributed System Security Symposium - Fall Review Cycle
Good-Data@AAAI 2025   AAAI 2025 Workshop on Preparing Good Data for Generative AI: Challenges and Approaches (Good-Data)
EuroSys 2024   The European Conference on Computer Systems (Fall Deadline)
AI in Evidence Synthesis 2025   AI in Evidence Synthesis (Cochrane Evidence Synthesis and Methods)
ICWSM 2025   International AAAI Conference on Web and Social Media third submission
Topical collection Springer 2025   CFP: Sense-Making and Collective Virtues among AI Innovators. Aligning Shared Concepts and Common Goals
ASPLOS 2024   The ACM International Conference on Architectural Support for Programming Languages and Operating Systems (Fall)
HAICTW 2025   Human-AI Collaboration Transforming Workforces with Gen AI