
SAIA 2024 : Symposium on Scaling AI Assessments - Tools, Ecosystems and Business Models


Link: https://www.zertifizierte-ki.de/symposium-on-scaling-ai-assessments/
 
When Sep 30, 2024 - Oct 1, 2024
Where Cologne
Submission Deadline Jul 22, 2024
Notification Due Aug 19, 2024
Final Version Due Sep 9, 2024
Categories    artificial intelligence   AI   computer science   trustworthy ai
 

Call For Papers

This symposium aims to advance marketable AI assessments and audits for trustworthy AI. Papers and presentations are encouraged both from an operationalization perspective (including governance and business perspectives) and from an ecosystem and tools perspective (covering approaches from computer science). Topics include but are not limited to:

Perspective: Operationalization of market-ready AI assessment
- Standardizing AI Assessments
- Risk and Vulnerability Evaluation
- Implementing Regulatory Requirements
- Business Models Based on AI Assessments

Perspective: Testing tools and implementation methods for trustworthy AI products
- Infrastructure and Automation
- Safeguarding and Assessment Methods
- Systematic Testing

Organization: Fraunhofer IAIS
Organization Committee contact: zki-symposium@iais.fraunhofer.de

For further information, please visit the symposium website:
https://www.zertifizierte-ki.de/symposium-on-scaling-ai-assessments/

*Motivation*

Trustworthiness is considered a key prerequisite for Artificial Intelligence (AI) applications. Especially against the background of European AI regulation, AI conformity assessment procedures are of particular importance, both for specific use cases and for general-purpose models. In non-regulated domains, too, the quality of AI systems is a decisive factor, as unintended behavior can lead to serious financial and reputational damage. As a result, there is a great need for AI audits and assessments, and a corresponding market can indeed be observed forming. At the same time, there are still technical and legal challenges in conducting the required assessments, and extensive practical experience in evaluating different AI systems is lacking. Overall, the first marketable/commercial AI assessment offerings are only just emerging, and a definitive, distinct procedure for AI quality assurance has not yet been established. Against this background, two needs stand out:

1. AI assessments require further operationalization, both at the level of governance and related processes and at the system/product level. Empirical research that tests and evaluates governance frameworks, assessment criteria, AI quality KPIs, and methodologies in practice for different AI use cases is still pending.

2. Conducting AI assessments in practice requires a testing ecosystem and tool support, as many quality KPIs cannot be computed without it. At the same time, automating such assessments is a prerequisite for making the corresponding business models scale.
