SAIA 2024: Symposium on Scaling AI Assessments - Tools, Ecosystems and Business Models
Link: https://www.zertifizierte-ki.de/symposium-on-scaling-ai-assessments/
Call For Papers | |||||||||||||||
This symposium aims to advance marketable AI assessments and audits for trustworthy AI. Papers and presentations are encouraged both from an operationalization perspective (including governance and business perspectives) and from an ecosystem and tools perspective (covering approaches from computer science). Topics include, but are not limited to:
Perspective: Operationalization of market-ready AI assessment
- Standardizing AI Assessments
- Risk and Vulnerability Evaluation
- Implementing Regulatory Requirements
- Business Models Based on AI Assessments

Perspective: Testing tools and implementation methods for trustworthy AI products
- Infrastructure and Automation
- Safeguarding and Assessment Methods
- Systematic Testing

Organization: Fraunhofer IAIS
Organization Committee contact: zki-symposium@iais.fraunhofer.de
For further information, please visit the symposium website: https://www.zertifizierte-ki.de/symposium-on-scaling-ai-assessments/

*Motivation*
Trustworthiness is considered a key prerequisite for Artificial Intelligence (AI) applications. Especially against the background of European AI regulation, AI conformity assessment procedures are of particular importance, both for specific use cases and for general-purpose models. In non-regulated domains, too, the quality of AI systems is a decisive factor, as unintended behavior can lead to serious financial and reputational damage. As a result, there is a great need for AI audits and assessments, and a corresponding market can indeed be observed forming. At the same time, there are still technical and legal challenges in conducting the required assessments, and extensive practical experience in evaluating different AI systems is lacking. Overall, the first marketable/commercial AI assessment offerings are only just emerging, and a definitive, distinct procedure for AI quality assurance has not yet been established.

1. AI assessments require further operationalization, both at the level of governance and related processes and at the system/product level. Empirical research is pending that tests and evaluates governance frameworks, assessment criteria, AI quality KPIs, and methodologies in practice for different AI use cases.
2. Conducting AI assessments in practice requires a testing ecosystem and tool support, as many quality KPIs cannot be calculated without it. At the same time, automation of such assessments is a prerequisite for making the corresponding business model scale.