
TRUST 2025 : 23rd International Workshop on Trust in the New Agent Societies



Conference Series : International Workshop on Trust in Agent Societies
 
Link: https://sites.google.com/view/trust-2025/home-page?authuser=0
 
When: Aug 16, 2025 - Aug 18, 2025
Where: Montreal, Canada
Submission Deadline: May 9, 2025
Notification Due: Jun 6, 2025
Final Version Due: Jun 27, 2025
Categories: trust, multi-agent systems, agents, artificial intelligence
 

Call For Papers

Generative AI, with the extraordinary results it is producing in practically all areas of application, represents a genuine paradigm shift in the technological advancement of society. At the same time, it raises concerns and reservations about the impact it can exert on our individual lives, in terms of control, manipulation, and indirect or imperceptible influence on decisions, as well as in the social and ethical sphere, with risks, not entirely obvious, for the organizational and political governance of companies. For these reasons, lines of study have for some time been developing that refer directly to trustworthy AI, that is, to the possibility of building theories and systems of Artificial Intelligence capable of safeguarding us from the risks indicated above.

In fact, trust in AI has a long tradition in AI research, in particular within the cognitive science and multi-agent systems communities. The approach has always been to analyze the importance of trust in the various types of interaction, including direct or computer-mediated human interaction, human-computer interaction, and interaction between social agents. The goal is essentially to characterize and investigate the elements (nature, dynamics, relations with analogous concepts) that are essential to social trustworthiness.

With the increasing prevalence of social interaction via electronic means, and even more so with the new generative AI systems, trust, reputation, privacy and identity become increasingly important. Trust is not a simple, monolithic concept: it is multifaceted, operates at many levels, and plays many roles in interaction. We can consider trust in the environment and infrastructure (the socio-technical system), including trust in one's personal agent and in other mediating agents; trust in potential partners; and trust in guarantors and authorities (if any).

Furthermore, identity and the associated trustworthiness must be ascertained for trustworthy interactions and transactions. Trust is central to the notion of agency and to its defining relation of acting “on behalf of”. It is also central to modeling and supporting groups and teams, organizations, coordination, and negotiation, with the related trade-off between individual utility and collective interest, and to modeling the distribution of (dis)information. In several cases the electronic medium appears to weaken the usual bonds of social control, and the predisposition to cheat becomes stronger. In computer-supported cooperation experiments, people have been found to defect more frequently than in face-to-face interaction, and prior direct acquaintance reduces this effect. As our lives increasingly move online, into environments with a huge number of peers, modeling trust becomes a fundamental way to cope with the flow of information. Technology can also damage trust relationships that already exist in organizations and human relationships, and foster further challenges of deception and trust.

With the proliferation of generative AI systems whose outputs are seemingly indistinguishable from those of humans, a particularly relevant area of trust is the willingness to trust these systems, along with the need for, and ability of, these systems to exhibit genuinely trustworthy behavior.

Exploring these questions will be the focus of the workshop discussions, and we solicit new contributions in these areas.

Topics of Interest:

§ How generative AI technologies affect trust and autonomy
§ Trust and risk-aware decision making
§ Game-theoretic models of trust
§ Deception and fraud, and their detection and prevention
§ Intrusion resilience in trusted computing
§ Reputation mechanisms
§ Trust in the socio-technical system
§ Trust in partners and in authorities
§ Trust during coordination and negotiation of agents
§ Privacy and access control in multi-agent systems
§ Trust and information provenance
§ Detecting and preventing collusion
§ Trust in human-agent interaction
§ Trust and identity
§ Trust within organizations
§ Trust, security and privacy in social networks
§ Trustworthy infrastructures and services
§ Trust modeling for real-world applications



Submission

Submitted contributions should be original and not submitted elsewhere. Papers accepted for presentation must be relevant to the workshop and demonstrate clear exposition, offering new ideas in suitable depth and detail. The proceedings of the workshop will be published through CEUR-WS.org. Papers (min 10 pages, max 14 pages excluding references) should follow the CEURART paper style (https://www.overleaf.com/latex/templates/template-for-submissions-to-ceur-workshop-proceedings-ceur-ws-dot-org/wqyfdgftmcfw).

Papers can be submitted in PDF format via EasyChair at the following link: https://easychair.org/conferences/?conf=trust2025

It is mandatory that, for each accepted contribution, at least one author registers for the workshop through the IJCAI 2025 booking system.



GENERAL CHAIRS
Rino Falcone – Institute of Cognitive Sciences and Technologies - CNR
Jaime Simão Sichman – Universidade de São Paulo, Brazil
Alessandro Sapienza – Institute of Cognitive Sciences and Technologies - CNR



PROGRAM COMMITTEE (in progress)
Balázs Bodó – University of Amsterdam, Netherlands
Filippo Cantucci – Institute of Cognitive Sciences and Technologies - CNR, Italy
Cristiano Castelfranchi – Institute of Cognitive Sciences and Technologies - CNR, Italy
Robin Cohen – University of Waterloo, Canada
Rino Falcone – Institute of Cognitive Sciences and Technologies - CNR, Italy
Churn-Jung Liau – Academia Sinica, Taiwan
Emiliano Lorini – IRIT, CNRS, Toulouse University, France
Jordi Sabater-Mir – Artificial Intelligence Research Institute, Spain
Alessandro Sapienza – Institute of Cognitive Sciences and Technologies - CNR, Italy
Jaime Simão Sichman – Universidade de São Paulo, Brazil
Munindar P. Singh – North Carolina State University, USA
Chris Snijders – Eindhoven University of Technology, Netherlands
Jie Zhang – Nanyang Technological University, Singapore
