23rd International Workshop on Trust in the New Agent Societies
TRUST 2025
August 16-18, 2025, Montreal
Co-located with the 34th International Joint Conference on Artificial Intelligence (IJCAI 2025)
Generative AI, with the extraordinary results it is producing in practically every area of application, represents a genuine paradigm shift in the technological advancement of society. At the same time, it raises concerns about the impact it can have on our individual lives, through control, manipulation, and indirect or imperceptible influence on decisions, as well as in the social and ethical sphere, where it poses risks that are not entirely obvious for the organizational and political governance of societies. For these reasons, lines of study have been developing for some time under the heading of trustworthy AI: the possibility of building theories and systems of Artificial Intelligence capable of safeguarding us from the risks indicated above. In fact, trust in AI has a long tradition in AI research, particularly in the cognitive science and multi-agent systems communities. The approach has always been to analyze the importance of trust in various types of interaction, including direct or computer-mediated human interaction, human-computer interaction, and interaction among social agents. The goal is to characterize and investigate those elements (nature, dynamics, relations with analogous concepts) that are essential to social trustworthiness.
With the increasing prevalence of social interaction via electronic means, and even more so with new generative AI systems, trust, reputation, privacy, and identity become increasingly important. Trust is not a simple, monolithic concept: it is multifaceted, operates at many levels, and plays many roles in interaction. We can distinguish: trust in the environment and infrastructure (the socio-technical system), including trust in one's personal agent and in other mediating agents; trust in potential partners; and trust in guarantors and authorities (if any).
Furthermore, identity and the associated trustworthiness must be ascertained for trustworthy interactions and transactions. Trust is central to the notion of agency and to its defining relation of acting "on behalf of". It is also central to modeling and supporting groups and teams, organizations, coordination, and negotiation, with the related trade-off between individual utility and collective interest, and to modeling the distribution of (dis)information. In several settings, the electronic medium appears to weaken the usual bonds of social control, and the predisposition to cheat becomes stronger. Experiments on computer-supported cooperation have found that people tend to defect more frequently than in face-to-face interaction, and that prior direct acquaintance reduces this effect. As more of our lives move online, into environments with a huge number of peers, modeling trust becomes a fundamental way to cope with the flow of information. Technology can also damage trust relationships that already exist in organizations and human relations, and it fosters further challenges of deception and trust.
With the proliferation of generative AI systems whose outputs seem indistinguishable from those of humans, a particularly relevant area of study is our willingness to trust these systems, together with the need for, and the ability of, such systems to exhibit genuinely trustworthy behavior.
Exploring these questions will be the focus of the workshop discussions, and we solicit new contributions in these areas.
Topics of Interest
How generative AI technologies affect trust and autonomy
Trust and risk-aware decision making
Game-theoretic models of trust
Deception and fraud, and their detection and prevention
Intrusion resilience in trusted computing
Reputation mechanisms
Trust in the socio-technical system
Trust in partners and in authorities
Trust during coordination and negotiation of agents
Privacy and access control in multi-agent systems
Trust and information provenance
Detecting and preventing collusion
Trust in human-agent interaction
Trust and identity
Trust within organizations
Trust, security and privacy in social networks
Trustworthy infrastructures and services
Trust modeling for real-world applications
Important Dates
Submission deadline (extended from May 9, 2025): May 16, 2025
Notification of acceptance: June 6, 2025
Camera Ready due: June 27, 2025
Workshop: August 16-18, 2025
General Chairs
Rino Falcone – Institute of Cognitive Sciences and Technologies, CNR, Italy
Jaime Simão Sichman – Universidade de São Paulo, Brazil
Alessandro Sapienza – Institute of Cognitive Sciences and Technologies, CNR, Italy