The emerging era of Artificial Intelligence (AI) on the Web presents a paradox: the same innovations that threaten security and truth also offer unprecedented solutions. AI technologies are becoming ever more interwoven in the fabric of online systems, offering potential for both constructive and destructive use. In the post-truth era, generating persuasive yet deceptive content, automating social engineering attacks, and spreading disinformation have never been easier in the history of the World Wide Web. At the same time, there are exceptional opportunities for innovation and for building powerful tools that mitigate these threats, such as intrusion detection, threat modeling, and context-aware access control.
The AiOfAi workshop, which has had three prior editions at the International Joint Conference on Artificial Intelligence (IJCAI), aims to highlight the double-edged nature of AI in the digital age, examining how it can be exploited to undermine trust, privacy, and integrity, while also serving as a foundation for more secure, ethical, and resilient digital ecosystems. We will discuss the societal impact of the widespread adoption of AI tools, especially with the advent of Generative AI, and its consequences, ranging from the erosion of public trust to the blurring of privacy boundaries. AiOfAi will also address the ethical and legal frameworks needed to guide responsible AI deployment, one that embodies fairness, transparent decision-making, and privacy preservation.
We aim to bring together researchers, practitioners, and enthusiasts interested in AI, cybersecurity, ethics, law, and human-computer interaction to discuss methodologies, case studies, and tools that address the complex trade-offs between AI capabilities and vulnerabilities. We welcome original contributions that present innovative ideas, proofs of concept, and use cases tackling the challenges of the AI-powered Web.