This is a side event offered by CeSIA (Centre pour la sécurité de l'IA). It is aimed primarily at young researchers, but will also be of interest to anyone with a keen interest in the topic, including journalists, members of the general public, and more experienced researchers seeking an introduction to the field.
It will run on the morning of October 14th, from 10:00 am to 12:00 pm, at Sorbonne Université, Jussieu campus, a very short walk from the CISCU auditorium where the main event takes place in the afternoon.
Registration is free but required. It appears as an optional add-on when registering for the main event. Please note that attendance is limited and we may not be able to accommodate all requests. Note: this side event is now sold out! If you are interested, stay tuned in case registered attendees cancel.
AI safety is an emerging field, gaining importance as AI systems become more capable and widespread. With increasing adoption and integration of AI come increasing concerns about potential risks. The Centre pour la Sécurité de l'IA (CeSIA) will deliver a tutorial during the morning of the symposium, giving an overview of both the risks and the potential solutions in the current AI landscape. It will explain AI safety challenges and recent agendas pursued by labs such as OpenAI and DeepMind. These talks will cover the first chapters of a new textbook, The AI Safety Atlas.
Tutorial room (different from the afternoon programme, but on the same campus):
Sorbonne Université, 4 Place Jussieu. Corridor 25-26, Room 105. Access via rotunda 26, one floor up by lift or stairs.
Time: 10:00 am - 12:00 pm.
Note: The room will be accessible from 9:30 am, with coffee available.
Human-level AIs: what, when? (10:00 am - 10:30 am): Markov Grey & Charbel-Raphael Segerie
This session will focus on capabilities, including a discussion of timelines for reaching human-level intelligence and the possible limitations of large language models (LLMs).
What are the risks? (10:30 am - 11:00 am): Markov Grey
A detailed examination of the risks posed by advanced AI systems, including potential failures, unintended consequences, and systemic risks.
Break (11:00 am - 11:15 am)
What are the solutions? (11:15 am - 11:45 am): Charbel-Raphael Segerie
Exploration of strategies to mitigate AI risks, covering both technical agendas and policy-based approaches.
AI Governance (11:45 am - 12:00 pm): Charles Martinet
Discussion of the governance frameworks required to manage AI safely, focusing on regulation and international cooperation.
CeSIA, a Paris-based organization, is dedicated to advancing AI safety through education, advocacy, and research. It is developing an AI safety textbook, teaches accredited courses at top institutions such as ENS Ulm, and helps organize ML4Good bootcamps to train talent globally. The center engages policymakers and industry leaders through regular roundtables and consultations, while raising public awareness via media collaborations and publications. CeSIA's current research focuses on creating benchmarks for AI monitoring systems, aiming to establish standards in the field.