Speakers
Current AI systems are based on machine learning, which processes data to build statistical models and then uses them to make decisions, predict outcomes or generate content. These systems achieve impressive performance and are in widespread use in almost all sectors, from healthcare to warfare, as well as in daily life. We will introduce the scientific and technical foundations of AI systems, illustrate their operation, expose their limitations, and explain why they raise ethical considerations related to safety, human autonomy, understandability and equity. Then we will introduce the main ethics principles and the requirements that should be included in the design and operation of AI systems, such as safety and security, transparency, human oversight, equity, privacy, and social and environmental well-being. We will also introduce the governance mechanisms and means that should be put in place to ensure compliance with these requirements. Use cases such as chatbots, self-driving cars and medical diagnostics will illustrate the lecture.
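As a minimal sketch of the learn-then-predict loop described above (the toy data and the use of scikit-learn are assumptions for illustration, not material from the lecture):

```python
# A minimal sketch of "build a statistical model from data, then use it
# to predict": toy data and scikit-learn are assumed for illustration.
from sklearn.linear_model import LogisticRegression

# toy training data: two features per example, binary labels
X = [[0.1, 1.2], [0.4, 0.9], [2.1, 0.3], [1.8, 0.2]]
y = [0, 0, 1, 1]

model = LogisticRegression().fit(X, y)  # learn a statistical model from data
print(model.predict([[1.9, 0.4]]))      # use it to predict an outcome; likely [1] here
```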
Judgment aggregation is a formal framework that has been introduced to model and reason about group decision-making over a set of interconnected issues. Our starting point will be the doctrinal paradox, a simple example showing how issue-by-issue majority voting may lead the group to inconsistent conclusions. We will then look into some of the main mathematical, computational and strategic results of this field, as well as some of its concrete applications.
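To make the paradox concrete, here is a minimal sketch of the standard textbook example: three judges vote on two premises and their conjunction, and the issue-by-issue majority is logically inconsistent (the profiles below are the canonical illustration, not taken from the lecture):

```python
# The doctrinal paradox: three judges vote on premises p, q
# and the conclusion (p and q); each individual judgment is consistent.
judges = [
    {"p": True,  "q": True,  "p_and_q": True},   # judge 1
    {"p": True,  "q": False, "p_and_q": False},  # judge 2
    {"p": False, "q": True,  "p_and_q": False},  # judge 3
]

def majority(issue):
    """Return True iff a majority of judges accept the issue."""
    return sum(j[issue] for j in judges) > len(judges) / 2

group = {issue: majority(issue) for issue in ("p", "q", "p_and_q")}
print(group)  # {'p': True, 'q': True, 'p_and_q': False}

# The group accepts p and accepts q, yet rejects (p and q):
# majority voting issue by issue yields an inconsistent collective judgment.
assert group["p"] and group["q"] and not group["p_and_q"]
```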
Responsible AI has become an important topic as AI systems increasingly shape critical decisions across various sectors. The rapid deployment of these systems raises pressing concerns about their ethical and societal impact. In this talk, we will explore why responsible AI is essential to ensure technology serves humanity in a fair and sustainable way. We will break down the key components of responsible AI: privacy and security, fairness, explainability, robustness, transparency, and governance. Through real-world examples, we will examine instances where fairness was compromised and discuss practical solutions to these issues. We will also provide an overview of the challenges associated with explainable AI and assess the robustness of existing algorithms. Additionally, we will touch on the intersection of AI and ecology, highlighting the environmental considerations tied to the increasing use of AI technologies.
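As one concrete illustration of the fairness component, here is a minimal sketch of a common fairness check, the demographic-parity gap between two groups (the decisions, groups and interpretation are made up for the example, not drawn from the lecture):

```python
# Demographic parity: compare positive-decision rates across groups.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]                   # model decisions (toy data)
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]   # group membership per decision

def positive_rate(group):
    """Fraction of positive decisions received by a group."""
    picks = [d for d, g in zip(decisions, groups) if g == group]
    return sum(picks) / len(picks)

gap = abs(positive_rate("a") - positive_rate("b"))
print(gap)  # 0.5: group "a" is selected far more often, a fairness red flag
```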
The lecture will discuss how the relationship between artificial intelligence (AI) and democracy is mediated by economic mechanisms and business models. We will begin by presenting the economic features of digital technologies and AI as 'information goods', and the peculiar economic dynamics these create or induce. In particular, we will stress the contradiction characterising AI producers, who are asked to limit the disinformation generated by their AI models while also seeking widespread economic returns from their products. Then we will discuss the democracy-related impacts of AI use and diffusion: how AI might radicalise 'left-behind places'; how AI-driven automation polarises the labour force; how the training of AI systems requires labour that is often sourced in non-democratic contexts; and how AI can be employed by non-democratic actors to produce harms, locally and globally. Finally, we will expand the view to the inputs of AI, in particular hardware and compute, and discuss the relationship between AI and industrial policy and how that feeds growing global rivalries.
Misinformation is an important problem, but those fighting it are overwhelmed by the amount of false content produced online every day. To assist human experts in their efforts, several projects propose computational methods that support the detection of malicious content online. In the first part of the lecture, we will overview the different approaches, ranging from solutions involving human experts and crowds of users to fully automated ones. In the second part, we will focus on data-driven verification for computational fact checking. We will review methods that combine solutions from the ML and NLP literature to build data-driven verification systems. We will also cover how the rich semantics in knowledge graphs and pre-trained language models can be used to verify claims and produce explanations, a key requirement in this space. Better access to data and new algorithms are pushing computational fact checking forward, with experimental results showing that verification methods enable effective labelling of claims. However, while fact checkers are starting to adopt some of the resulting tools, the fight against misinformation is far from won. In the last part of the lecture, we will cover the opportunities and limitations of computational methods and their role in fighting misinformation.
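As a hint of how pre-trained language models can support claim verification, here is a minimal sketch using an off-the-shelf natural language inference model (the specific model, library and examples are our assumptions, not the methods covered in the lecture; a real pipeline would also need claim detection, evidence retrieval and explanation generation):

```python
# A minimal, illustrative claim-verification step: an NLI model scores
# whether a piece of evidence entails or contradicts a claim.
from transformers import pipeline

nli = pipeline("text-classification", model="roberta-large-mnli")

evidence = "The Eiffel Tower was completed in 1889."
claim = "The Eiffel Tower was built in the 20th century."

result = nli({"text": evidence, "text_pair": claim})
print(result)  # e.g. [{'label': 'CONTRADICTION', 'score': ...}]
```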
The democratisation of deepfake technologies, in particular with the rise of generative AI systems, has led to a proliferation of uses of these technologies in different contexts (artistic, entertainment, media and even legal). These uses may pursue criminal or democratic-destabilisation objectives, or may fall under freedom of expression and creation. In this context, this lecture will provide an analysis of the applicable legal framework (including the AI Act).
Multiagent systems (MAS) fit naturally into the development of AI. A multiagent system is composed of a set of autonomous entities (software or human) acting and interacting in a shared environment in order to achieve their objectives (Russell and Norvig, 2003; Weiss, 1999). A large range of AI applications are inherently composed of a set of autonomous, interacting agents: autonomous vehicles travelling in a city, rescue rovers searching a disaster area to find and rescue victims, robots assisting people, humans collaborating to make collective decisions, and so on. Since these agents interact in dynamic environments, they need to continuously adapt to changes. Moreover, the complexity of the environments requires the agents to collaborate to achieve their objectives. In the first part of the lecture, we will draw an overview of the multiagent systems domain. Then, we will discuss issues related to democracy and the development of multiagent systems. We will stress the key principles for the development of responsible multiagent systems. We will next discuss how the development of adaptive and collaborative multiagent systems can contribute to democracy. A more specific focus will be given to fair resource allocation, and then to deliberation and argumentation.
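As a small illustration of fair resource allocation, here is a minimal sketch checking envy-freeness of an allocation of indivisible goods under additive valuations (the agents, items and values are made up for the example, and this is only one of several fairness notions the lecture may discuss):

```python
# Envy-freeness check: no agent prefers another agent's bundle to their own.
valuations = {                       # additive valuations: agent -> item -> value
    "alice": {"car": 8, "bike": 3, "scooter": 2},
    "bob":   {"car": 5, "bike": 6, "scooter": 4},
}
allocation = {"alice": ["car"], "bob": ["bike", "scooter"]}

def bundle_value(agent, bundle):
    """Value that `agent` assigns to a bundle of items."""
    return sum(valuations[agent][item] for item in bundle)

def is_envy_free(allocation):
    """True iff every agent values their own bundle at least as much as any other's."""
    return all(
        bundle_value(a, allocation[a]) >= bundle_value(a, allocation[b])
        for a in allocation for b in allocation
    )

print(is_envy_free(allocation))  # True: alice values her bundle 8 vs bob's 5;
                                 # bob values his bundle 10 vs alice's 5
```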