An Interdisciplinary Conference on AI Methods, AI Applications, and Ethical/Legal Implications

June 13-14, 2023, Gutenberg Digital Hub, Mainz, Germany

Organized by TOPML, AI Alliance RLP and KI@JGU

Keynote Speakers

Nuria Oliver, PhD, Director of the ELLIS Alicante Foundation, Chief Scientific Adviser at the Vodafone Institute, and Chief Data Scientist at Data-Pop Alliance

Data Science against COVID-19: The Valencian Experience

This invited talk describes the work that a multidisciplinary team of more than 20 volunteer scientists carried out between March 2020 and April 2022, working closely with the Presidency of the Valencian Government to support its decision-making during the COVID-19 pandemic in Spain. The team was known as the Data Science against COVID-19 taskforce. Its work was structured in four areas: (1) large-scale human mobility modeling; (2) development of computational epidemiological models (metapopulation, individual-based, and LSTM-based models); (3) development of predictive models of hospital and intensive care unit occupancy; and (4) a large-scale online citizen survey, the COVID19 Impact Survey (https://covid19impactsurvey.org), with over 720,000 answers worldwide. This survey enabled us to shed light on the impact the pandemic had on people's lives during the period of study [3,4,5]. In the talk, I will present the results obtained in each of these four areas, including winning the 500K XPRIZE Pandemic Response Challenge [1] and receiving a best paper award at ECML-PKDD 2021 [2]. I will also share the lessons learned in this very special initiative of collaboration between civil society at large (through the citizen survey), the scientific community (through the Data Science against COVID-19 taskforce), and a public administration (through our collaboration with the Presidency of the Valencian Government). For those interested in learning more about this initiative, WIRED magazine published an extensive article describing the story of this effort: https://www.wired.co.uk/article/valencia-ai-covid-data

Nuria Oliver is a computer scientist. She holds a PhD from the MIT Media Lab. She is the first female computer scientist in Spain to be named an ACM Distinguished Scientist and an ACM Fellow. She is also a Fellow of the European Association for Artificial Intelligence and an IEEE Fellow. She is a member of the Academia Europaea and the fourth and youngest female member of the Spanish Royal Academy of Engineering. In 2018 she was named Engineer of the Year by the Professional Association of Telecommunication Engineers of Spain, and she received an honorary doctorate from the University Miguel Hernandez.

Prof. Roberto Esposito, University of Turin

Fairness and Neural Networks

Recent years have witnessed a Cambrian explosion of tools and techniques able to tackle problems that, until a few years ago, only humans could solve. Deep learning in particular is accumulating astounding successes at a breakneck pace in both research and applications: from retrieving photos by their descriptions on devices used by billions of people, to providing tools for investigating the depths of the visible universe. It is then unfortunate that these very models are utterly inscrutable and inaccessible to human understanding.

While in many cases the difficulty of understanding these models does not matter, in some specific contexts it creates problems that are important and hard to solve. Companies big and small are investing in these technologies and deploying them in contexts that directly affect human well-being, such as loan applications, candidate selection for job offers, and assessing the risk of re-offending for people who have committed crimes. In all these cases, using inscrutable models poses difficult ethical issues related to the risk of discriminating against people belonging to protected groups.

In the absence of techniques that solve the problem by explaining the decisions of these models, the fair ML literature has focused on approaches based on the construction of non-discriminating models. In this context, neural networks provide both challenges and opportunities. I will contextualize the problem, show that there exist many (and contrasting) definitions of fairness, and introduce some state-of-the-art approaches in this field. I will focus in particular on methods targeting neural networks, specifically on methods that constrain the representation learned by the network to be fairer with respect to given sensitive attributes.
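As a rough illustration of the representation-constraint idea mentioned in the abstract (a generic sketch of one simple approach, not the specific methods covered in the talk; the function name and penalty form are hypothetical), one can add a term to the training loss that shrinks the gap between the mean hidden representations of the two groups defined by a binary sensitive attribute:

```python
import numpy as np

def fairness_penalty(representations, sensitive, weight=1.0):
    """Illustrative sketch: penalize the squared L2 distance between the
    mean learned representations of the two sensitive-attribute groups.
    Driving this term toward zero pushes the representation toward being
    uninformative about group membership (in its first moment only)."""
    group_a = representations[sensitive == 0]
    group_b = representations[sensitive == 1]
    gap = group_a.mean(axis=0) - group_b.mean(axis=0)
    return weight * float(np.dot(gap, gap))

# Toy check: identical group means yield a zero penalty.
reps = np.array([[1.0, 2.0], [3.0, 4.0], [1.0, 2.0], [3.0, 4.0]])
sens = np.array([0, 0, 1, 1])
print(fairness_penalty(reps, sens))  # 0.0
```

In practice such a penalty would be computed on mini-batches and added to the task loss; matching only the group means is the crudest version of this idea, and the methods discussed in the talk are more sophisticated.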

Roberto Esposito is currently an associate professor at the University of Turin. His research interests lie in machine learning, including, among other topics, neural networks, semi-supervised learning, and federated learning. He has published more than 60 research papers in international journals and conferences, has collaborated on several international projects, sits on the editorial board of the Machine Learning journal, and is a PC member of several major conferences, including IJCAI (as a senior PC member), ECAI/PKDD, KDD, and UAI.

Wanja Wiese, PhD, Ruhr-Universität Bochum

What is at stake in debates about artificial consciousness?

As artificial intelligence (AI) is becoming increasingly proficient in various domains, many are starting to wonder whether existing or future AIs may be conscious. Ideally, the science of consciousness would enable an answer to this and related questions, thereby providing guidance for how to deal with the associated moral, legal, and political challenges. Since research on consciousness is still far from being in a mature stage, however, no unequivocal answer to questions about artificial consciousness can be expected.

This is unfortunate, because the resulting uncertainty about whether and which AIs can be conscious also entails uncertainty about the moral status of such systems. Furthermore, our tendency to anthropomorphize might make us and our children prone to deception by non-conscious chatbots, robot companions, or similar systems that only seem to be conscious. Conversely, we might inadvertently inflict harm on conscious artificial systems that we don’t recognize as conscious beings.

The talk clarifies what is at stake in these debates and maps different approaches to research on artificial consciousness.


Dr. Wanja Wiese is currently a postdoctoral assistant researcher and lecturer in philosophy at Ruhr-Universität Bochum, Germany. His research centers on consciousness, mental representation, philosophy of cognitive science, and philosophy of mind.

Dinesh Kumari Chenchanna, ZDF

Artificial Intelligence: Journalistic Tool, Colleague, or Competitor? – An Interjection

Bard, LLaMA, ChatGPT – AI programs are, first of all, exciting: they open up new possibilities for text- and speech-based communication. And it is, of course, fun to try out these new forms and put them to the test. But fears quickly follow: Will AI soon make journalists redundant? To what extent do the new possibilities call into question their existing professional definition and identity? What new opportunities and demands on professional roles in journalistic work are emerging? And if the machine thinks and produces faster, what need remains for human capabilities?


Venue

The conference will be held on June 13-14, 2023 at the Gutenberg Digital Hub, Mainz, Germany.

Contact

All questions about the conference should be emailed to topml-management@uni-mainz.de.