Keynotes


Michael Rovatsos

Bio

Michael Rovatsos is Professor of AI at the University of Edinburgh and Director of the Bayes Centre, the University’s innovation hub for Data Science and AI. He also represents the University at the Alan Turing Institute, the UK’s national institute for Data Science and AI. His research interests are in Artificial Intelligence, with a specific focus on multiagent systems, i.e., systems in which artificial or human agents collaborate or compete with each other. His involvement in large interdisciplinary projects has led to a major shift of his research agenda toward ethical AI over the last five years; he now focuses on developing intelligent decision-making algorithms and platform architectures that support the moral values of their human stakeholders.

Abstract

Ubiquitous connectivity among the world's citizens through online platforms has brought the promise not only of leveraging human intelligence and knowledge through crowdsourcing and human computation on a massive scale, but also of using such platforms to democratise digitally mediated decision making: capturing the views and opinions of many citizens in novel ways, and allowing them to co-design the 'social contract' they would like their societies to implement. However, we now see many examples that cast this optimistic vision of a future cybersociety into doubt, and evidence is mounting that many of these systems will inherit human biases, prejudice and discriminatory practices. In this talk, I will report on some of our work with users around understanding their ideas of fairness and the challenges involved, and present some theoretical arguments on 'democratic self-regulation of collectives', in an attempt to shed some light on these challenges and suggest possible solutions and avenues for future research.

Slides: https://drive.google.com/open?id=15UK20uiyUPBnUkHaFFXMN_YIJQFaqWj4

Ricardo Kawase

Bio

Ricardo Kawase is a data scientist at mobile.de GmbH, the leading online automotive marketplace in Germany. He is the lead data scientist on topics such as fraud fighting and prevention, user profiling, customer behavior prediction, and personalization. He holds a Ph.D. in Computer Science (Doctor rerum naturalium, Dr. rer. nat.) from the Gottfried Wilhelm Leibniz Universität Hannover, Germany. Before joining mobile.de, he worked for over 7 years as a researcher at the L3S Research Center in Hannover on topics including data mining, information retrieval, the semantic web, e-learning, social networks, crowdsourcing, and Web science in general. He has authored and co-authored over 60 peer-reviewed articles in academic conferences and journals.

Abstract

For several years now, mobile.de has been investing heavily in machine learning solutions to create a better online experience for its users. Personalization, recommendations and search are some examples where we already apply state-of-the-art machine learning algorithms. In these examples, the content, shape, position and order in which information is presented have a substantial impact on human decisions and a significant impact on business KPIs. At the same time that we try to predict users’ intent and facilitate access to certain information, we implicitly learn users’ preferences from their actions to support these predictions. This creates a vicious cycle, introducing bias that is constantly reinforced by implicit human feedback. Many machine learning solutions rely on learning from implicit human feedback. However, implicit feedback carries the biases introduced by our own algorithms and by the limitations of how information is presented. In this talk, I take a deep dive into two main topics, personalization and search at mobile.de, looking at how bias affects the machine learning algorithms and the strategies we use to mitigate it.

Slides: Not published at the author's request