Invited speakers

Krishna P. Gummadi

Krishna Gummadi is a scientific director and head of the Networked Systems research group at the Max Planck Institute for Software Systems (MPI-SWS) in Germany. He also holds a professorship at the University of Saarland. He received his Ph.D. (2005) and B.Tech. (2000) degrees in Computer Science and Engineering from the University of Washington and the Indian Institute of Technology, Madras, respectively.

Krishna's research interests are in the measurement, analysis, design, and evaluation of complex Internet-scale systems. His current projects focus on understanding and building social computing systems. Specifically, they tackle the challenges associated with (i) assessing the credibility of information shared by anonymous online crowds, (ii) understanding and controlling privacy risks for users sharing data on online forums, (iii) understanding, predicting and influencing human behaviors on social media sites (e.g., viral information diffusion), and (iv) enhancing fairness and transparency of machine (data-driven) decision making in social computing systems.

Krishna's work on fair machine learning, online social networks and media, Internet access networks, and peer-to-peer systems has been widely cited and his papers have received numerous awards, including Test of Time Awards at ACM SIGCOMM and AAAI ICWSM, Casper Bowden Privacy Enhancing Technologies (PET) and CNIL-INRIA Privacy Runners-Up Awards, IW3C2 WWW Best Paper Honorable Mention, and Best Papers at NIPS ML & Law Symposium, ACM COSN, ACM/Usenix SOUPS, AAAI ICWSM, Usenix OSDI, ACM SIGCOMM IMC, ACM SIGCOMM CCR, and SPIE MMCN. He has also co-chaired AAAI's ICWSM 2016, IW3C2 WWW 2015, ACM COSN 2014, and ACM IMC 2013 conferences. He received an ERC Advanced Grant in 2017 to investigate "Foundations for Fair Social Computing".

Hoda Heidari

Hoda Heidari is an Assistant Professor at Carnegie Mellon University with joint appointments in the Machine Learning Department and the Institute for Software Research.

Hoda's current research is broadly concerned with the societal and economic aspects of Artificial Intelligence, and in particular, with issues of unfairness and lack of explainability in Machine Learning. Hoda completed her doctoral studies in computer and information science at the University of Pennsylvania under the supervision of Professors Michael Kearns and Ali Jadbabaie. During her time at UPenn, she also obtained an M.Sc. degree in statistics from the Wharton School. Hoda has organized multiple events on the topic of her research, including a tutorial at the Web Conference (WWW) and a workshop at the Neural Information Processing Systems (NeurIPS) conference.

Kasper Lippert-Rasmussen

Since 2008, Kasper Lippert-Rasmussen has been a professor of political science with a special focus on political theory at the Department of Political Science and Government, School of Business and Social Sciences, University of Aarhus. He is also an adjunct professor at the University of Roskilde and a professor II at the University of Tromsø.

Kasper Lippert-Rasmussen has a broad academic background, which includes a cand. scient. pol. degree (Aarhus 1990), a D.Phil. degree in philosophy (Oxford 1995), and a dr. phil. degree in philosophy (Copenhagen 2005).

His main research areas are discrimination, affirmative action, theories of democracy, and equality. He has published influential work in these fields such as Making Sense of Affirmative Action (Oxford University Press, 2020), Relational Egalitarianism (Cambridge University Press, 2018), Luck Egalitarianism (Bloomsbury, 2015), and Born Free and Equal? (Oxford University Press, 2013).

Moral Objections to Discrimination and Unfairness-Based Objections to Algorithms: How Are They (Not) Related?

In recent years, much attention has been given to the view that various algorithm-based decision-making procedures are discriminatory. This gives rise to the important question of how moral objections to discrimination relate to moral objections to unfair (uses of) algorithms. First, I review some of the main moral objections to discrimination to show why most of these do not – or need not – apply to typical (uses of) algorithms. One common moral objection to discrimination – i.e., that it is unfair – seems different, though. Hence, in the latter half of the talk I explore how the notions of unfairness at stake in standard objections to non-algorithmically-based discrimination, e.g., the unfairness of selecting one candidate over another based on irrelevant properties, are (not) related to the notions of unfairness at stake in some of the main fairness-based objections to (certain uses of) algorithms.