Montreal Speaker Series in the Ethics of AI

Conférences de Montréal en éthique de l’intelligence artificielle

2019-2020


Avi Goldfarb

Ellison Professor of Marketing, Rotman School of Management

The Simple Economics of Artificial Intelligence

Discussant: Peter Dietsch (Professor of Philosophy, Université de Montréal)

Wednesday, 27 November 2019, 4 - 6 p.m.

Room PK-1140, 201 av. du Président-Kennedy, UQAM

Recent excitement about artificial intelligence has been driven by advances in machine learning. In this sense, AI is a prediction technology: it uses data you have to fill in information you don't have. These advances can be seen as a drop in the cost of prediction. This framing generates some powerful but easy-to-understand implications. As the cost of something falls, we do more of it: cheap prediction means more prediction. And as the cost of something falls, the value of other things changes: as machine prediction gets cheap, human prediction becomes less valuable, while data and human judgment become more valuable. Business models that are constrained by uncertainty can be transformed, and organizations with an abundance of data and a good sense of judgment have an advantage.

Avi Goldfarb is the Rotman Chair in Artificial Intelligence and Healthcare, and Professor of Marketing, at the Rotman School of Management, University of Toronto. Avi is also Chief Data Scientist at the Creative Destruction Lab, Senior Editor at Marketing Science, and a Research Associate at the National Bureau of Economic Research. Avi’s research focuses on the opportunities and challenges of the digital economy. Along with Ajay Agrawal and Joshua Gans, Avi is the author of Prediction Machines: The Simple Economics of Artificial Intelligence (www.predictionmachines.ai) and editor of the NBER book The Economics of Artificial Intelligence: An Agenda.

Vincent C. Müller

Professor of Philosophy, Eindhoven University of Technology

Is it Time for Robot Rights?

Discussants: AJung Moon (Professor of Electrical & Computer Engineering, McGill University) and Jonathan Simon (Professor of Philosophy, Université de Montréal)

Thursday, 23 January 2020, 4 - 6 p.m.

Agora-Café, 6650 Saint-Urbain (Ground Floor), Montréal, Mila / Ivado

There is the ethics of AI, and there is ‘machine ethics’: the project of building ethics into AI systems. I will investigate the suggestion, made by some authors, that we should allocate rights to robots as part of machine ethics, or to artificial moral agents that are able to make complex moral decisions and act upon them.

Vincent C. Müller's research focuses on the theory and ethics of disruptive technologies, particularly artificial intelligence. He is Professor of Philosophy at Eindhoven University of Technology (TU/e), University Fellow at the University of Leeds, and Turing Fellow at the Alan Turing Institute, London, as well as President of the European Society for Cognitive Systems and Chair of the euRobotics topics group on 'ethical, legal and socio-economic issues'.

Antoinette Rouvroy

Senior Researcher at the Research Centre Information, Law and Society and Professor of Philosophy, University of Namur

Justice out of the (black) box, or the political art of accommodating human and algorithmic biases

Discussants: Daniel Weinstock (Professor of Philosophy and Law, McGill University) and Charles Morgan (Partner, McCarthy Tétrault)

CANCELED! Thursday, 20 February 2020, 4 - 6 p.m., UQAM

“Behind the sheltering sky is a vast universe and we are just so small,” wrote Paul Bowles in his 1949 novel The Sheltering Sky. Algorithms open a new perspective on the universe behind the sheltering sky of our human-centric representations. Algorithms evolve in a purely metric space, the production of which presupposes what Heidegger calls a neutralisation: the process by which the regions of the world around a being become mere dimensions of space. They reveal a glacial universe organized according to indifferent series of coordinates whose zero point is no longer the human being (F. Gironi). This machinic, "alien" perception defeats, and may indeed "de-automatise", our preconceptions and judgements. But other biases, less perceptible than human ones, unavoidably pervade algorithmic processes (biases in the social reality passively naturalized into amnesic data points, biases in the objective function of the algorithms, in their metrics, in their learning processes, in their detection of spurious correlations, and so on). Well-intentioned efforts to render algorithms fair, accountable and transparent, however, will never suffice on their own to meet the requirements of justice as a principle of perfectibility of the social order, which requires:

  • Considering data not as "facts" but as "effects" of antecedent power relations and dominations, rendered "imperceptible" to algorithms;

  • Assessing the fairness, accountability and transparency not of algorithms alone, but of the whole socio-technical systems (including the reactions of the human decision-makers confronted with algorithmic recommendations) in which algorithms intervene only as technical sub-systems;

  • Considering the differential impacts that algorithmic false positives, false negatives, or wrong scorings or ratings may have on individuals, depending on their wealth or their means to accommodate or contest the decisions made about them on that basis.

To think of justice as a property of the algorithmic black box, without further consideration of the global context, can only come at the cost of justice itself. In this talk, I will argue that, instead of trying to reach justice through fair, accountable and transparent algorithms, a more sustainable course would be to acknowledge and combine human and algorithmic biases in ways that guarantee, at the very least, the effective contestability, for all, of the decisions informed by the algorithmic processing of digital data.

Antoinette Rouvroy is a permanent research associate at the Belgian National Fund for Scientific Research (FNRS), Senior Researcher at the Research Centre Information, Law and Society, Law Faculty, University of Namur (Belgium), and Professor in the Department of Philosophy of the same university. She has been awarded the Francqui Chair at the Université de Liège (Department of Law, Political Sciences and Criminology) for 2019-2020, where she will present a series of lectures on algorithmic governmentality. She is a member of the foresight committee of the CNIL (the French data protection authority), co-investigator of the "Autonomy through Cyberjustice Technologies" project led by the Cyberjustice Laboratory of the Université de Montréal, and a collaborator on the École Normale Supérieure's Chair on the Geopolitics of Risks.

Derek Leben

Professor of Philosophy, University of Pittsburgh at Johnstown

Moral Principles for Evaluating Fairness Metrics in AI

Discussant: Martin Gibert (Researcher, CRÉ-Ivado)

Thursday, 12 March 2020, 4 - 6 p.m.

Agora-Café, 6650 Saint-Urbain (Ground Floor), Mila / Ivado

Machine learning (ML) algorithms are increasingly being used in both the public and private sectors to make decisions about jobs, loans, college admissions, and prison sentences. The appeal of ML algorithms is clear: they can vastly increase the efficiency, accuracy, and consistency of decisions. However, because the training data for ML algorithms contains discrepancies caused by historical injustices, these algorithms often exhibit biases against historically oppressed groups. The field of "Fairness, Accountability, and Transparency in Machine Learning" (FAT ML) has developed several metrics for determining when such bias exists, but satisfying all of these metrics simultaneously is mathematically impossible, and some of them require large sacrifices to the accuracy of ML algorithms. I propose that we can make progress on evaluating fairness metrics by drawing on traditional principles from moral and political philosophy. These principles, such as Egalitarianism, Libertarianism, Desert-Based Approaches, Intention-Based Approaches, and Consequentialism, are largely designed around the problem of determining a fair distribution of resources. My goal is to describe in detail how each of these approaches will favor a particular set of fairness metrics for evaluating ML algorithms.
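
As a concrete illustration of the metrics at issue (not taken from the talk itself), the short Python sketch below computes two widely discussed group-fairness criteria on hypothetical data: demographic parity (equal positive-prediction rates across groups) and equal opportunity (equal true-positive rates across groups). The function names and toy labels are illustrative assumptions; the example simply shows how a classifier can satisfy one criterion while violating another, which is the kind of tension the impossibility results in the FAT ML literature make precise.

  # Illustrative sketch with hypothetical data: two group-fairness metrics
  # commonly debated in the FAT ML literature.
  import numpy as np

  def demographic_parity_gap(y_pred, group):
      # Difference in positive-prediction rates between the two groups.
      return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

  def equal_opportunity_gap(y_true, y_pred, group):
      # Difference in true-positive rates, computed only over individuals
      # whose true label is positive.
      tpr_0 = y_pred[(group == 0) & (y_true == 1)].mean()
      tpr_1 = y_pred[(group == 1) & (y_true == 1)].mean()
      return abs(tpr_0 - tpr_1)

  # Toy data: both groups receive positive predictions at the same rate,
  # but qualified members of group 1 are recognized only half as often.
  y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])
  y_pred = np.array([1, 1, 0, 0, 0, 1, 1, 0])
  group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

  print(demographic_parity_gap(y_pred, group))         # 0.0 -> demographic parity holds
  print(equal_opportunity_gap(y_true, y_pred, group))  # 0.5 -> equal opportunity is violated

Other criteria, such as calibration within groups and equalized odds, can be computed in the same style; the impossibility results alluded to in the abstract show that, outside of special cases (equal base rates or a perfect predictor), they cannot all be satisfied at once.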

Derek Leben is Department Chair and Associate Professor of Philosophy at the University of Pittsburgh at Johnstown. His research focuses on the intersection between ethics, cognitive science, and emerging technologies. In his recent book, Ethics for Robots: How to Design a Moral Algorithm (Routledge, 2018), he demonstrates how traditional moral principles can be formalized and implemented in autonomous systems. He is currently on sabbatical as a visiting professor at Carnegie Mellon University, working on extending this approach from the domain of harm in autonomous systems to the domain of fairness in machine learning algorithms.

Shannon Vallor

Professor of Philosophy and Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence at the Edinburgh Futures Institute, University of Edinburgh

Technology's Unpaid Debt: AI and the Promise of a More Humane Future

Discussants: Cathy Cobey (Global Trusted AI Advisory Leader, Ernst & Young) and Jocelyn Maclure (Professor of Philosophy, Université Laval)

Thursday, 30 April 2020, 11:30 a.m. - 1:30 p.m.

Video Conference on Zoom (a link will be sent to registered participants)

The humane future that champions of advanced technology once promised (a future with greater leisure and equality, more enlightened minds and sentiments, and cleaner and healthier environments) has been indefinitely delayed. Instead, technologists have incurred a growing 'moral debt' as the environmental and social costs of 19th- and 20th-century industrialization and 21st-century computerization continue to accumulate, while the compensating advantages are distributed ever more unequally. In this talk I will ask how AI fits into this picture: will we allow AI to add to that increasingly unsustainable debt? Or might recent shifts in how we understand the ethical responsibilities of AI developers allow us to use AI and other emerging technologies in ways that finally begin to pay down that moral debt and fulfill technology's unmet promise of a more humane world?

Professor Vallor’s research addresses the ethical implications of emerging science and technology, especially AI, robotics and new media, for human character and institutions. She received the 2015 World Technology Award in Ethics from the World Technology Network. She has served as President of the Society for Philosophy and Technology, and as a Co-Director of the nonprofit Foundation for Responsible Robotics. In addition to her many articles and published educational modules on the ethics of data, robotics, and artificial intelligence, she is the author of the book Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting (Oxford University Press, 2016) and the forthcoming Lessons from the AI Mirror: Rebuilding Our Humanity in an Age of Machine Thinking.

Thanks to our partners / Merci à nos partenaires