Montreal Speaker Series in the Ethics of AI

Conférences de Montréal en éthique de l’intelligence artificielle

Karen Levy

Assistant Professor of Information Science, Cornell University and Associated Faculty, Cornell Law School

RoboTruckers: The Double Threat of AI for Low-Wage Work?

Discussant: AJung Moon (Professor of Electrical & Computer Engineering, McGill University)

Thursday, November 11th, 2021, 11:30 a.m. - 1:30 p.m.

Video Conference on Zoom (a link will be sent to registered participants)

Of late, much attention has been paid to the risk artificial intelligence poses to employment, particularly in low-wage industries. The question has invited well-placed concern from policymakers, as the prospect of millions of low-skilled workers rather suddenly finding themselves without employment brings with it the potential for tremendous social and economic disruption. Long-haul truck driving is perceived as a prime target for such displacement, due to the fast-developing technical capabilities of autonomous vehicles (many of which lend themselves in particular to the specific needs of truck driving), the nature of trucking labor, and the political economy of the industry.

In most of the public rhetoric about the threat of the self-driving truck, the trucker is contemplated as a displaced party. He is displaced both physically and economically: removed from the cab of the truck, and from his means of economic provision. The robot has replaced his imperfect, disobedient, tired, inefficient body, rendering him redundant, irrelevant, and jobless.

But the reality is more complicated. The intrusion of automation into the truck cab does present a threat to the trucker, but that threat is not solely or even primarily experienced, as it is so often described, as displacement. The trucker is still in the cab, doing the work of truck driving, but he is joined there by intelligent systems that monitor his body directly. Hats that monitor his brain waves and head position, vests that track his heart rate, cameras trained on his eyelids for signs of fatigue or inattention: these systems flash lights in his face, jolt his seat, and send reports to his dispatcher or even his family members should the trucker's focus waver. As more trucking firms integrate such technologies into their safety programs, truckers are not currently being displaced by intelligent systems. Rather, they are experiencing the emergence of intelligent systems as a compelled hybridization, a very intimate incursion into their work and bodies. This paper considers the dual, conflicting narratives of job replacement by robots and of bodily integration with robots, in order to assess the true range of potential effects of AI on low-wage work.

Karen Levy is an Assistant Professor of Information Science at Cornell University and Associated Faculty at Cornell Law School. She is a sociologist and lawyer whose research focuses on legal, social, and ethical dimensions of data-intensive technologies.

Cynthia Rudin

Professor of Computer Science, Electrical and Computer Engineering, Statistical Science, and Biostatistics & Bioinformatics, Duke University

Scoring Systems: At the Extreme of Interpretable Machine Learning

Discussant: Ulrich Aïvodji (Professor of Software and Information Technology Engineering, ÉTS Montréal)

Thursday, December 9th, 2021, 11:00 a.m. - 1:00 p.m.

Video Conference on Zoom (a link will be sent to registered participants)

With the widespread use of machine learning, there have been serious societal consequences from using black box models for high-stakes decisions, including flawed bail and parole decisions in criminal justice, flawed models in healthcare, and black box loan decisions in finance. Interpretability of machine learning models is critical in high-stakes decisions.

In this talk, I will focus on one of the most fundamental and important problems in the field of interpretable machine learning: optimal scoring systems. Scoring systems are sparse linear models with integer coefficients; such models have been in use for roughly a century. Generally, they are created without data, or are constructed by manually selecting features and rounding logistic regression coefficients, but these manual techniques sacrifice performance; humans are not naturally adept at high-dimensional optimization. I will present the first practical algorithm for building optimal scoring systems from data. This method has been used for several important applications in healthcare and criminal justice.
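The baseline "round the regression coefficients" construction that the abstract contrasts against can be sketched in a few lines. The example below is an illustrative toy, not Prof. Rudin's optimal algorithm: it fits a logistic regression by plain gradient descent on synthetic binary-feature data, then rescales and rounds the weights to small integer points. The feature setup and the maximum point value are hypothetical choices made for the illustration.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, epochs=500):
    # Plain gradient descent on the logistic loss; X includes an intercept column.
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))       # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)        # gradient step
    return w

def round_to_scores(w, max_point=5):
    # Rescale so the largest-magnitude non-intercept weight maps to max_point,
    # then round everything to integers -- the naive "scoring system" step.
    scale = max_point / np.abs(w[1:]).max()
    return np.rint(w * scale).astype(int)

# Synthetic data: three binary risk factors with known effect sizes.
rng = np.random.default_rng(0)
X_raw = rng.integers(0, 2, size=(200, 3)).astype(float)
logits = 2.0 * X_raw[:, 0] + 1.0 * X_raw[:, 1] - 1.5 * X_raw[:, 2] - 0.5
y = (rng.random(200) < 1.0 / (1.0 + np.exp(-logits))).astype(float)

X = np.column_stack([np.ones(len(X_raw)), X_raw])  # prepend intercept
scores = round_to_scores(fit_logistic(X, y))
print("points (intercept, f1, f2, f3):", scores)
```

A patient's (or defendant's) total score is then just the sum of the integer points for the features they exhibit, which is what makes such models easy to audit by hand. The talk's point is that this round-after-fitting shortcut loses accuracy relative to optimizing the integer model directly.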

Cynthia Rudin is a professor of computer science, electrical and computer engineering, statistical science, and biostatistics & bioinformatics at Duke University, and directs the Interpretable Machine Learning Lab. Previously, Prof. Rudin held positions at MIT, Columbia, and NYU. She holds an undergraduate degree from the University at Buffalo and a PhD from Princeton University. She is the recipient of the 2021 Squirrel AI Award for Artificial Intelligence for the Benefit of Humanity from the Association for the Advancement of Artificial Intelligence (AAAI), the most prestigious award in the field of artificial intelligence. Comparable to world-renowned recognitions such as the Nobel Prize and the Turing Award, it carries a monetary reward at the million-dollar level. Prof. Rudin is also a three-time winner of the INFORMS Innovative Applications in Analytics Award, was named one of the "Top 40 Under 40" by Poets & Quants in 2015, and was named by Business Insider as one of the 12 most impressive professors at MIT in 2015. She is a fellow of the American Statistical Association and a fellow of the Institute of Mathematical Statistics.

Prof. Rudin is past chair of both the INFORMS Data Mining Section and the Statistical Learning and Data Science Section of the American Statistical Association. She has also served on committees for DARPA, the National Institute of Justice, AAAI, and ACM SIGKDD. She has served on three committees for the National Academies of Sciences, Engineering and Medicine, including the Committee on Applied and Theoretical Statistics, the Committee on Law and Justice, and the Committee on Analytic Research Foundations for the Next-Generation Electric Grid. She has given keynote/invited talks at several conferences including KDD (twice), AISTATS, CODE, Machine Learning in Healthcare (MLHC), Fairness, Accountability and Transparency in Machine Learning (FAT-ML), ECML-PKDD, and the Nobel Conferences.

Sylvie Delacroix

Professor in Law and Ethics, University of Birmingham

Fellow, Alan Turing Institute & Mozilla

Co-Chair, datatrusts.uk


Algorithmic Habits and Precluded Transformations

Discussant: Karine Gentelet (Professor of Social Sciences, Université du Québec en Outaouais)

Wednesday, March 9th, 2022, 11:30 a.m. - 1:30 p.m.

Video Conference on Zoom (a link will be sent to registered participants)

What if data-intensive technologies' ability to mould habits with unprecedented precision is also capable of triggering a mass disability with profound consequences? What if we become incapable of shifting or modifying the deeply rooted habits that stem from our increased technological dependence?

In this talk, Professor Delacroix argues that the deleterious effects of the profile-based optimisation of user content are best understood as a form of alienation. What is compromised is our 'inner mobility': our ability to continually transform the habits that shape our pre-reflective intelligence. To counter this danger, two concrete interventions are considered. Both are meant to revive the scope for normative experimentation within data-reliant infrastructures.

‘Ensemble contestability’ features are to enable collective, critical engagement with optimisation tools. This critical engagement is made possible by outlining the outputs of differently trained, ‘ghost’ optimisation systems and introducing ways for users to interactively assess those counterfactual outcomes.

‘Bottom-up data trusts’ are designed to enable groups to regain agency over the data that makes these optimisation tools possible in the first place. Not only can this personal data thereby become a lever for social and political change; data trusts’ ‘bottom-up’ design opens the door to the development of a variety of participation habits that are far from the widespread passivity encouraged by top-down approaches to data governance.

Professor Delacroix’s research focuses on the intersection between law and ethics, with a particular interest in habits and the infrastructure that moulds our habits (data-reliant tools are an increasingly big part of that infrastructure). She is considering the potential inherent in bottom-up Data Trusts as a way of reversing the current top-down, fire-brigade approach to data governance. She co-chairs the Data Trust Initiative. Funded by the McGovern Foundation, this initiative is in the process of selecting its inaugural round of data trust pilots. Professor Delacroix has served on the Public Policy Commission on the use of algorithms in the justice system (Law Society of England and Wales) and the Data Trusts Policy group (under the auspices of the UK AI Council). She is also a Fellow of the Alan Turing Institute and a Mozilla Fellow. Professor Delacroix's work has been funded by the Wellcome Trust, the NHS, and the Leverhulme Trust, from whom she received the Leverhulme Prize. This talk is based on the last chapter of the forthcoming book Habitual Ethics? (2022, Bloomsbury / Hart Publishing).



Susan Landau

Bridge Professor in Cyber Security and Policy, The Fletcher School and Professor of Computer Science, Tufts University

Securing Us: How the Issue Isn't Really Security versus Privacy After All

Discussants: Dominique Payette (Privacy and AI Lawyer, National Bank of Canada) and Benoit Dupont (Professor of Criminology, Université de Montréal)

Thursday, March 31st, 2022, 11:30 a.m. - 1:30 p.m.

Video Conference on Zoom (a link will be sent to registered participants)

Whether discussing encryption policy or various types of surveillance, the question is often posed as "security versus privacy." In fact, in a world of ubiquitous surveillance and borderless wars, those conflicts are really conflicts over differing notions of security. In this talk, I will focus on two topics---the encryption wars and use of communications metadata---to show how what appears to be a security versus privacy issue is really a security versus security one.

Susan Landau is Bridge Professor in Cyber Security and Policy at The Fletcher School and the School of Engineering, Department of Computer Science, Tufts University. Landau works at the intersection of cybersecurity, national security, law, and policy. Landau has written four books, including People Count: Contact-Tracing Apps and Public Health and Listening In: Cybersecurity in an Insecure Age, which came about because of her Congressional testimony in the Apple/FBI case. Landau has frequently briefed US and European policymakers on encryption, surveillance, and cybersecurity issues. She has been a Senior Staff Privacy Analyst at Google, a Distinguished Engineer at Sun Microsystems, and a faculty member at Worcester Polytechnic Institute, the University of Massachusetts Amherst, and Wesleyan University. Landau is currently a member of the National Academies Forum on Cyber Resilience and has served on the National Academies Computer Science and Telecommunications Board, the NSF Computer and Information Science Advisory Committee, and the Information Security and Privacy Advisory Board. She is a member of the Cybersecurity Hall of Fame and of the Information System Security Hall of Fame; she is a fellow of the AAAS and ACM, as well as having been a Guggenheim and Radcliffe Fellow.

Thanks to our partners / Merci à nos partenaires