Program


Program at-a-glance:

Detailed program below.

Update: Slides from the talks are now available -- click the link next to the talk title.

Update: Videos are now available for most talks!

Feb. 21, 2019

  • 2 keynote speeches (30 minutes + 5 minutes for questions and changeover)
  • 10 invited talks (15 minutes + 5 minutes for questions and changeover) + 2 video addresses
  • 2 panel discussions (45 minutes each)
  • 2 coffee breaks (15 minutes each)
  • Student poster session and lunch break (60 minutes)


8:00 Registration and breakfast

8:45 Opening remarks by the organizers (NRC and Digital Catapult)

8:55 Welcome address by Iain Stewart, President of NRC


9:00--10:25 Session 1 (Session Chair: Cyril Goutte, NRC)

9:00 Video address by Luciano Floridi, University of Oxford

A Message of Support

9:05 Invited talk by Christine Henry, Amnesty International

Data Science, for Good? Adventures in Practical Ethics Implementation in the 'AI For Good' Space [slides]

9:25 Video address by Ann Cavoukian, Privacy by Design Centre of Excellence, Ryerson University

AI Ethics by Design: An Extension of Privacy by Design to Artificial Intelligence

9:45 Invited talk by Anat Elhalal, Digital Catapult

Moving the AI Ethics Conversation From the 'What' to the 'How' [slides]

10:05 Invited talk by Keith Jansa, CIO Strategy Council

Key Ingredient to Implementing Ethical AI and Growing the Digital Economy [slides]


10:25 -- 10:45 Coffee break


10:45--12:00 Session 2 (Session Chair: Michel Simard, NRC)

10:45 Keynote talk by Joelle Pineau, McGill University and Facebook

Ethical Challenges in Data-Driven Dialogue Systems [slides]

11:20 Invited talk by Graeme Hirst, University of Toronto

Ethical Issues in Natural Language Processing [slides]

11:40 Invited talk by Saif M. Mohammad, NRC

Examining Fairness through Emotions in Language [slides]


12:00 -- 13:00 Lunch and student poster session


13:00--14:15 Session 3 (Session Chair: Kathleen Fraser, NRC)

13:00 Keynote talk by Alison Paprica, Vector Institute

Social Licence, Health Data and AI [slides]

13:35 Invited talk by Jennifer L. Gibson, University of Toronto

AI for Health: Key Ethical Considerations

13:55 Invited talk by David Van Bruwaene, SafeToNet

Privacy, Machine Learning, and the Digital Parent


14:15--15:00 Panel Discussion on Privacy, Transparency, and Explainability in AI Applications

Moderator: Libby Kinsey, Digital Catapult

Panelists:

Sébastien Gambs, Université du Québec à Montréal

Jo Kennelly, Sightline Innovation

Patricia Kosseim, Osler, Hoskin & Harcourt LLP

Jocelyn Maclure, Université Laval


15:00 -- 15:15 Coffee break


15:15--16:15 Session 4 (Session Chair: Peter Bloomfield, Digital Catapult)

15:15 Invited talk by Petra Molnar, University of Toronto

The Human Rights Impacts of AI and Emerging Technologies: Experiments with Migration Management and Refugee Decision-Making [slides]

15:35 Invited talk by Wilco van Ginkel, a3i

Trust or not to Trust AI - That’s the Question!

15:55 Invited talk by Samantha Brown, Doteveryone

Responsibility in Tech Practice [slides]

16:15--17:00 Panel Discussion on Governance and Accountability for AI Algorithms/Products

Moderator: Svetlana Kiritchenko, NRC

Panelists:

Rob Davidson, Information and Communications Technology Council

Hessie Jones, International Council on Global Privacy and Security, by Design

Mark Robbins, Institute on Governance

Sarah Villeneuve, Brookfield Institute for Innovation + Entrepreneurship


17:00 -- 17:02 Closing remarks by the organizers

Detailed Program

Feb. 21, 2019

  • 2 keynote speeches (30 minutes + 5 minutes for questions and changeover)
  • 10 invited talks (15 minutes + 5 minutes for questions and changeover) + 2 video addresses
  • 2 panel discussions (45 minutes each)
  • 2 coffee breaks (15 minutes each)
  • Student poster session and lunch break (60 minutes)


8:00 Registration and breakfast

8:45 Opening remarks by the organizers (NRC and Digital Catapult)

8:55 Welcome address by Iain Stewart, President of NRC

9:00--10:25 Session 1 (Session Chair: Cyril Goutte, NRC)

9:00 Video address by Luciano Floridi, University of Oxford

A Message of Support

Abstract: It's possible and necessary to move from theory to practice, and now is the time to get practical. The United Kingdom and Canada combining forces can help show what is possible and move in the right direction.

Speaker bio: Luciano Floridi is Professor of Philosophy and Ethics of Information at the University of Oxford, where he directs the Digital Ethics Lab of the Oxford Internet Institute, and is Professorial Fellow of Exeter College. He is also a Turing Fellow and Chair of the Data Ethics Group of the Alan Turing Institute, as well as Chairman of MIGarage's Ethics Committee. His areas of expertise include digital ethics, the philosophy of information, and the philosophy of technology. His recent books, all published by Oxford University Press, include: The Fourth Revolution – How the infosphere is reshaping human reality (2014), winner of the J. Ong Award; The Ethics of Information (2013); The Philosophy of Information (2011); and The Logic of Information (forthcoming in 2019).

9:05 Invited talk by Christine Henry, Amnesty International

Data Science, for Good? Adventures in Practical Ethics Implementation in the 'AI For Good' Space

Abstract: In this talk we will introduce new methods and processes to help the teams building tech products and services put responsibility at the heart of their business planning and product management in order to address the social harms of technology early and design for consequences.

Speaker bio: Christine Henry is a contract Product Manager and Data Ethics Consultant. She is currently a Product Manager at Amnesty International. Christine has over eight years of experience in healthcare data analysis, forecasting, and market access, as well as knowledge of machine learning and data science. She holds a PhD in physical chemistry from the Australian National University, and a law degree. Christine is passionate about investigating the ethical and social impacts of new technologies and data. She is a volunteer at DataKind UK, where she works with teams of pro bono data scientists to help charities and nonprofits to use data science techniques to have a greater impact. She led the development of DataKind UK’s ethical principles for data science volunteers and has presented on this work at conferences and meetings.

9:25 Video address by Ann Cavoukian, Privacy by Design Centre of Excellence, Ryerson University

AI Ethics by Design: An Extension of Privacy by Design to Artificial Intelligence

Abstract: Privacy is presently under siege. With the growth of ubiquitous computing, online connectivity, social media, wireless/wearable devices, and concern over the direction of Artificial Intelligence, people are being led to believe they have no choice but to give up on privacy. This is not the case! Using the Privacy by Design framework will enable our privacy and our freedom to live well into the future. Dr. Cavoukian dispels the notion that privacy acts as a barrier to public safety, security and innovation. She argues that the limiting paradigm of “zero-sum” – that you can either have privacy or innovation, but not both – is an outdated, win/lose model of approaching the question of privacy in the age of massive surveillance. Instead, a “positive-sum” solution is needed in which the interests of both sides may be met, in a doubly-enabling, “win-win” manner through Privacy by Design (PbD). PbD is predicated on the rejection of zero-sum propositions by proactively identifying the risks and embedding the necessary protective measures into the design and data architecture involved. Her new AI Ethics by Design explores the need to proactively embed an ethical framework on AI developments, in order to maximize the gains: win/win! Dr. Cavoukian has also convened a new International Council on Global Privacy and Security, by Design, to respond to the growing pressures of zero-sum models seeking to advance security at the expense of privacy. Say NO to win/lose models. She outlines how organizations can embed privacy and security into virtually any system or operation, to achieve positive-sum, win/win outcomes, enabling both privacy and security – not one at the expense of the other. We can do this!

Speaker bio: Dr. Ann Cavoukian is recognized as one of the world’s leading privacy experts. Dr. Cavoukian served an unprecedented three terms as the Information & Privacy Commissioner of Ontario, Canada. There she created Privacy by Design, a framework that seeks to proactively embed privacy into the design specifications of information technologies, networked infrastructure and business practices, thereby achieving the strongest protection possible. In 2010, International Privacy Regulators unanimously passed a Resolution recognizing Privacy by Design as an international standard. Since then, PbD has been translated into 40 languages. She is presently the Distinguished Expert-in-Residence, leading the Privacy by Design Centre of Excellence at Ryerson University. Dr. Cavoukian is also a Senior Fellow of the Ted Rogers Leadership Centre at Ryerson University, and a Faculty Fellow of the Center for Law, Science & Innovation at the Sandra Day O’Connor College of Law at Arizona State University. Dr. Cavoukian is the author of two books, “The Privacy Payoff: How Successful Businesses Build Customer Trust” with Tyler Hamilton and “Who Knows: Safeguarding Your Privacy in a Networked World” with Don Tapscott. She has received numerous awards recognizing her leadership in privacy, including being named one of the Top 25 Women of Influence in Canada, one of the Top 10 Women in Data Security and Privacy, one of the ‘Power 50’ by Canadian Business, and one of the Top 100 Leaders in Identity. She was awarded the Meritorious Service Medal by the Governor General of Canada for her outstanding work on creating Privacy by Design and taking it global (May 2017), named one of the 50 Most Impactful Smart Cities Leaders (November 2017), named among the Top Women in Tech (December 2017), and most recently received the Toastmasters District 60 Communication and Leadership Award (April 2018).

9:45 Invited talk by Anat Elhalal, Digital Catapult

Moving the AI Ethics Conversation From the 'What' to the 'How'

Abstract: 2018 has been the year of responsible AI frameworks. While highly useful in starting the conversation on concepts such as fairness, transparency, accountability and explainability, these frameworks normally inhabit an analogue space and slow down the development of machine-learning-based products. We strongly believe that responsible choices confer competitive advantage, but this is often hard to justify in practice. For the responsible way to become the default, we need to remove friction from the process and make it the path of least resistance, which can only be achieved by using technology tools for responsible AI. In this talk we suggest an international programme to accelerate the development and adoption of such tools.

Speaker bio: Dr. Anat Elhalal is head of AI technology at Digital Catapult. She provides technological leadership across industry sectors and programmes, with a focus on identifying innovation and adoption barriers specific to AI and developing interventions to address those barriers. This year Digital Catapult launched Machine Intelligence Garage to support early-stage startups with access to computational power, expertise and, most recently, resources around responsible AI. Anat has over 15 years of academic and industry experience in analysing and predicting behaviour, drawing on research, real-world data and mathematical modelling. Previously, she led global machine learning teams at a number of startups within the vibrant London scene. Anat trained in physics, neural networks and cognitive science, and holds a PhD in modelling human memory using neural networks. Her interests include distributed ledger technology, baking and rock climbing.

10:05 Invited talk by Keith Jansa, CIO Strategy Council

Key Ingredient to Implementing Ethical AI and Growing the Digital Economy

Abstract: Emerging technologies using machine learning and artificial intelligence are set to expand the limits of what is possible with data. We have reached a defining point where collective action is needed to define the right rules to propel the prosperity of Canadian and UK businesses. Hear from Keith Jansa, the CIO Strategy Council’s acting executive director, on Canada’s leadership in setting and driving the adoption of standards and best practices for the ethical use of AI and big data.

Bio: As A/Executive Director, Keith is responsible for advancing the vision of the CIO Strategy Council, bringing the country’s foremost and forward-thinking CIOs together to share best practices, provide a national forum to inform, develop and drive technology adoption, and champion national initiatives to transform the country's ICT ecosystem. With 10+ years of industry experience, Keith brings strategic insight, leadership and depth of expertise to the CIO Strategy Council, delivering strategic value to the membership to hone Canada’s global leadership and competitiveness in the digital economy. Keith has significant standards-setting experience and has served on over thirty national and international technical committees and working groups throughout his career, including ULC, UL, CSA, IEEE, ISO, IEC, and ITU. Among his previous positions, Keith led a team of specialists in a crown corporation devising many winning standardization strategies designed to accelerate market access for Canadian companies and their innovative technologies; managed a trade association standards program advancing strategic priorities in the interests of member companies and their customers; worked as a standards specialist for a leading standards development organization; and served as a board director for a non-profit organization, providing strategic direction, overseeing financial reporting and enhancing the quality of life of those with mental health challenges. Keith holds an honours Bachelor of Health Sciences degree from the University of Ottawa, and is married to his university sweetheart Kayla Jansa, with whom he is raising three children.

10:25 -- 10:45 Coffee break


10:45--12:00 Session 2 (Session Chair: Michel Simard, NRC)

10:45 Keynote talk by Joelle Pineau, McGill University and Facebook

Ethical Challenges in Data-Driven Dialogue Systems

Abstract: The use of dialogue systems as a medium for human-machine interaction is an increasingly prevalent paradigm. A growing number of dialogue systems use conversation strategies that are learned from large datasets. There are well-documented instances where interactions with these systems have resulted in biased or even offensive conversations due to the data-driven training process. Here, we highlight potential ethical issues that arise in dialogue systems research, including: implicit biases in data-driven systems, the rise of adversarial examples, potential sources of privacy violations, safety concerns, special considerations for reinforcement learning systems, and reproducibility concerns. We also suggest areas stemming from these issues that deserve further investigation. Through this initial survey, we hope to spur research leading to robust, safe, and ethically sound dialogue systems.

Speaker bio: Joelle Pineau is an Associate Professor and William Dawson Scholar at McGill University where she co-directs the Reasoning and Learning Lab. She also leads the Facebook AI Research lab in Montreal, Canada. She holds a BASc in Engineering from the University of Waterloo, and an MSc and PhD in Robotics from Carnegie Mellon University. Dr. Pineau's research focuses on developing new models and algorithms for planning and learning in complex partially-observable domains. She also works on applying these algorithms to complex problems in robotics, health care, games and conversational agents. She serves on the editorial board of the Journal of Artificial Intelligence Research and the Journal of Machine Learning Research and is currently President of the International Machine Learning Society. She is a recipient of NSERC's E.W.R. Steacie Memorial Fellowship (2018), a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI), a Senior Fellow of the Canadian Institute for Advanced Research (CIFAR) and in 2016 was named a member of the College of New Scholars, Artists and Scientists by the Royal Society of Canada.

11:20 Invited talk by Graeme Hirst, University of Toronto

Ethical Issues in Natural Language Processing

Abstract: Ethical issues in natural language processing relate both to the applications of NLP and to problems of bias and discrimination within the systems themselves. I will talk about virtuous and evil applications, about the problem of bias in learning from linguistic data, and specifically about bias in word-embeddings.


Speaker bio: Graeme Hirst is a computer scientist at the University of Toronto. His research covers a broad range of topics in applied computational linguistics and natural language processing, including lexical semantics, the resolution of ambiguity in text, the analysis of authors’ styles in literature and other text (including plagiarism detection and the detection of online sexual predators), and the automatic analysis of arguments and discourse (especially in political and parliamentary texts). Hirst’s recent research includes detecting markers of Alzheimer’s disease in language; determining ideology in political texts; and the identification of the native language of a second-language writer of English. With colleagues in Canada, the U.K. and the Netherlands, he was a co-PI of a Digging Into Data grant on processing linked parliamentary data. He is the author of two monographs: Anaphora in Natural Language Understanding and Semantic Interpretation and the Resolution of Ambiguity. He is the editor of the series Synthesis Lectures on Human Language Technologies (Morgan & Claypool Publishers), which has become the leading venue for monograph publication in computational linguistics and natural language processing. He was also one of the six coordinating editors of the 14-volume Encyclopedia of Language and Linguistics (2nd edition), published by Elsevier in 2006. In 2017, he received the Lifetime Achievement Award from the Canadian Artificial Intelligence Association.

11:40 Invited talk by Saif M. Mohammad, NRC

Examining Fairness through Emotions in Language

Abstract: Language and emotions are central to human experience, creativity, and behavior. They are crucial for organizing meaning and reasoning about the world we live in. They are ubiquitous and everyday, yet complex and nuanced. In this talk, I will describe our work on the search for fairness in language, with a focus on the emotions expressed through language. I will describe experiments on quantifying biases in language and how these biases percolate to the latest machine learning emotion detection systems. I will give examples of new sentiment analysis applications that on the one hand have substantial potential for social good, but on the other hand can be easily used for manipulation and exploitation. Finally, I will show that different demographic groups perceive the connotative meaning of words differently, which raises further questions on why such differences exist, to what extent such differences are the result of unequal social power structures, and how such inequities can be meaningfully addressed.

Speaker bio: Dr. Saif M. Mohammad is Senior Research Scientist at the National Research Council Canada (NRC). He received his Ph.D. in Computer Science from the University of Toronto. Before joining NRC, Saif was a Research Associate at the Institute of Advanced Computer Studies at the University of Maryland, College Park. His research interests are in Computational Linguistics and Natural Language Processing (NLP), especially Lexical Semantics, Emotions in Language, Sentiment Analysis, Computational Creativity, and Fairness in Language. He has served in various capacities at prominent NLP journals and conferences, including: co-chair of SemEval (the largest platform for semantic evaluations), co-organizer of WASSA (a sentiment analysis workshop), and area chair for ACL, NAACL, and EMNLP (in the areas of sentiment analysis and fairness in NLP). His team developed a sentiment analysis system which ranked first in shared task competitions. His word-emotion resources, such as the NRC Emotion Lexicon, are widely used for analyzing affect in text. His work has garnered media attention, including articles in Time, SlashDot, LiveScience, io9, The Physics arXiv Blog, PC World, and Popular Science.

12:00 -- 13:00 Lunch and student poster session

Fateha Khanam Bappee, Dalhousie University

Crime Pattern Detection and Prediction: Fidelity, Interpretability and Ethical Considerations

Jonathan Bowen, Western University and the Rotman Institute of Philosophy

Non-Instrumental Reasons for Creating Artificial Persons

Chris Dulhanty, University of Waterloo

ImageNet Demographics Audit

Atoosa Kasirzadeh, University of Toronto

Ethics, Explanation, and Machine Learning

Nishila Mehta, University of Toronto

Assessing Medical Trainees’ Knowledge and Perceptions of Artificial Intelligence in Medicine

Victor do Nascimento Silva, University of Alberta

Algorithms and Social Media: A Challenge to Democracy

Patricia Thaine, University of Toronto

Perfectly Privacy-Preserving AI: What is it and How do we Achieve it?

Christine Wang, University of Toronto

Incorporating Ethics of Artificial Intelligence Education into Medical School Curricula: A Call to Action


13:00--14:15 Session 3 (Session Chair: Kathleen Fraser, NRC)

13:00 Keynote talk by Alison Paprica, Vector Institute

Social Licence, Health Data and AI

Abstract: AI and machine learning algorithms unavoidably reflect the characteristics of the individuals in the data sets that are used for model development and validation. Though Canada is a small country in the global context, Canada has unique assets in its longitudinal population-wide data sets for publicly funded health, municipal and social services that cover entire, ethnically diverse, populations. With the critical mass of world-class AI researchers and companies that are being brought together through the Pan-Canadian AI Strategy, these population-wide data can be the foundation for research and innovation that creates AI-enabled technologies which provide unparalleled benefits for Canadians, and the world. However, while the research literature suggests that there is social licence and public support for data-intensive health R&D, there is no blanket approval. The public cares about details including how privacy will be protected, how the private sector uses data, and what the public benefits and risks will be when data are used by companies or governments. P.A. Paprica will present the concept of social licence and the results of recent qualitative research into the Ontario general public’s views on users and uses of population-wide administrative health data. Live polling will be used to present and prompt thinking about several fictional health AI scenarios. The presentation will conclude with a discussion of what previous research into social licence related to health data may mean for AI, and priorities for further qualitative research and public engagement that is focused on AI.

Speaker bio: As Vice President, Health Strategy and Partnerships, Alison is Vector’s corporate lead for health strategy, overseeing health research collaborations, health data partnerships and health AI application projects. She also leads workshops and courses focused on the leadership and management of research at the University of Toronto where she is Assistant Professor (status). Previously, she held senior roles at the Institute for Clinical Evaluative Sciences (ICES) and Ontario’s health ministry, and worked for seven years in multinational pharmaceutical R&D. Alison holds a combined HBSc in Biochemistry and Chemistry (McMaster), a PhD in Organic Chemistry (Western University) and completed a fellowship with the Canadian Foundation for Healthcare Improvement EXTRA program.

13:35 Invited talk by Jennifer L. Gibson, University of Toronto

AI for Health: Key Ethical Considerations

Abstract: What ethical issues are emerging related to the use of AI methods and AI-enabled technologies in health? While some of these issues are common to other areas of AI application, some are unique to their specific application in health, including health care and public health. In this presentation, I'll flag key ethical consideration in AI for Health that should call our attention now.

Speaker bio: Dr. Jennifer Gibson is Sun Life Financial Chair in Bioethics and Director of the University of Toronto Joint Centre for Bioethics, Associate Professor in the Dalla Lana School of Public Health, and Director of the World Health Organization Collaborating Centre for Bioethics at the University of Toronto. Jennifer has a PhD in Philosophy. Her program of research employs qualitative social science methods and normative analysis to study ethical issues in health institutions and systems. She is particularly interested in the role and interaction of values in decision-making at different levels in the health system. Currently, she is leading a new program of research on ‘Ethics and AI for Good Health’. Jennifer has served on government and policy advisory committees related to medical assistance in dying, public health emergency preparedness, public health surveillance, critical care triage, drug funding and supply, and healthcare resource allocation. She also works closely with the World Health Organization on global health ethics issues.

13:55 Invited talk by David Van Bruwaene, SafeToNet

Privacy, Machine Learning, and the Digital Parent

Abstract: The parental responsibility to protect and guide children in the digital world requires access to information about their online activity. This access needs to be balanced against children’s right to privacy. But how can a parent determine the need to access information without first seeing it? I propose that ML has matured to the state that it can arbitrate issues of access by detecting when child safety and wellness issues arise in text and image based communication. Moreover, it is possible to present information about safety and wellness issues in a way that preserves a child’s right to privacy. However, this solution introduces a second layer of complexity around digital privacy: ML used to detect safety issues in personal data requires training on similar data that is known to produce effective results in production. It is a challenge to justify beliefs regarding effectiveness of detection without confirmation requiring human access to personal data. Yet this challenge must be met because presenting evaluative information to parents about their children’s online activity has material consequences. I consider solutions to this secondary issue including: public data as proxy for personal data, advanced anonymization techniques, user-generated evaluations, and voluntary donation of personal data.

Speaker bio: David Van Bruwaene is the CEO of SafeToNet Canada and is Director of Research and Development for SafeToNet worldwide. While studying logic and the formal semantics of natural language during graduate work at Cornell University and UC Berkeley, David developed a research interest in Natural Language Processing. Following graduate studies, David lectured at the University of Waterloo on a range of topics, including the Philosophy of Language, Logic, and Business Ethics. During this time David consulted with and eventually became Lead Data Scientist for VISR Inc., a company offering a parenting service providing alerts to parents of safety and wellness concerns identified in their children’s social media activity. He designed and oversaw implementation of AI products at VISR. Subsequently, he was promoted to the CEO position where he facilitated the sale of VISR to SafeToNet. David has won SSHRC, OCE, NSERC, and Mitacs grants on behalf of himself and several organizations. David is industry director of a research program at the University of Ottawa with a mandate to develop new techniques to identify early indicators of the development of mental health conditions in children.

14:15--15:00 Panel Discussion on Privacy, Transparency, and Explainability in AI Applications

Moderator: Libby Kinsey, Digital Catapult

Data is a crucial part of many modern AI systems. Data is being collected at each step of our everyday lives (credit card transactions, internet browsing, email, etc.), and used in ways that we sometimes don’t fully comprehend. Data is an asset that gives competitive advantage to IT giants like Google, Facebook, or Microsoft. In this panel, we will explore issues of data ownership and fair use, the tension between data privacy and system transparency requirements, as well as the trade-off between the interpretability and the power of AI models.

Panelists:

Sébastien Gambs, Université du Québec à Montréal

Sébastien Gambs has held the Canada Research Chair (Tier 2) in Privacy-preserving and Ethical Analysis of Big Data since December 2017. He joined the Computer Science Department of the Université du Québec à Montréal (UQAM) in January 2016, after holding a joint Research Chair in Security of Information Systems between Université de Rennes 1 and Inria from September 2009 to December 2015. His research interests encompass subjects such as location privacy, privacy-preserving data mining, as well as privacy-enhancing technologies in general. He is also interested in solving long-term scientific questions such as addressing the tension between privacy and the analysis of Big Data, as well as the fairness, accountability and transparency issues raised by personalized systems.

Jo Kennelly, Sightline Innovation

As Vice President of Strategy, Jo is responsible for defining and executing strategies to shape the future of Sightline. Jo began her career as a health economist at Ernst & Young and has acquired significant experience serving as senior advisor to companies, universities, foundations, mayors, ministers and Prime Ministers. One of the founders and acting Executive Director of EMILI, Jo has helped her clients raise more than $3 billion in grant, partnership and private investor funding. Jo is a graduate of the University of Otago in Geography and Economics and earned a PhD from the University of Cambridge.

Patricia Kosseim, Osler, Hoskin & Harcourt LLP

Patricia Kosseim is Counsel in Osler's Privacy and Data Management Group and co-leads Osler’s AccessPrivacy platform, an integrated suite of innovative information solutions, consulting services and thought leadership. Patricia is a national leading expert in privacy and access law, having served over a decade as Senior General Counsel and Director General at the Office of the Privacy Commissioner of Canada (OPC). She provided strategic legal and policy advice on complex and emerging privacy issues; advised Parliament on privacy implications of legislative bills; led research initiatives on new information technologies and advanced privacy law in major litigation cases before the courts, including the Supreme Court of Canada. Prior to that, Patricia worked at Genome Canada and the Canadian Institutes of Health Research, where she developed and led national strategies for addressing legal, ethical and social aspects of health and genomic technologies. She began her career in Montreal practicing in the areas of health law, civil litigation, human rights, privacy and labor & employment with another leading national law firm. Patricia has published and spoken extensively on matters of privacy law, health law and ethics. She has taught part-time at the University of Ottawa, Faculty of Law and has held many professional appointments and board memberships, including: Governor on the Board of Governors of The Ottawa Hospital; Chair of The Board of Directors of the Ottawa Hospital Research Institute; Vice-Chair of the Research Integrity Committee of les Fonds de recherche du Quebec; and member of the National DNA Databank Advisory Committee.

Jocelyn Maclure, Université Laval

Jocelyn Maclure is Full Professor of Philosophy at Laval University, where he teaches ethics and political philosophy. He is also the president of the Quebec Ethics in Science and Technology Commission. He has published widely on theories of social justice, cultural and religious diversity, and human rights. His recent work focusses on the ethics of AI and on medical assistance in dying. His publications include Retrouver la raison (Québec Amérique, 2016) and, with Charles Taylor, Secularism and Freedom of Conscience (Harvard University Press, 2011).

15:00 -- 15:15 Coffee break


15:15--16:15 Session 4 Session Chair: Peter Bloomfield, Digital Catapult

15:15 Invited talk by Petra Molnar, University of Toronto

The Human Rights Impacts of AI and Emerging Technologies: Experiments with Migration Management and Refugee Decision-Making

Abstract: With the increasing proliferation of new technologies such as AI and automated decision-making, what are the human rights ramifications of deploying these technologies without appropriate oversight and accountability mechanisms? This presentation is based on "Bots at the Gate," a University of Toronto report on the use of emerging technologies in Canada's immigration system. Artificial intelligence and automated decision-making are increasingly used in various facets of migration management globally. From predictions about population movements in the Mediterranean, to Canada’s use of AI in immigration decisions, to retinal scanning of refugees in Jordan, governments are keen to explore these new technologies, yet often fail to take into account their profound human rights ramifications and real impacts on human lives. These impacts are particularly salient for populations with less access to resources and a diminished ability to exercise their rights, such as migrants and refugees. Concerns around emerging technologies force us to re-examine our assumptions, norms, and available rights frameworks, and these technologies are a useful lens through which to examine state practices, democracy, notions of power, and accountability. https://ihrp.law.utoronto.ca/news/canadas-adoption-ai-immigration-raises-serious-rights-implications#overlay-context=news/canadas-adoption-ai-immigration-raises-serious-rights-implications

Speaker bio: Petra Molnar is a human rights and refugee lawyer in Toronto, Canada. She is a researcher at the International Human Rights Program, University of Toronto Faculty of Law, and the co-author of Bots at the Gate: A Human Rights Analysis of Automated Decision-Making in Canada's Immigration and Refugee System.

15:35 Invited talk by Wilco van Ginkel, a3i

Trust or not to Trust AI - That’s the Question!

Abstract: Given the large impact of AI on our society, we need to ensure that AI can be trusted. For this, we need Responsible AI. In this talk, I discuss what this means and how we can get a handle on it.

Speaker bio: Wilco van Ginkel is a seasoned professional and entrepreneur with in-depth knowledge and extensive international working experience in the fields of AI, Cyber Security, Big Data, and Cloud. His current interest is how to build responsible AI systems that can be trusted. He has fulfilled various roles at strategic, tactical and operational levels within companies. He founded two Canadian organizations, a3i (a3i.ai) and Seculior (seculior.com), as well as international organizations such as the CSA Big Data Working Group and the CSA Dutch Chapter. He enjoys being a public speaker, author and lecturer. Wilco holds Master's degrees in Business Economics, Computer Science, Information Security, and Business Administration (MBA), as well as various certifications in AI and Cyber Security.

15:55 Invited talk by Samantha Brown, Doteveryone

Responsibility in Tech Practice

Abstract: At Doteveryone we want technology to be better for everyone. Innovation doesn’t have to be about moving fast and breaking things; it can also be about achieving what you believe in. This is why we’ve developed TechTransformed, the practice to help responsible innovators stay close to what’s important whilst also making new things happen.

Speaker bio: Sam is the Programme Lead on the TechTransformed programme at Doteveryone, focused on creating practices, tools and resources for product teams to design technology more responsibly. Doteveryone is an independent think tank based in London that explores how technology is changing society, shows what responsible technology can look like, and catalyses communities to shape technology to serve people better.


16:15--17:00 Panel Discussion on Governance and Accountability for AI Algorithms/Products

Moderator: Svetlana Kiritchenko, NRC

New and powerful AI technologies provide remarkable opportunities to increase the efficiency and quality of our work and reduce costs. The wide adoption of AI tools is rapidly transforming our economies and societies. To ensure that these technologies are used in a safe, fair, and accountable manner, we need effective governance and regulations. New standards and governing institutions may be required to direct these efforts at the national and international levels. This panel will bring together researchers and policy makers from government, academia, and industry to discuss the existing regulations and standards for AI use, as well as outstanding issues and their potential solutions.

Panelists:

Rob Davidson, Information and Communications Technology Council

Rob is a seasoned 25-year veteran of the software industry and has excelled in senior roles ranging from Director of Marketing & Communications and VP of Product Management to Chief Technologist. He is a passionate open data advocate, promoting the use of open data for social good and business creation. In June 2016, Rob founded the Open Data Institute Ottawa Node to help crystallize the open data movement in Ottawa. Rob is co-chair of Canada's Multi-Stakeholder Forum for the Open Government Partnership and is also an organizer of the Data for Good Ottawa meetup group. Rob has spoken at national and international events on open data and emerging technologies. Rob is the Manager, Data Analytics and Research at the Information and Communications Technology Council (ICTC). Rob has a BSc in Data Analysis from the University of New Brunswick and an MBA from the University of Western Ontario.

Hessie Jones, International Council on Global Privacy and Security, by Design

As a seasoned digital strategist, author, tech geek and data junkie, Hessie has spent the last few decades on the internet in banking, publishing platforms and tech start-ups, including Yahoo!, Citi, CIBC, Aegis Media, Cerebri, OverlayTV, and Rapp Collins. Hessie also published EVOLVE: Marketing (as we know it) is Doomed! Having seen the rise of social and digital platforms disrupt marketing, she set out to understand how the new market dynamics would impact corporate environments forever, in process, in culture and in mindset. Hessie is also the Founder of ArCompany, advocating AI readiness, education and the ethical distribution of AI, and a regular contributor to the Forbes, Towards Data Science and Cognitive World publications.

Mark Robbins, Institute on Governance

Mark's work principally addresses the impact of the digital revolution on government, governance and public administration, as well as how government itself shapes technological development through its governance of the ICT sector. Mark can be found working on a range of projects related to 21st-century policy areas, including digital transformation, innovation, digital government and artificial intelligence. When not writing research, Mark also organizes the IOG's Policy Crunch speaker series and annual Future Forum conference. Prior to joining the IOG, he held various research positions on economic and political affairs, including at the Munk School at the University of Toronto, the Conference Board of Canada, UN-ESCAP, the Canadian Transportation Agency and the Parliament of Canada.

Sarah Villeneuve, Brookfield Institute for Innovation + Entrepreneurship

Sarah is a Policy Analyst at the Brookfield Institute for Innovation + Entrepreneurship, where she conducts research within the AI + Society work stream on topics related to public policy, ethics, and industry adoption. She is also a member of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems and the IEEE Standards Association. She has previously conducted research on algorithmic discrimination, smart-city marginalization, and predictive analytics for governance. Sarah holds an MSc in Data and Society from the London School of Economics and Political Science and a BA in Politics and International Relations from the University of London.

17:00 -- 17:02 Closing remarks by the organizers