Keynote Speakers

AI in Society

Understanding and Shaping the Future of AI to Benefit Everyone

Joanna Bryson

Bias, Trust, and Doing Good: Scientific Explorations of Topics in AI Ethics

This talk takes a scientific look at the cultural phenomena behind the #tags many people associate with AI ethics and regulation. I will introduce the concept of public goods, show how these relate to sustainability, and then provide a quick review of three recent results.

Joanna Bryson is Professor of Ethics and Technology at the Hertie School. Her research focuses on the impact of technology on human cooperation and on AI/ICT governance. From 2002 to 2019 she was on the Computer Science faculty at the University of Bath. She has also been affiliated with the Department of Psychology at Harvard University, the Department of Anthropology at the University of Oxford, the School of Social Sciences at the University of Mannheim, and the Princeton Center for Information Technology Policy. During her PhD she observed the confusion generated by anthropomorphised AI, leading to her first AI ethics publication, “Just Another Artifact,” in 1998. In 2010, she co-authored the first national-level AI ethics policy, the UK's Principles of Robotics. She holds degrees in psychology and artificial intelligence from the University of Chicago (BA), the University of Edinburgh (MSc and MPhil), and the Massachusetts Institute of Technology (PhD).

Sylvester Johnson

Race, Cyborgs, and Weaponized AI: How Will Algorithmic Security Impact Multiracial Democracy?

Recent decades have witnessed the remarkable development of successful Artificial Intelligence (AI) applications. Massive data sets and complex deep neural networks are enabling machine learning software to transform multiple domains—healthcare, law, marketing, transportation, entertainment, finance, and the military—by leveraging AI-enabled digital technology systems.

Combining humans with intelligent machine systems (cybernetics) is arguably the most promising yet fraught dimension of the AI frontier. Cybernetics offers new possibilities for medical therapies and warfare, as smart prosthetics restore capacity to recovering patients and military industrial states ponder the value of law enforcement and soldiers digitally enhanced for national security.

As we confront the digital future of humanity, what lessons should our technological society learn from the global history of race? How have racial regimes shaped the politics of ‘being human’? How might human-machine hybridization determine the future of race? In this talk, Sylvester Johnson examines what a “human security” approach implies for understanding the linkage among cybernetics, racialization practices, democracy, and national security.

Sylvester A. Johnson is Assistant Vice Provost for the Humanities and Executive Director of the “Tech for Humanity” initiative at Virginia Tech. He is the founding director of Virginia Tech’s Center for Humanities, which is supporting human-centered research and humanistic approaches to the guidance of technology. Sylvester’s research has examined religion, race, and empire in the Atlantic world; religion and sexuality; national security practices; and the impact of intelligent machines and human enhancement on human identity and race governance. In addition to co-facilitating a national working group on religion and US empire, Johnson led an Artificial Intelligence project that developed a successful proof-of-concept machine learning application to ingest and analyze a humanities text. He is the author of The Myth of Ham in Nineteenth-Century American Christianity (Palgrave 2004), a study of race and religious hatred that won the American Academy of Religion’s Best First Book award; and African American Religions, 1500-2000 (Cambridge 2015), an award-winning interpretation of five centuries of democracy, colonialism, and freedom in the Atlantic world. Johnson has also co-edited The FBI and Religion: Faith and National Security Before and After 9/11 (University of California 2017). He is a founding co-editor of the Journal of Africana Religions. He is currently producing a digital scholarly edition of an early English history of global religions and writing a book on human identity in an age of intelligent machines and human-machine symbiosis. 

Benjamin Kuipers

Hunting for Unknown Unknowns: AI and Ethics in Society

Homo sapiens is considered a “hyper-cooperative species,” and this aptitude for cooperation may be responsible for our dominance over the Earth. Cooperation promises great benefits, but each participant is vulnerable to exploitation by their partners. Successful cooperation requires trust: acceptance of vulnerability, with confidence that it will not be exploited. The culture of any society includes ethical principles specifying how to be trustworthy, to whom trustworthiness is owed, and how to recognize who is likely to be trustworthy. The continued viability of a society depends on how well this mechanism does its job.

Deployments of AI systems for autonomous vehicles, facial recognition, medical diagnosis, decisions about credit or parole, and other domains have raised questions about their trustworthiness. These questions apply not only to robotic and AI systems based on digital computers, but also to institutional structures such as governments and corporations. Trust failures arise when a carefully designed decision mechanism confronts a situation outside its comprehension: an “unknown unknown.”

Science, engineering, economics, law, and public policy all depend on models to cope with the unbounded complexity of the real world. A model specifies a limited set of elements and relations that support inferences relevant to the purpose of the model. Everything else is considered negligible relative to the purpose of the model. If some of these unknown unknowns stop being negligible, the model can fail, possibly with serious consequences.

Game-theoretic reasoning, maximizing expected utility, can be a powerful decision tool in multi-agent settings, but its validity depends critically on the quality of the model, especially the definition of utility. A particular failure mode, when the utility measure is oblivious to trust and trustworthiness, is to encourage each participant to optimize expected utility by exploiting the vulnerabilities of the other participants.

Trust and cooperation are thereby discouraged. Widespread loss of trust and cooperation can become an existential threat to a society. Our task in AI is to identify potentially dangerous unknown unknowns, and find appropriate ways to incorporate them into our models, supporting trust and cooperation in our society.

Benjamin Kuipers is a Professor of Computer Science and Engineering at the University of Michigan.  He was previously at the University of Texas at Austin, where he held an endowed professorship and served as Computer Science department chair.  He received his B.A. from Swarthmore College, his Ph.D. from MIT, and he is a Fellow of AAAI, IEEE, and AAAS.  His research in artificial intelligence and robotics has focused on the representation, learning, and use of foundational domains of commonsense knowledge, including knowledge of space, dynamical change, objects, and actions.  He is currently investigating ethics as a foundational domain of knowledge for robots and other AIs that may act as members of human society. 

Frank Pasquale

Renewing the Political Economy of Automation

Too many CEOs tell a simple story about the future of work: if a machine can do what you do, your job will be automated. They envision everyone from doctors to soldiers rendered superfluous by ever-more-powerful artificial intelligence. They offer stark alternatives: make robots or be replaced by them.

Another story is possible. In virtually every walk of life, robotic systems can make labor more valuable, not less. Frank Pasquale tells the story of nurses, teachers, designers, and others who partner with technologists, rather than meekly serving as data sources for their computerized replacements. This cooperation reveals the kind of technological advance that could bring us all better health care, education, and more, while maintaining meaningful work. These partnerships also show how law and regulation can promote prosperity for all, rather than a zero-sum race of humans against machines. AI is poised to disrupt our work and our lives, but with wise regulation we can harness these technologies rather than fall captive to them.

How far should AI be entrusted to assume tasks once performed by humans? What is gained and lost when it does? What is the optimal mix of robotic and human interaction? New Laws of Robotics makes the case that policymakers must not allow corporations or engineers to answer these questions alone. The kind of automation we get—and who it benefits—will depend on myriad small decisions about how to develop AI. Pasquale proposes ways to democratize that decision making, rather than centralize it in unaccountable firms. Sober yet optimistic, New Laws of Robotics offers an inspiring vision of technological progress, in which human capacities and expertise are the irreplaceable center of an inclusive economy.

Frank Pasquale is an expert on the law of artificial intelligence, algorithms, and machine learning, and author of New Laws of Robotics: Defending Human Expertise in the Age of AI (Harvard University Press, 2020). His widely cited book, The Black Box Society (Harvard University Press, 2015), develops a social theory of reputation, search, and finance, and promotes pragmatic reforms to improve the information economy, including more vigorous enforcement of competition and consumer protection law. The Black Box Society has been reviewed in Science and Nature, published in several languages, and the fifth anniversary of its publication was marked by an international symposium in Big Data & Society.

Pasquale is Professor of Law at Brooklyn Law School, an Affiliate Fellow at the Yale Information Society Project, and the Minderoo High Impact Distinguished Fellow at the AI Now Institute. He is also the Chairman of the Subcommittee on Privacy, Confidentiality, and Security of the National Committee on Vital and Health Statistics at the U.S. Department of Health and Human Services. 

He has testified before or advised groups ranging from the Department of Health and Human Services, the House Judiciary Committee, the House Energy and Commerce Committee, the Senate Banking, Housing, and Urban Affairs Committee, the Federal Trade Commission, and directorates-general of the European Commission.

Pasquale has keynoted conferences on information law & policy in Asia, Australia, Europe, and North America, and is a member of the American Law Institute. He has been recognized as one of the leaders of a global movement for “algorithmic accountability,” and co-organized one of the leading conferences on the topic in 2016. In media and communication studies, he has developed a comprehensive legal analysis of barriers to (and opportunities for) the regulation of internet platforms. He is on the board of the Association to Promote Political Economy and Law (APPEAL), which has hosted cutting-edge thinkers in antitrust and competition policy at its workshops.