Overview

March 21-23, 2016, Stanford University, Stanford, CA (USA)

The moral implications of our technological creations have long been a staple of science-fiction rumination. Speculative films from Metropolis to 2001: A Space Odyssey to Blade Runner and RoboCop explore the idea that autonomous intelligence without moral constraint must inevitably lead to deadly hazard. Indeed, this anxiety long predates the modern age of science (and its fiction), as demonstrated by the enduring popularity of the tales of Frankenstein, the Golem of Prague, and the Sorcerer's Apprentice. We have always worried about the unintended consequences of our complex creations.

Artificial Intelligence has now reached a point — not least in the public imagination, and in the prognostications of thought leaders in other fields — where these moral concerns have become the substance of science fact. Our machines are tasked with ever more autonomous decisions that directly affect the well-being of other humans. There is a pressing need to imbue our AI creations with a robust moral sense that people would recognize as a functional model of human morality.

Many researchers are now endeavouring to bring the moral dimension of autonomous non-human agents to public attention. Academic groups such as the International Committee for Robot Arms Control are attempting to forestall or halt the militarization of autonomous agents. South Korea and other countries are working to adapt their legal systems to account for issues of responsibility, liability, insurance, and the like within this new technological realm.

In this symposium, we aim to bring together AI researchers, legal practitioners, philosophers of ethics, and cognitive neuroscientists to shed light on the problems of designing and regulating ethically and morally informed autonomous systems as they become part of our everyday lives. We expect a stimulating interdisciplinary debate that will break new ground on this important and timely topic.