Workshop on Ethical, Social and Governance Issues in AI
December 7th, 2018, Montreal, Canada
Ethics is the philosophy of human conduct: it addresses the question "How should we act?" Throughout most of history, the repertoire of actions available to us was limited, and their consequences were constrained in scope and impact by dispersed power structures and slow trade. Today, in our globalised and networked world, a single decision can affect billions of people instantaneously and have tremendously complex repercussions. Machine learning algorithms are replacing humans in many of the decisions that shape our everyday lives. How should we decide how machine learning algorithms and their designers ought to act? What is the ethics of today, and what will it be in the future?
This workshop aims to bring together experts from a variety of disciplines (e.g. ethics, computational social science, law, machine learning / AI) and practitioners to explore the interaction of AI, society, and ethics through three general themes:
- Advancing and Connecting Theory and Methodology:
  - How do fairness, accountability, transparency, interpretability and causality relate to ethical decision making?
  - How can such principles be built into machine learning systems, and what are the trade-offs involved?
  - What can we learn from economics and social science to understand the implications of the widespread application of machine learning algorithms for markets and society?
  - How can ethical principles established in philosophy and law be applied to questions about the ethics of machine decisions? Are these principles still relevant today?
  - What are the fundamental principles on which we might base an ethical approach to AI?
- Tools and Applications:
  - Real-world examples of how ethical considerations are affecting the design of ML systems and pipelines.
  - Applications of algorithmic fairness, transparency or interpretability to produce better outcomes.
  - Approaches to designing systems that are auditable by third parties whilst maintaining the privacy of user data.
  - Tools that aid in identifying and/or alleviating issues such as bias, discrimination, filter bubbles and feedback loops, and that enable actionable exploration of the resulting trade-offs.
- Policy and Governance:
  - How can regulatory, legal or policy frameworks be designed to continue to encourage innovation, so that society as a whole can benefit from AI, whilst still providing protection against its harms?
  - What are the implications of the shift to algorithmic decision making for existing laws, social policy and governance structures?
  - How are organizations, institutions and industry responding to regulatory efforts such as the GDPR?
- Hanna Wallach, Microsoft Research
- Manuel Gomez-Rodriguez, Max Planck Institute
- Roel Dobbe, UC Berkeley and AI Now Institute
- Jon Kleinberg, Cornell
- Hoda Heidari, ETH Zurich
- Rich Caruana, Microsoft Research