As part of its mission, the Cooperative AI Foundation (CAIF) is hosting a fortnightly seminar series on New Directions in Cooperative AI, in which we invite leading thinkers to offer their vision for research on cooperative AI. Unlike typical academic talks, these seminars will be explicitly agenda-setting, describing a line of work that many researchers could pursue and that CAIF could support. If you are interested in submitting a proposal for a seminar, we invite you to apply here. Proposals will be evaluated on a rolling basis, with applicants receiving notification of acceptance, revision, or rejection within one month. In recognition of the additional burden of preparing such a talk, successful applicants will also receive a $5,000 honorarium.
Our next seminar will be given by Jesse Clifton (Center on Long-Term Risk, Cooperative AI Foundation, NCSU) and Sammy Martin (KCL, Center on Long-Term Risk). We will also be joined by discussants Zoe Cremer (University of Oxford, University of Cambridge) and José Hernández-Orallo (Universitat Politècnica de València, University of Cambridge). Further details of the upcoming talk can be found below, and you can also subscribe to our public calendar of events via Google or by adding this ICS URL to your calendar application.
Previous talks, such as those by Vincent Conitzer and Joel Leibo, can be viewed on our YouTube channel.
Differential Progress in Cooperative AI: Motivation and Measurement
Speakers: Jesse Clifton (Center on Long-Term Risk, Cooperative AI Foundation, NCSU) and Sammy Martin (KCL, Center on Long-Term Risk)
Discussants: Zoe Cremer (University of Oxford, University of Cambridge) and José Hernández-Orallo (Universitat Politècnica de València, University of Cambridge)
Time: 13:00-14:00 GMT on Thursday 10th March 2022
Calendar Event: Google, ICS file
Zoom Meeting: Link (Please sign up to the mailing list here to receive the passcode, or send us an email if you're having trouble accessing the event)
Abstract
Skills that lead to strong performance in multi-agent systems can be detrimental to social welfare. This is true even of skills that play a central role in cooperation: the ability to understand other agents can make it easier to deceive and manipulate them, and the ability to commit to peaceful agreements can also facilitate coercive commitments. We will argue that if the Cooperative AI Foundation is to achieve its mission of improving the cooperative intelligence of advanced AI systems for the benefit of all humanity, it should focus on improving skills that robustly lead to improvements in social welfare, rather than those that are dangerously dual-use. We refer to this as “differential progress on cooperative intelligence”. To know whether we are making differential progress, we need to be able to rigorously define and measure it. Towards these ends, we present early-stage work on the definition and measurement of cooperative intelligence.
Bios
Jesse Clifton is a research analyst at the Cooperative AI Foundation and a researcher at the Center on Long-Term Risk, where he works on possible causes of conflict involving AI systems. He is also a PhD student in statistics at North Carolina State University.
Sammy Martin is a PhD student at the King's/Imperial Center for Doctoral Training on Safe and Trusted AI. He has also worked as an AI forecaster for the Modelling Transformative AI Risks project and as an independent researcher supported by the Center on Long-Term Risk. He earned his Master's degree in Artificial Intelligence from the University of Edinburgh.