The 1st Workshop on

Human-Level AI:

Possibilities, Challenges, and Societal Implications

June 22-24, 2023, Boston, USA

Schedule


Sponsors:

Overview:

Recent advances in ML suggest that human-level AI systems are a real possibility, with potentially transformational effects on society, yet no single field currently "owns" this discussion. Many of our colleagues have raised it in private conversations, but it is not a topic that one can easily write papers about, so public dialog remains limited. In addition, thinking about it requires expertise not only from ML but also from fields such as economics, governance, and forecasting. Moreover, "human-level" AI will likely have a different profile of strengths, weaknesses, and capabilities than humans themselves do, so better definitions and understanding are needed. We are interested in forecasting the future development and implications of these systems and in identifying what work needs to be done today, from a research, engineering, and policy perspective, to ensure that human-level AI systems are beneficial.


This is a 3-day workshop, organized by Jacob Steinhardt, Sham Kakade, Roger Grosse, Amanda Askell, and Irina Rish. The goal of the workshop is to bring together a broad variety of perspectives to analyze key questions around human-level AI (HLAI), such as (1) when it will occur and what it will look like, (2) risks and how to mitigate them, including misalignment, misuse, and economic and other societal impacts, and (3) concrete research and policy directions we can pursue today.


Possible topics/sessions:

* Timelines: When will HLAI happen? What are the best arguments that it is far away? That it is very soon?

* Characteristics: What are the likely effects of HLAI? Is "HLAI" the right way to think about future systems, or is it too anthropomorphic? What skills will it be relatively strong or weak at compared to humans? How expensive will it be to run, how quickly can it learn and adapt, etc.?

* Societal effects: What will the proliferation of powerful AI technologies mean for society? Analyze at multiple levels: state actors (authoritarianism, cyberattacks), individuals (widespread access to "DIY" capabilities, possibility of cheating, etc.), and companies and the economy (possible rapid unemployment, corporations needing to adopt AI to compete, the potential to make products more addictive, etc.). What possible risks should we avoid, and what is their magnitude? Does HLAI pose an existential threat to humanity?

* Emergence: Current systems exhibit emergent behavior, which raises the possibility of surprise. How can we understand and tame emergent behavior? What are the most likely pathways (if any) by which agency, deception, or other concerning characteristics could emerge in machine learning models?

* Alignment: HLAI may act at odds with human interests, due to either intrinsic misalignment or misuse, which could pose a serious threat. How can we avoid this? Are there safeguards to reduce the threat?