Princeton AI Alignment

A community working to reduce risks from advanced AI

MAILING LIST » 

APPLY to our Introduction to AI Alignment and Advanced Reading Group by Friday, February 9th, at 11:59 PM.

Our Mission

AI may soon radically transform our society, for better or worse

Experts broadly expect significant progress in AI during our lifetimes, potentially to the point of achieving human-level intelligence. Digital systems with such capabilities would revolutionize every aspect of our society, from business to politics to culture. Worryingly, these machines will not be beneficial by default, and the public interest is often in tension with the incentives of the many actors developing this technology.

LEARN MORE »

We work to ensure AI is developed to benefit humanity's future

Absent a dedicated safety effort, AI systems will outpace our ability to explain their behavior, instill our values in their objectives, and build robust safeguards against their failures. Our organization empowers students and researchers at Princeton to contribute to the field of AI safety.

MAILING LIST »


Get Involved

Apply for our introductory seminars

Want a deep dive into AI alignment or governance? Join our 8-week reading and discussion groups exploring the field. We offer two tracks, one focused on technical challenges and one on societal challenges. Applications are due Wednesday, September 13th.

INTRODUCTORY SEMINARS »

Apply for our advanced reading group

Interested in learning about AI alignment and related research? Check out our advanced reading group! Participants will benefit from having taken our Introduction to AI Alignment seminar or having equivalent background; prior AI alignment knowledge is not strictly required, but ML knowledge is assumed. Applications are due Wednesday, September 13th.

ADVANCED READING GROUP »

Research

Interested in doing AI alignment research? Reach out to the organizers and we can help you find a mentor.

CONTACT US »

Jobs in AI Safety

Check out AI safety positions at a range of organizations, including well-known labs like Anthropic and OpenAI.

AI SAFETY POSITIONS »

Take part in worldwide contests and hackathons

48 hours of intense, fun, and collaborative research on the most interesting questions of our day in machine learning and AI safety!

ALIGNMENT JAM HACKATHONS »

Take a look at a variety of competitions from making benchmarks for safe AI ($500K prize pool) to designing AI that reliably detects moral uncertainty in text ($100K prize pool). 

SAFE AI COMPETITIONS »

How do we build an AI that does not misgeneralize our goals? How do we overcome the seemingly natural drive for self-preservation and build an advanced AI that lets us shut it off? Take a stab at these open problems in AI safety and win $1,000 to $100,000.

AI ALIGNMENT AWARDS »