{Your Group Name} AI Alignment

A community working to reduce risks from advanced AI

MAILING LIST » (!) Insert hyperlink


(!) You may use this part of the landing page for your latest announcements. Feel free to delete it if you don't need one. The section below is currently filled with meaningless placeholder text.

 

Special Announcements

Talk: Genius Instant Solution to the AI Risk by Alan Turing (!) Insert LinkedIn link here. Friday, Dec 8th, 2–10pm in Hall 505.

Abstract: Lorem ipsum dolor sit amet, consectetur adipiscing elit. Sed nec egestas turpis, ut elementum est. Nulla lectus sapien, laoreet interdum velit eu, lobortis porttitor libero. Praesent accumsan interdum dolor, lobortis rutrum nisl finibus ut. In aliquet tempus egestas. Aliquam at velit sed dolor tristique viverra a id leo. Vestibulum efficitur dui a mollis viverra. Integer eu vulputate est, ac facilisis enim. Nulla suscipit iaculis sem, sit amet tincidunt tellus feugiat ut. Ut id lacus vel ipsum malesuada fringilla sit amet id magna.

Relevant works: (!) List the speaker's relevant publications here.

Bio: Aliquam aliquet quam id neque fringilla, eget varius erat ullamcorper. Pellentesque eros felis, molestie congue laoreet a, facilisis a purus. Nam orci quam, efficitur vitae augue sed, feugiat laoreet augue. Sed augue neque, posuere nec massa gravida, condimentum feugiat tellus. Morbi metus est, euismod vel dolor sed, gravida interdum odio. Vivamus ipsum ex, dignissim id sapien vel, gravida convallis felis. Quisque quis fermentum sapien, ut consectetur massa. Praesent metus enim, pulvinar a tortor vitae, suscipit lacinia neque.

Interdum et malesuada fames ac ante ipsum primis in faucibus. Mauris ac felis urna. Praesent vulputate nulla quis justo convallis sodales.

Our Mission

AI may soon radically transform our society, for better or worse

Experts broadly expect significant progress in AI during our lifetimes, potentially to the point of achieving human-level intelligence. Digital systems with such capabilities would revolutionize every aspect of our society, from business to politics to culture. Worryingly, these machines will not be beneficial by default, and the public interest is often in tension with the incentives of the many actors developing this technology.

LEARN MORE »

We work to ensure AI is developed to benefit humanity's future

Absent a dedicated safety effort, AI systems will outpace our ability to explain their behavior, instill our values in their objectives, and build robust safeguards against their failures. Our organization empowers students and researchers at {your university} to contribute to the field of AI safety.

MAILING LIST » (!) Insert hyperlink 

(!) Feel free to tweak these to reflect your group's idiosyncrasies

Get Involved

Apply for our introductory seminars

If you are new to the field and interested in taking a deep dive into AI safety, consider joining one of our 8-week reading and discussion groups. In Intro to AI Safety Governance, you will learn the basics of the AI safety field along with existing and potential ways of steering AI policy at the micro and macro levels. In Intro to AI Alignment, we aim to give you an overview of AI alignment, the research field that seeks to align advanced AI systems with human intentions. Applications for the fall cohort are due {when}.

INTRODUCTORY SEMINARS » (!) Insert hyperlink 

Apply for our advanced reading group

Interested in learning about AI alignment and related research? Check out our advanced reading group! Participants will benefit from having taken our Intro to AI Alignment seminar or having equivalent background; prior AI alignment knowledge is not strictly required, but ML knowledge is assumed. Applications for the fall cohort are due {when}.

ADVANCED READING GROUP » (!) Insert hyperlink 

Research

Interested in doing AI alignment research? Reach out to the organizers and we can help you find a mentor.

CONTACT US » (!) Insert hyperlink 

Jobs in AI Safety

Check out open AI safety positions at a variety of organizations, including some of the bigger ones you may have heard of, such as Anthropic and OpenAI.

AI SAFETY POSITIONS »

Take part in worldwide contests and hackathons

48 hours of intense, fun, and collaborative research on some of the most interesting questions of our day in machine learning and AI safety!

ALIGNMENT JAM HACKATHONS »

(!) Needs updating for newer competitions, or just delete it. Take a look at a variety of competitions, from building benchmarks for safe AI ($500K prize pool) to designing AI that reliably detects moral uncertainty in text ($100K prize pool).

SAFE AI COMPETITIONS » (!) Insert hyperlink

How do we build an AI that does not misgeneralize our goals? How do we overcome a seemingly natural drive for self-preservation and build an advanced AI that lets us shut it off? Take a stab at these open problems in AI safety and win $1,000–$100,000.

AI ALIGNMENT AWARDS »