October 16, 2023
Securing
the Future of GenAI
Mitigating Security Risks
Reston, VA & virtual attendance
Registration is closed.
Overview
GenAI technologies, such as large language models (LLMs) and diffusion models, have changed the computing landscape. They have enabled exciting applications, such as generating realistic images, automatic code completion, and document summarization. However, adversaries can use GenAI as well (this is the classic case of "dual use"). For example, adversaries can use GenAI to generate spear-phishing emails or realistic-looking content that spreads misinformation. These attacks were possible before, but GenAI may greatly increase their velocity and scale.
The first workshop on this topic, held in June, focused on changes in the threat landscape and mitigation strategies with the advent of GenAI. The resulting report summarizes that discussion.
This workshop will delve into policy issues, alignment, and detection of AI-generated content, focusing on techniques, both policy and technical, for mitigating the risks of GenAI. Some questions to ponder:
What are some important policy questions related to GenAI?
What are the limits of alignment, and is it achievable?
What are the limits of detecting whether content is AI-generated?
The workshop is organized by Google, Stanford, and UW-Madison.