Thursday, May 23, 2024
Security Architectures for Generative-AI Systems
(SAGAI'24)
a workshop affiliated with the 45th IEEE Symposium on Security and Privacy
at the Hilton San Francisco Union Square, San Francisco, CA
Goal
The workshop will host discussions into the safety, security, and privacy of GenAI-powered applications from a system design perspective. We believe that this new category of important and critical systems requires a whole new approach and we intend for this workshop to explore new kinds of security architectures for GenAI.
Background
Generative AI (GenAI) is quickly advancing and fast becoming a widely deployed technology, with predictions that it will become as transformational as the Internet. At its core, a GenAI-based system relies on machine-learning (ML) models trained on large amounts of data using deep-learning techniques. These powerful and flexible models can be used in a variety of use cases, opening them up to attacks that use adversarial inputs to create malicious outputs.
If the existing research on ML robustness and safety is any indication, it is unlikely that GenAI models can always be trained to be intrinsically safe and secure. A broader issue is therefore the growing architectural complexity of GenAI-based systems, where, more often than not,
multiple models are deployed,
sequences of model queries are needed to complete a task, and
external (non-ML) components are used to enhance the model's operation via database queries or API calls.
Thus it is necessary to address safety and security in a holistic manner that considers the ML models together with the systems built around them.
Workshop Information
The workshop is part of the 45th IEEE Symposium on Security and Privacy workshop series, and will take place in San Francisco, CA, on May 23, 2024.
For more information, see:
Organizing Committee
Mihai Christodorescu (Google)
John Mitchell (Stanford)
Somesh Jha (University of Wisconsin, Madison)
Khawaja Shams (Google)