We invite submissions that take principled approaches to advancing the understanding of generative modeling. This understanding may be pursued from diverse perspectives, including mathematical theory, physical modeling, and rigorous empirical analysis. In particular, we encourage contributions that address the following foundational questions:
Expressivity of transformer architectures and diffusion models: What kinds of functions or distributions can generative models represent---and what makes certain architectures (e.g., transformers or diffusion models, including discrete diffusions) particularly effective at capturing linguistic, visual, or algorithmic structures? While approximation theory has provided a solid foundation for components like convolutional and feedforward layers, it remains unclear how these elements, when integrated into modern generative architectures, give rise to high-level capabilities such as in-context learning, compositional generalization, and the ability to emulate reasoning, in particular through test-time computation.
Learnability and Generalization: How do generative models acquire these capabilities from data through self-supervised training? What are the corresponding inductive biases? What is the role of sequence length? Recent theoretical results have begun to shed light on these questions in controlled settings. While insightful, these analyses are tailored to specific toy datasets or architectures that isolate only individual aspects of real-world structure. To move forward, we must identify general principles that unify these results and develop a mathematical framework capable of describing them precisely across models and data regimes.
Interdisciplinary Connections: From protein folding to drug discovery, from materials design to climate modeling, generative models are rapidly transforming how scientific problems are approached. These applications raise new foundational questions: how can domain knowledge be incorporated into model architectures or training objectives? How can models be adapted to data distributions that shift over time, such as non-stationary processes in climate science? What inductive biases are required for models to capture the symmetries, conservation laws, or multi-scale structures inherent in scientific data? Should these structures be ``hardwired'' into the architectures or learned? And what is their impact on optimization? How can we leverage physical principles or physics-inspired ideas to build performant yet efficient generative models? Addressing these questions not only advances our theoretical understanding of generative models but also informs their responsible and effective use in scientific domains.
Submission instructions
Submissions are limited to four single-column pages, plus unlimited pages for references and appendices.
The reviewing process will be double-blind and all submissions must be anonymized. Please do not include author names, affiliations, acknowledgements, or any other identifying information in your submission. Submissions and reviews will not be made public.
All submissions must be made through OpenReview. Please use the standard LaTeX NeurIPS style files.
Note: If you are creating a new OpenReview profile, we strongly recommend using your institutional email address. Profiles created without an institutional email may require a moderation process, which can take up to two weeks.
Timeline
Paper submission deadline: October 17th, 2025, AoE
Review period: October 18th - October 31st, 2025
Notification date: October 31st, 2025, AoE
Dual submissions: This workshop is non-archival and will not have official proceedings. Workshop submissions can be submitted to other venues. We welcome ongoing and unpublished work, including papers that are under review at the time of submission. We do not accept submissions that have been accepted for publication in other venues with archival proceedings.
Attending the workshop: Our workshop is an in-person event, and authors are asked to present a poster at the workshop. A subset of papers will be selected for presentation in 15-minute contributed talks.
Contact: prigm-eurips-2025@googlegroups.com