We invite submissions to the first EurIPS Workshop on Private AI Governance, which will be held at the Bella Center Copenhagen on 6 or 7 December 2025.
Exploring private governance mechanisms - and the technical tools that support them - to complement regulation in AI oversight.
This workshop convenes the technical and governance communities to exchange knowledge on how private governance mechanisms may work in tandem with government regulation, and connects technical insights to incentives and requirements for the responsible development and deployment of AI.
While not a substitute for hard regulation, private governance can be complementary and, for sectors with the potential for far-reaching externalities, can offer flexible adaptation to technological progress uninhibited by jurisdictional and political boundaries. As such, private governance offers a particularly promising opportunity through which technical researchers can contribute to effective AI oversight.
In particular, the workshop invites participants to exchange knowledge on the type of technical work needed to operationalise such mechanisms, and on how advances in technical safety and private governance mechanisms may influence and feed into each other. It thus aims to bring together researchers from technical and policy backgrounds to share insights on how their disciplines can cooperate to build effective private governance mechanisms that contribute to societally beneficial outcomes for AI.
We welcome two types of submissions:
A. Governance architecture and market design
We welcome less-technical submissions addressing concerns, or advocating for positions, relating to the field of private AI governance as a whole. Such submissions could, for example, aim to situate private AI governance within wider discussions of AI capabilities, AI regulation, technical AI governance (TAIG) or AI safety.
Examples (illustrative, not exhaustive) of work relating to governance themes:
Innovating Liability: The Virtuous Cycle of Torts, Technology and Liability Insurance (Lior, 2023).
Insuring Emerging Risks from AI (Weil et al., 2024).
AI Governance through Markets (Tomei et al., 2025).
Procurement as AI Governance (Ben Dor & Coglianese, 2021).
Regulatory Markets: The Future of AI Governance (Hadfield & Clark, 2023).
Insuring Generative AI: Risks and Mitigation Strategies (Munich Re, 2024).
Public vs Private Bodies: Who Should Run Advanced AI Evaluations and Audits? A Three-Step Logic Based on Case Studies of High-Risk Industries (Stein et al., 2024).
A Framework for the Private Governance of Frontier Artificial Intelligence (Ball, 2025).
Understanding accountability in algorithmic supply chains (Cobbe, Veale & Singh, 2023).
B. Technical artefacts for private oversight
We welcome research on tools and approaches that might enable private governance mechanisms and integrate into assurance, certification, insurance and procurement workflows, as well as perspectives reflecting on their limitations and implications for practice.
Examples (illustrative, not exhaustive) of work relating to technical themes:
Holistic Evaluation of Language Models (HELM) (Liang et al., 2022).
HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal (Mazeika et al., 2024).
Model Cards for Model Reporting (Mitchell et al., 2019).
Datasheets for Datasets (Gebru et al., 2021).
A Watermark for Large Language Models (Kirchenbauer et al., 2023).
C2PA Technical Specification (Coalition for Content Provenance and Authenticity (C2PA), 2023).
Data Shapley: Equitable Valuation of Data for Machine Learning (Ghorbani & Zou, 2019).
Estimating Training Data Influence by Tracing Gradient Descent (TracIn) (Pruthi et al., 2020).
Helpful, harmless, honest? Sociotechnical limits of AI alignment and safety through Reinforcement Learning from Human Feedback (Lindstrom et al., 2025).
From policy to practice in data governance and responsible data stewardship: System design for data intermediaries (Powar et al., 2025).
Format
Submissions must follow the NeurIPS submission format and can be either regular papers (up to 9 pages) or tiny papers (up to 2 pages). Please see the NeurIPS Call for Papers for guidance on formatting requirements.
We expect tiny papers to follow roughly the same structure as regular papers, but with a maximum of 2 pages of main text. References and appendices do not count towards the page limit (though reviewers will focus only on the main text). Although we recommend that authors submit regular papers where possible, tiny papers are open to everyone given the tight timelines.
Submissions will be managed through OpenReview. Please ensure that the submitting author is registered on the OpenReview platform, and note OpenReview's moderation policy: new profiles created without an institutional email go through a moderation process that can take up to two weeks, while new profiles created with an institutional email are activated automatically. Please find the submission link here.
Publication and Presentation
All accepted papers will be presented as posters during the workshop, and at least one author of each paper must attend in person. Additionally, a select number of outstanding papers will be invited for lightning talks.
The workshop is non-archival, and links to accepted papers will be published on the workshop website. We also welcome papers that have already undergone peer review and/or have been published at other venues.
Important dates
Paper submissions open: 15 September 2025
Paper submissions deadline: 17 October 2025
Author notification: 31 October 2025
Main workshop: 6 or 7 December 2025 (TBD)
Questions?
Please email the workshop organizers at: jsmakman@adalovelaceinstitute.org