2nd Workshop on Critical Evaluation of Generative Models and Their Impact on Society
20 October 2025
at ICCV 2025, Honolulu, Hawaii
Visual generative models have revolutionized our ability to generate realistic images, videos, and other visual content. However, with great power comes great responsibility. While the computer vision community continues to innovate with models trained on vast datasets to improve visual quality, questions arise regarding the adequacy of evaluation protocols. Automatic measures such as CLIPScore and FID may not fully capture human perception, while human evaluation methods are costly and lack reproducibility. Alongside these technical considerations, artists and social scientists have raised critical concerns regarding the ethical, legal, and social implications of visual generative technologies. The democratization and accessibility of these technologies exacerbate issues such as privacy violations, copyright infringement, and the perpetuation of social biases, necessitating urgent attention from our community.
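For readers unfamiliar with these metrics, both reduce to simple computations over learned embeddings, which is part of why they can diverge from human judgment. Below is a minimal, illustrative Python sketch of CLIPScore (following Hessel et al., 2021) and of FID over precomputed feature vectors (following Heusel et al., 2017); it assumes the Hugging Face transformers CLIP checkpoint "openai/clip-vit-base-patch32", NumPy, and SciPy, and the function names clip_score and fid are our own, not taken from any official implementation.

    import numpy as np
    import torch
    from PIL import Image
    from scipy import linalg
    from transformers import CLIPModel, CLIPProcessor

    # CLIPScore (Hessel et al., 2021): w * max(cos(image_emb, text_emb), 0), with w = 2.5.
    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    def clip_score(image: Image.Image, caption: str) -> float:
        inputs = processor(text=[caption], images=image, return_tensors="pt", padding=True)
        with torch.no_grad():
            out = model(**inputs)
        # image_embeds and text_embeds are already L2-normalized projections.
        cos = (out.image_embeds * out.text_embeds).sum().item()
        return 2.5 * max(cos, 0.0)

    # FID (Heusel et al., 2017): Frechet distance between Gaussians fitted to
    # features of real and generated images (e.g., InceptionV3 pool features).
    def fid(real_feats: np.ndarray, gen_feats: np.ndarray) -> float:
        mu_r, mu_g = real_feats.mean(axis=0), gen_feats.mean(axis=0)
        sigma_r = np.cov(real_feats, rowvar=False)
        sigma_g = np.cov(gen_feats, rowvar=False)
        covmean = linalg.sqrtm(sigma_r @ sigma_g).real  # matrix square root
        diff = mu_r - mu_g
        return float(diff @ diff + np.trace(sigma_r + sigma_g - 2.0 * covmean))

Note that neither computation involves a human rater or any notion of social context; this gap between what is measured and what matters is part of what the workshop seeks to examine.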
This interdisciplinary workshop aims to convene experts from computer vision, machine learning, social sciences, digital humanities, and other relevant fields. By fostering collaboration and dialogue, we seek to address the complex challenges associated with visual generative models and their evaluation, benchmarking, and auditing.
Aylin Caliskan is an Assistant Professor in the Information School and holds an adjunct appointment in the Paul G. Allen School of Computer Science & Engineering at the University of Washington, where she co-directs the UW Tech Policy Lab. Caliskan studies and addresses the societal impact of artificial intelligence (AI) by developing methods and transparency-enhancing approaches. Specifically, Caliskan's research focuses on empirical and technical AI ethics in natural language processing, multimodal machine learning, and human-AI collaboration. Caliskan's work was among the first to rigorously show that machine learning models trained on language corpora contain human-like biases. Her contributions to the impact of natural language processing on fairness and privacy have received best talk and best paper awards. Her honors include recognition as a Rising Star in EECS at Stanford University, being named one of the 100 Brilliant Women in AI Ethics, an IJCAI Early Career Spotlight, the Frontiers of Science Award, and the NSF CAREER Award.
Simran Khanuja has been a PhD student at the Language Technologies Institute in the School of Computer Science at Carnegie Mellon University since August 2022. Her research focuses on expanding the capabilities of multimodal systems to serve a wide range of users across languages and cultures, with applications in localization, information access, conversational AI, education, and assistive technologies. Previously, she was a Pre-Doctoral Researcher at Google Research and worked at Microsoft Research. She has made contributions towards advancing under-represented languages in NLP, and her work has been published at top NLP conferences such as ACL and EMNLP, receiving best paper awards at EMNLP 2024, IEEE BigData 2024, and SLT 2022. She is also a recipient of the Waibel Presidential Fellowship for 2024-25.
Alice Xiang is the Global Head of AI Ethics at Sony. As the VP leading AI ethics initiatives across Sony Group, she manages the team responsible for conducting AI ethics assessments across Sony's business units and implementing Sony's AI Ethics Guidelines. In addition, as the Lead Research Scientist for AI ethics at Sony AI, Alice leads a lab of AI researchers working on cutting-edge research to enable the development of more responsible AI solutions. Alice also recently served as a General Chair for the ACM Conference on Fairness, Accountability, and Transparency (FAccT), the premier multidisciplinary research conference on these topics. Alice is both a lawyer and statistician, with experience developing machine learning models and serving as legal counsel for technology companies. Alice holds a Juris Doctor from Yale Law School, a Master’s in Development Economics from Oxford, a Master’s in Statistics from Harvard, and a Bachelor’s in Economics from Harvard.
Format. We call for novel work and work in progress that has not been published elsewhere. Submissions must follow the ICCV 2025 template and will be peer-reviewed in a single-blind fashion. We welcome the following formats:
Full papers: 8 pages (excluding references). By default, accepted papers will be included in the ICCV workshop proceedings; authors may opt out of the proceedings, in which case the paper should be made available on arXiv.
Extended abstracts: 1 to 4 pages (excluding references). Accepted extended abstracts will not be published in the proceedings but should be made available on arXiv.
Outstanding previously published papers: papers recently published elsewhere, e.g., at ICCV, CVPR, or NeurIPS. We will not review these papers again, and it is not necessary to reformat them to the ICCV 2025 template. A jury of organizers will select these papers. They will be presented as posters during the workshop and will not be included in the proceedings.
Topics. The workshop revolves around two main topics, visual quality assessment and impact on society:
Visual generation quality assessment
Image-quality evaluation metrics aligned with human perception.
Language and vision alignment metrics in text-to-image generation.
Protocols for reproducible human evaluation.
New benchmarks for visual generative models.
Visual generation impact on society
Data auditing and analysis.
Social bias evaluation.
Privacy threat detection.
Intellectual property violation detection.
Impact on the informational environment.
Impact on the cultural environment.
Impact on the natural environment.
Due to ethical considerations, topics involving the generation of images of humans or human body parts, such as faces or other anatomical features, are excluded from the scope of the workshop unless they explicitly focus on the social impact of such images. For example, work solely focused on generating facial images, which may have applications in surveillance, would not be considered within scope unless its primary focus is their social impact.
Submission site: https://cmt3.research.microsoft.com/CEGIS2025
The Microsoft CMT service will be used for managing the peer-reviewing process for this workshop. The service is provided free of charge by Microsoft, which bears all expenses, including costs for Azure cloud services as well as for software development and support.
Paper submission: June 26th, 2025. Extended deadline: July 2nd, 2025.
Notification to authors: July 10th, 2025.
Camera-ready submission: August 18th, 2025.
Workshop: October 20th, 2025.
All deadlines are 11:59 PM, Pacific Time.
Noa Garcia
The University of Osaka
Amelia Katirai
University of Tsukuba
Kento Masui
CyberAgent
Mayu Otani
CyberAgent
Yankun Wu
The University of Osaka
To contact the organizers, please use cegis-workshop@googlegroups.com.