We welcome submissions at the intersection of responsible AI and generative AI, where generative models may be single-modal or multi-modal with at least one of the modalities being visually grounded. Topics of interest include but are not limited to:
Fairness, biases, harms, risks, and socio-ethical failures of generative AI systems.
Privacy and copyright concerns in generative AI systems.
Hallucinations of generative AI systems.
Factuality of the generated content.
The environmental impact of training large-scale generative AI systems.
Methods for measuring fairness, bias, privacy, and diversity of generations, including new libraries.
Challenges with addressing fairness, bias, privacy, and diversity in generative AI systems.
Generative models as socio-technical systems: the role of these technologies and the impact of their development on people, including people from marginalized communities.
Intersection of culture, religion and generative AI systems. How are cultures affected by the introduction of these technologies? How are these models performing in different cultures and locations in the world? Can these models contribute to cultural appropriation?
Development of taxonomies of harms and risks, capturing the tension between restrictive and expansive interventions (e.g., disallowing certain prompts vs. encouraging diversity of generations).
Detection of hallucinated and fake information, i.e., assessing the factuality of, and detecting, content generated or manipulated by generative AI systems.
Methods for mitigating bias, unfairness, privacy leakage, and hallucination in generative AI systems.
Participatory and HCI approaches to assessing generated images for social and ethical risks.
Explainability, interpretability and transparency techniques for generative AI systems.
Practical use cases of generative AI systems.
Models trained with generated data -- promises and concerns around representation diversity, bias, and ethical considerations.
Practical and ethical challenges when using generative AI systems in representation learning, e.g., to augment group representations.
Applications of generative models to improve evaluations of fairness, diversity, bias, etc., of discriminative and representation learning models.
Legal and regulatory considerations of the data used to train generative AI systems, the models, and the generated content.
Ethics of data collection, annotation, and use of data for generative modeling.
Ethics of use of generated data.
Qualitative and human evaluation of generative models, including aspects of the models that cannot be assessed by quantitative metrics.
We invite submissions of four pages (double column), excluding references. Papers should follow the CVPR formatting style and be submitted through the CMT portal (https://cmt3.research.microsoft.com/ReGenAI2025). Submitted work may include shorter versions of work presented at the main conference or other venues. We also encourage submissions of preliminary work on topics relevant to the workshop, which may later be submitted to a different venue. Accepted short papers will be linked on the workshop webpage and presented at the workshop's poster session.
The ReGenAI workshop reviewing is double blind. Please do not include any information or website links in the manuscript that may identify the authors of the submission.
The workshop will offer the opportunity to publish accepted papers as workshop proceedings. Authors of submitted papers will be asked whether they agree to publish their paper. Note that only papers containing new research will be considered for publication.
Paper submission deadline: March 10 (11:59PM PST)
Notification to authors: March 31st (11:59PM PST)
Camera ready submission: April 7th (11:59PM PST)
CMT Submission link: https://cmt3.research.microsoft.com/ReGenAI2025