Responsible Computer Vision
ECCV 2022 Workshop
Updates
10/21/2022: The posters for the accepted works are now posted! The workshop will take place on Oct 23rd at the David Intercontinental Hotel in Tel-Aviv, in Ballroom G. We will have an in-person poster session in the morning and a virtual poster session in the afternoon (details for the virtual session will be posted soon). Looking forward to seeing all of you!
08/17/2022: There was a bug in the CMT setup that prevented submission of extended abstracts. This has now been fixed; thank you to those of you who pointed it out.
07/07/2022: After receiving feedback from many of you, we will not publish the short papers in the ECCV proceedings. This also lets us extend the submission deadline to accommodate more submissions. Looking forward to reading all of the submissions!
About
Most modern computer vision models, such as object and face recognition systems, have tremendously improved user experiences in real-world products such as Pinterest, Twitter, self-driving cars, and Google Photos. Such models typically rely heavily on large-scale training data to achieve good performance. Recently, it has been observed that the most popular open-source datasets are skewed towards some subgroups over others [1, 2, 3] and that the resulting models serve under-represented groups (e.g., people with darker skin tones, gender minorities, and images from non-Western countries) poorly. This often leads to a trained (and deployed) model that does not work equally well for every user and can amplify existing societal biases [4].
Through this workshop, we want to initiate crucial conversations that span the entire computer vision pipeline. This begins with the challenges of constructing privacy-aware, large-scale datasets that capture geographical, cultural, and demographic diversity in both data and annotators in a scalable and efficient manner. Beyond responsible data collection, there are numerous open research problems around learning robust, fair, and privacy-preserving feature representations and around model interpretability. For instance, how do we train large-scale models that prevent the inadvertent use of sensitive information such as a person's gender, age, or race? What mechanisms should we enforce to prevent the inference of such sensitive information? Can we transfer knowledge from the data-rich (head) classes to the data-poor (tail) classes to learn fairer models? What forms of training objectives can help with bias mitigation? What visualization and analytical tools should we employ to understand why a model performs better on certain subgroups than others?
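As one illustrative example of the kind of training objective mentioned above, the sketch below reweights a standard cross-entropy loss so that examples from under-represented subgroups (or tail classes) contribute more per example. The tensor names and the surrounding training step are assumptions for illustration only, not a method prescribed by the workshop.

```python
# A minimal sketch (not a workshop-prescribed method) of one common
# bias-mitigation training objective: reweighting cross-entropy so that
# examples from smaller subgroups carry larger per-example weight.
import torch
import torch.nn.functional as F

def reweighted_cross_entropy(logits, targets, group_ids, group_counts):
    """Cross-entropy where each example is weighted inversely to the size
    of the subgroup it belongs to (weights normalized to mean 1)."""
    weights = 1.0 / group_counts[group_ids].float()   # small group -> large weight
    weights = weights / weights.mean()
    per_example = F.cross_entropy(logits, targets, reduction="none")
    return (weights * per_example).mean()

# Hypothetical usage inside a training step (model, optimizer, and the
# group bookkeeping are assumed to exist):
# loss = reweighted_cross_entropy(model(images), labels, group_ids, group_counts)
# loss.backward(); optimizer.step()
```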
Some of these questions are closely tied to various research problems in the field of computer vision at large, such as weakly supervised learning, active learning, low-shot learning, domain adaptation, meta-learning, adversarial learning, federated learning, and many more. However, we believe these topics have not been sufficiently explored in the context of building fair models that work for everyone.
Thus, we strongly believe that a healthy discussion within the computer vision community about building fair and responsible computer vision models is more crucial now than ever. Our workshop aims to facilitate that discussion. A key goal of this workshop is to provide a common forum for a diverse group of computer vision researchers and practitioners, in industry and in academia, to brainstorm and propose best practices for building computer vision models that work well for everyone in a safe, responsible, and fair manner. Specifically, we hope to:
Spread awareness of the societal implications of algorithmic biases in deployed computer vision models.
Initiate an in-depth discussion on the general legal, privacy, ethical, and policy implications for AI practitioners to consider while constructing representative datasets and training models.
Identify best practices in building large-scale privacy-aware computer vision datasets and models in a fair and responsible manner.
Establish evaluation methods, tools, and frameworks for identifying and measuring the fairness and interpretability of computer vision models (see the brief sketch after this list).
Propose effective, state-of-the-art solutions to mitigate different forms of algorithmic biases.
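To make the evaluation goal above concrete, here is a minimal sketch of disaggregated (per-subgroup) evaluation: accuracy is reported for each subgroup, along with the gap between the best- and worst-served groups. The function and array names and the toy data are illustrative assumptions, not a framework endorsed by the workshop.

```python
# A minimal sketch of disaggregated (per-subgroup) evaluation.
import numpy as np

def subgroup_accuracies(labels, predictions, groups):
    """Return accuracy for each subgroup present in `groups`."""
    labels, predictions, groups = map(np.asarray, (labels, predictions, groups))
    return {
        g: float((predictions[groups == g] == labels[groups == g]).mean())
        for g in np.unique(groups)
    }

def accuracy_gap(per_group):
    """Gap between the best- and worst-served subgroup."""
    return max(per_group.values()) - min(per_group.values())

# Toy usage with hypothetical model outputs:
per_group = subgroup_accuracies(
    labels=[1, 0, 1, 1, 0, 1],
    predictions=[1, 0, 0, 1, 0, 0],
    groups=["a", "a", "a", "b", "b", "b"],
)
print(per_group, accuracy_gap(per_group))
```

Reporting such per-subgroup breakdowns alongside aggregate metrics is one common starting point for the kind of evaluation practices the workshop aims to discuss.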
Through this workshop, we hope to create a culture of computer vision research that addresses the aforementioned goals of responsible AI right from the outset of model development.