Responsible Computer Vision

CVPR 2021 Workshop

June 20, 2021

Updates

YouTube playlist

About


Most modern computer vision models, such as object and face recognition systems, have tremendously improved user experiences in real-world products such as Pinterest, Twitter, self-driving cars, and Google Photos. These models typically rely heavily on large-scale training data to achieve good performance. Recently, it has been observed that the most popular open-source datasets are skewed towards some subgroups over others [1, 2, 3] and that the resulting models serve less-represented groups (e.g., people with darker skin tones, gender minorities, and images from non-Western countries) poorly. This often leads to trained (and deployed) models that do not work equally well for every user and can amplify existing societal biases [4].


Through this workshop, we want to initiate crucial conversations that span the entire computer vision pipeline. This begins with the challenges in constructing privacy-aware, large-scale datasets that capture geographical, cultural, and demographic diversity in both data and annotators in a scalable and efficient manner. Going beyond responsible data collection, there are numerous open research problems around learning robust, fair, and privacy-preserving feature representations, as well as model interpretability. For instance, how do we train large-scale models that prevent the inadvertent use of sensitive information such as a person's gender, age, or race? What mechanisms should we enforce to prevent the inference of such sensitive information? Can we transfer knowledge from the data-rich (head) classes to the data-poor (tail) classes to learn fairer models? What forms of training objectives can help with bias mitigation? What visualization and analytical tools should we employ to understand why a model performs better on some subgroups than on others?
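
As one hedged illustration of the kind of bias-mitigation training objective asked about above, the sketch below reweights a standard cross-entropy loss so that small subgroups in a batch are not drowned out by large ones. The subgroup annotations, the function name, and the weighting scheme are all assumptions made for illustration, not a method prescribed by the workshop.

    import torch
    import torch.nn.functional as F

    def group_reweighted_loss(logits, labels, groups):
        # Hypothetical sketch: cross-entropy where each example is weighted
        # inversely to the size of its subgroup in the batch, so that
        # under-represented subgroups contribute proportionally more.
        #   logits : (N, C) model outputs
        #   labels : (N,)   ground-truth class indices
        #   groups : (N,)   integer subgroup ids (assumed annotations)
        per_example = F.cross_entropy(logits, labels, reduction="none")
        _, inverse, counts = torch.unique(
            groups, return_inverse=True, return_counts=True)
        weights = 1.0 / counts[inverse].float()   # rare groups get larger weight
        weights = weights / weights.sum()         # normalize weights to sum to 1
        return (weights * per_example).sum()

    # Toy usage: 4 examples, 3 classes, two subgroups of sizes 3 and 1.
    logits = torch.randn(4, 3)
    labels = torch.tensor([0, 1, 2, 1])
    groups = torch.tensor([0, 0, 0, 1])
    print(group_reweighted_loss(logits, labels, groups))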


Some of these questions are closely tied to various research problems in the field of computer vision at large, such as weakly supervised learning, active learning, low-shot learning, domain adaptation, meta-learning, adversarial learning, federated learning, and many more. However, we believe these topics have not been sufficiently explored in the context of building fair models that work for everyone.


Thus, we strongly believe that a healthy discussion within the computer vision community about building fair and responsible computer vision models is crucial right now. Our workshop aims to facilitate that discussion. A key goal of this workshop is to provide a common forum for a diverse group of computer vision researchers and practitioners in industry and academia to brainstorm and propose best practices towards building computer vision models that work well for everyone in a safe, responsible, and fair manner. Specifically, we hope to:


  • Spread awareness of the societal implications of algorithmic biases in deployed computer vision models.

  • Initiate an in-depth discussion on the legal, privacy, ethical, and policy implications that AI practitioners should consider while constructing representative datasets and training models.

  • Identify best practices in building large-scale privacy-aware computer vision datasets and models in a fair and responsible manner.

  • Establish evaluation methods, tools, and frameworks for identifying and measuring the fairness and interpretability of computer vision models (see the sketch after this list).

  • Propose effective, state-of-the-art solutions to mitigate different forms of algorithmic bias.
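
As a minimal, hypothetical sketch of the evaluation tooling mentioned in the list above, the code below computes per-subgroup accuracy and the largest accuracy gap between subgroups. The function name and the toy inputs are assumptions made for illustration only.

    import numpy as np

    def subgroup_accuracy_report(y_true, y_pred, groups):
        # Hypothetical sketch: per-subgroup accuracy and the largest gap.
        #   y_true, y_pred : 1-D arrays of ground-truth and predicted labels
        #   groups         : 1-D array of subgroup identifiers
        y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
        report = {}
        for g in np.unique(groups):
            mask = groups == g
            report[g] = float((y_true[mask] == y_pred[mask]).mean())
        gap = max(report.values()) - min(report.values())
        return report, gap

    # Toy usage: the model is perfectly accurate on group "A" but not on "B".
    acc, gap = subgroup_accuracy_report(
        y_true=[1, 0, 1, 1, 0, 1],
        y_pred=[1, 0, 1, 0, 0, 0],
        groups=["A", "A", "A", "B", "B", "B"])
    print(acc, gap)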


Through this workshop, we hope to create a culture of computer vision research that addresses the aforementioned goals of responsible AI right from the outset of model development.