Call for Papers

Two Track Options

We invite contributions on the topics listed below in two formats:

  1. extended abstracts (1 page excluding references) and

  2. short papers (up to 4 pages excluding references).

For both submission formats, we discourage authors from submitting supplementary material, and we cannot guarantee that any such material will be reviewed.

The CMT paper submission system will allow authors to indicate the track (short paper or extended abstract) during submission.

Extended Abstract

Rolling acceptances until May 27, 2021.

Short Paper

These will be published in the Workshop Proceedings at CVPR 2021 (prior workshop proceedings available here).

Timeline:

  • CMT portal opens: Feb 1, 2021

  • Submission deadline: March 25, 2021, 11:59 PM PST (extended from March 10, 2021)

  • Author notification: April 10, 2021 (extended from April 5, 2021)

  • Camera-ready for accepted papers: April 19, 2021

Submission Instructions

All submissions should be in the anonymized CVPR 2021 format and will be subject to a double-blind review process. Submitted papers should not be published, accepted, or under review elsewhere. Appendices may be attached beyond the page limit, but reviewers are not required to read them.

Submissions for both abstracts and short papers can be done at https://cmt3.research.microsoft.com/RCV2021/.

Topics

This workshop will broadly address three primary aspects of responsible artificial intelligence in the context of computer vision: fairness; interpretability and transparency; and privacy. Specifically, we have the following topics:

  1. Fairness

    • Causes and societal implications of bias in ML models.

    • Beyond data and modeling: Societal causes for bias in ML models.

    • Societal implications of ML models predicting sensitive attributes.

    • Standardizing AI Ethics across the research and industrial community.

    • How should researchers identify the relevant risks and harms posed by their work?

    • Incorporating ethics into AI education.

  2. Challenges and solutions in data sourcing and collection

    • Data augmentation methods, e.g., using GANs to augment data.

    • Weak, semi-, and self-supervised techniques for representative data collection.

    • Challenges in sourcing geographically diverse data.

    • Ensuring fairness in annotations.

    • Active learning techniques to construct representative and fair datasets.

    • Exploring any other alternative sources of data.

  3. Evaluation tools and metrics to identify bias and measure fairness

    • Tooling to visualize, analyze, and report fairness.

    • Metrics to measure disparate treatment for various ML tasks (e.g., detection, segmentation).

  4. AI Fairness: Modeling techniques

    • Meta-learning approaches to ensure algorithmic fairness for tail classes.

    • Domain adaptation methods for tail classes.

    • Adversarial learning to be blind to sensitive attributes.

  5. Interpretability and Explainability

    • Practical techniques for modeling and evaluating interpretable and explainable large-scale machine learning systems.

    • Visualizing feature representations in deep neural networks for fairness.

  6. Privacy and Fairness

    • Privacy-aware data collection approaches.

    • General policy, legal, and user-privacy implications of gathering annotations of sensitive attributes such as gender and age (not specific to any particular organization’s position).

    • Privacy-aware learning techniques to prevent the use of sensitive information.

    • Mechanisms to prevent sensitive information inference.

    • Federated and decentralized privacy-preserving algorithms.