Workshop on Fairness, Accountability, Transparency, and Ethics in Computer Vision at CVPR 2019

Important Dates

  • Paper submission deadline: April 19, 2019
  • Notification to authors: May 6, 2019
  • "Camera ready" deadline (uploading PDFs on our website, non-archival): May 27, 2019
  • Workshop Date: June 17, 2019
  • Workshop Time: 8:30am-12:30pm
  • Workshop Venue: Hyatt Regency Hotel, 200 S Pine Ave, Long Beach
  • Workshop Room: Seaview A

Overview

Computer vision has ceased to be a purely academic endeavor. From law enforcement [1], to border control [2], to employment [3], healthcare diagnostics [4], and the assignment of trust scores [5], computer vision systems have started to be used in many aspects of society. The past year has also seen a rise in public discourse regarding the use of computer vision-based technology by companies such as Google, Microsoft, Amazon, and IBM. In research, works such as [6] purport to determine a person's sexuality from their social network profile images, and [7] claims to classify "violent individuals" from drone footage. These works were published in high-impact journals, and some were presented at workshops at top-tier computer vision conferences such as CVPR [8].

On the other hand, seminal works published last year have begun to address these issues: [9] showed that commercial gender classification systems exhibit large disparities in error rates by skin type and gender, [10] exposed the gender bias contained in current image captioning systems, and [11] both exposed biases in the widely used CelebA dataset and proposed adversarial learning-based methods to mitigate their effects. Policy makers and other legislators have cited some of these seminal works in their calls to investigate the unregulated use of computer vision systems [12].

We believe the vision community is well positioned to foster serious conversations about the ethical implications of current uses of computer vision technology. We are therefore holding a workshop on the Fairness, Accountability, Transparency, and Ethics (FATE) of modern computer vision to provide a space for analyzing controversial research papers that have garnered significant attention. Our workshop also seeks to highlight research on uncovering and mitigating the unfair biases and historical discrimination that trained machine learning models learn to mimic and propagate. We welcome submissions from broadly defined areas and will host speakers discussing the ethical considerations underlying some of the most contentious recent research papers.

References

  1. Clare Garvie, Alvaro Bedoya, and Jonathan Frankle. "The Perpetual Line-Up: Unregulated Police Face Recognition in America." Georgetown Law, Center on Privacy & Technology, 2016.
  2. Steven Levy. "Inside Palmer Luckey's Bid to Build a Border Wall." Wired, June 6, 2018. https://www.wired.com/story/palmer-luckey-anduril-border-wall/
  3. HireVue. https://www.hirevue.com/
  4. Varun Gulshan, et al. "Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs." JAMA 316.22 (2016): 2402-2410.
  5. Paul Mozur. "Inside China's Dystopian Dreams: A.I., Shame and Lots of Cameras." The New York Times, July 8, 2018. https://www.nytimes.com/2018/07/08/business/china-surveillance-technology.html
  6. Yilun Wang and Michal Kosinski. "Deep Neural Networks Are More Accurate than Humans at Detecting Sexual Orientation from Facial Images." 2017.
  7. Amarjot Singh, Devendra Patil, and S. N. Omkar. "Eye in the Sky: Real-time Drone Surveillance System (DSS) for Violent Individuals Identification using ScatterNet Hybrid Deep Learning Network." arXiv preprint arXiv:1806.00746, 2018.
  8. CVPR Workshop on Efficient Deep Learning for Computer Vision, CVPR 2018. http://openaccess.thecvf.com/CVPR2018_workshops/CVPR2018_W33.py
  9. Joy Buolamwini and Timnit Gebru. "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification." Conference on Fairness, Accountability and Transparency, 2018.
  10. Lisa Anne Hendricks, et al. "Women also Snowboard: Overcoming Bias in Captioning Models." Proceedings of the European Conference on Computer Vision (ECCV), 2018.
  11. Brian Hu Zhang, Blake Lemoine, and Margaret Mitchell. "Mitigating Unwanted Biases with Adversarial Learning." Proceedings of the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AIES), 2018.
  12. Kamala Harris, Cory Booker, and Cedric Richmond. Letter to the Federal Bureau of Investigation, September 2018. https://www.scribd.com/embeds/388920671/content#from_embed