Overview & Current Syllabus (Fall 2017)

Trevor Darrell, Alyosha Efros; {trevor,efros}@eecs.berkeley.edu

David Fouhey; {dfouhey}@eecs.berkeley.edu

Prerequisite for the course: CS280 (graduate computer vision), or an active research effort on a related topic with permission of the instructor.

Weekly Schedule

Date          Theme                                          Organizer
August 29     Deciding the Themes                            David
September 5   *Supervised, Multimodal                        Kate
September 12  Video, Humans, & Actions                       Andreea
September 19  Adversarial Examples, Interpretation           Karima
September 26  Language & Vision                              Ashish, Jared
October 3     Domain Adaptation, Few-Shot Learning           Coline
October 10    Architectures, Reasoning, Attention            Jasmine
October 17    GANs + Pictures                                Alan
October 24    Applications (Medical etc.)                    Michael
October 31    3D Vision                                      Zee, Peter
November 7    In-the-Wild, Open World, Continual Learning    Jeff
November 14   CVPR Day                                       Nobody! Write your papers
November 21   RL                                             Samvit, JD
November 28   Vision & Robotics, Simulators, Physics         Zhe
December 5    Network Compression, Compute                   ZH
December 12   Arxiv Grab Bag                                 Everybody

Location and time

August 28 (first meeting only): Newton Room, Room 730, Sutardja Dai Hall; September 5 onwards: Cory 337B

Tuesdays, 10:30 am sharp (not Berkeley time) to 12 pm


This course covers computer vision and machine learning techniques for object and activity recognition, as well as newly emerging directions and learning techniques. Emphasis will be placed on recent methods based on layered perceptual representation learning, a.k.a. "deep" learning. Recognition of individual objects or activities (the coffee cup on your desk, a particular chair in your office, a video of you riding your bike) or of generic categories (any cup, chair, or cycling event) is an essential capability for a variety of robotics and multimedia applications. This course reviews methods from the recent literature (the past 6-9 months) that have achieved success on such challenge problems, and may also consider the techniques needed for real-time interactive applications on robots or mobile devices, e.g., domestic service robots or mobile phones that retrieve information about objects in the environment based on visual observation. The class is based exclusively on readings from the recent literature, including papers appearing at the CVPR, ICCV, ECCV, ICML, NIPS, and ICLR conferences.