Overview & Current Syllabus (Spring 2018)

Trevor Darrell, Alyosha Efros; {trevor,efros}@eecs.berkeley.edu

David Fouhey; {dfouhey}@eecs.berkeley.edu

Prerequisite for the course: CS280 (graduate computer vision), or an active research effort on a related topic with permission of the instructor.

Weekly Schedule

Date    | Theme                     | Organizer
Jan 22  | 3D                        | David & co.
Jan 29  | Attention                 | Aravind
Feb 5   | Domain Adaptation         | Vitchyr
Feb 12  | Driving                   | Sayna
Feb 19  | No class -- Holiday       | --
Feb 26  | Graphics                  | Kate
Mar 5   | Meta-learning             | Vitchyr
Mar 12  | Robotics and/or RL        | Justin
Mar 19  | Interpretability          | Ashish
Mar 26  | No class -- BAIR retreat  | --
Apr 2   | Low-shot Learning         | Sayna
Apr 9   | Naming Things             | Justin
Apr 16  | Self/unsupervised         | Aravind
Apr 23  | Arxiv Grab Bag            | Everybody
Apr 30  | Simulators                | Eric
May 7   | Videos & Actions          | Allan
May 14  | Egocentric Vision         | Mike


Location and time

Cory 337B 

Mondays, 10:30 am - 12:00 pm

Agenda

This course covers computer vision and machine learning techniques for object and activity recognition, as well as emerging directions and learning techniques. Emphasis is placed on recent methods based on layered perceptual representation learning, a.k.a. "deep" learning. Recognition of individual objects or activities (the coffee cup on your desk, a particular chair in your office, a video of you riding your bike) or of generic categories (any cup, chair, or cycling event) is an essential capability for a variety of robotics and multimedia applications.

The course reviews methods from the recent literature (the past 6-9 months) that have achieved success on such challenge problems, and may also consider the techniques needed for real-time interactive applications on robots or mobile devices, e.g., domestic service robots or mobile phones that retrieve information about objects in the environment from visual observation. The class is based exclusively on readings from the recent literature, including papers appearing at the CVPR, ICCV, ECCV, ICML, NIPS, and ICLR conferences.