Overview & Current Syllabus (Spring 2017)

Prof. Trevor Darrell, Prof. Alyosha Efros; {trevor,efros}@eecs.berkeley.edu

Dr. David Fouhey; {dfouhey}@eecs.berkeley.edu

Prerequisite: CS280 (graduate computer vision), or an active research effort on a related topic with permission of the instructor.

Weekly Schedule

Date      Theme                                   Organizer
Jan 23    Planning                                --
Jan 30    Vision & Language, Sequence Modeling    Lisa Anne
Feb 6     3D                                      Shubham
Feb 13    Feature Learning                        Richard
Feb 20    Holiday (No Class)                      --
Feb 27    Deep RL                                 Coline
Mar 6     Other modalities                        Andrew
Mar 13    ICCV and Visit day                      --
Mar 20    Robotics                                Abhishek
Mar 27    Spring Break (No Class)                 --
Apr 3     Learning to Learn                       Ke
Apr 10    Object Detection                        Ronghang
Apr 17    Curiosity                               Pulkit
Apr 24    Arxiv grab bag                          --
May 1     Arxiv grab bag                          --
May 8     Vision & Graphics                       Jun-Yan, Tinghui
May 15    Is it all just memorization?            Evan
May 22    Loss functions                          Phil


Location and time

Newton Room, Room 730 Sutardja Dai Hall; Mondays, 10 am-12 pm

Resources

We will use Piazza. Please sign up for CS294-43 at http://piazza.com 

Finalized reviews will be published on theberkeleyview, a public blog.

Please read our expectations for participation in this course: https://sites.google.com/site/ucbcs29443/review-pipeline

Agenda

This course covers computer vision and machine learning techniques for object and activity recognition, as well as emerging directions and learning techniques. Emphasis will be placed on recent methods based on layered perceptual representation learning, a.k.a. "deep" learning. Recognition of individual objects or activities (the coffee cup on your desk, a particular chair in your office, a video of you riding your bike) or of generic categories (any cup, chair, or cycling event) is an essential capability for a variety of robotics and multimedia applications. This course reviews methods from the recent literature (the past 6-9 months) that have achieved success on such challenge problems, and may also consider the techniques needed for real-time interactive applications on robots or mobile devices, e.g., domestic service robots or mobile phones that can retrieve information about objects in the environment from visual observation. The class is based exclusively on readings from the recent literature, including papers appearing at the CVPR, ICCV, ECCV, ICML, NIPS, and ICLR conferences.