Vision in Practice on Autonomous Robots
Call for Contributions
Mobile robots have matured tremendously in recent years, resulting in platforms that are highly capable in a range of situations. Examples include industrial robots such as Baxter by Rethink Robotics, general-purpose platforms such as Atlas by Boston Dynamics, and the recent DARPA Robotics Challenge winner, DRC-HUBO by Rainbow Robotics. To become integrated into our workplaces and society in general, robots need better perceptual capabilities than those currently deployed. Recent developments in computer vision are promising: data-driven solutions have demonstrated significant advances on a wide variety of problems, providing robustness to changing lighting, shifting perspective, and camera motion. Unfortunately, deploying these solutions on robots brings its own set of challenges. Onboard hardware is often limited, public data sets are not representative of robot vision problems, and the camera view from the robot often lacks adequate features for classification. This workshop aims to bring together computer vision and robotics researchers to discuss issues related to deploying computer vision on autonomous robots.
We particularly encourage submissions describing methods that have been successfully demonstrated on autonomous robots. In addition to papers describing completed work, we also encourage papers describing work in progress.
Possible topics include, but are not limited to:
- Unsupervised, semi-supervised, or reinforcement learning on robotics platforms
- Object detection and recognition
- Active perception
- Real-time perception
- Navigation and SLAM
- Temporal and/or contextual reasoning
- Scene understanding from robotics platforms
- Autonomous robotic surveillance and tracking
- Biometrics on robotics platforms
- Understanding hand gestures from autonomous robots
- Computer vision with the limited resources typical of autonomous robots
We are soliciting full papers (maximum 8 pages, excluding references) following the ICCV format (see http://iccv2017.thecvf.com/submission/main_conference/author_guidelines for Word and LaTeX templates). The review process will be double blind, with at least two reviewers per paper. Papers that are longer than 8 pages, not in English, not anonymized, or not using the ICCV submission template will be rejected without review.
All submissions will be handled electronically via the conference’s CMT Website:
Submission -- July 21 (hard deadline; no extensions will be given)
Review Results -- August 11
Camera Ready Copy -- August 25
Workshop Date -- October 23 (afternoon)
Speaker 1: Yezhou Yang, Arizona State University
Title: Active Perception Beyond Appearance, and its Robotic Applications
Abstract: The goal of computer vision, as framed by Marr, is to develop algorithms that answer what is where, and when, from visual appearance. The speaker, among others, recognizes the importance of studying underlying entities and relations beyond visual appearance, following an active perception paradigm. The talk will present the speaker's efforts over the last several years, ranging from 1) recognition of hidden entities (such as action fluents, human intention, and force prediction from visual input), through 2) reasoning beyond appearance for solving image riddles and visual question answering, to 3) their applications in a robotic visual learning framework as well as in human-robot collaboration. The talk will also feature several ongoing projects and future directions within the Active Perception Group (APG) led by the speaker at ASU CIDSE.
Biography: Yezhou Yang is an Assistant Professor in the School of Computing, Informatics, and Decision Systems Engineering at Arizona State University. Before joining ASU, Dr. Yang was a Postdoctoral Research Associate at the Computer Vision Lab and the Automation, Robotics and Cognition (ARC) Lab at the University of Maryland Institute for Advanced Computer Studies. His primary interests lie in cognitive robotics, computer vision, and robot vision, especially exploring visual primitives in human action understanding from visual input, grounding them in natural language, and high-level reasoning over these primitives for intelligent robots. His research focuses on solutions to visual learning that significantly reduce the time required to program intelligent agents. These solutions combine computer vision, deep learning, and AI algorithms to interpret people's actions and the scene's geometry.
Speaker 2: Zoran Duric, George Mason University
Title: Study and Simulation of Human Functional Movements
Abstract: In this talk I will describe our efforts to understand human functional movements. In particular, I will talk about gait analysis and upper-extremity functional movement. Gait is strongly constrained by anatomy and physiology, and gait patterns are strong correlates of physical health and overall well-being. We have been designing computer vision methods for obtaining reliable segmental motion data that can distinguish one individual from another and identify abnormal motion patterns. The benefit of our approach is that it can be used to observe people in their natural environments. Our goal is to extend this work so that the relevant forces can be calculated.
Upper-extremity functional movements are mostly constrained by purpose and function. In our work, we have been analyzing prehensile grips and movements used in activities of daily living, using surface electromyography and accelerometry. We have found that electromyography data alone are insufficient to model muscle activity. I will discuss the importance of information from several sensory modalities for this work and present the results we have obtained.
Biography: Zoran Duric is an Associate Professor in the Department of Computer Science at George Mason University (GMU) in Fairfax, Virginia. He received his M.S. in Electrical Engineering from the University of Sarajevo, Bosnia and Herzegovina, in 1986, and his Ph.D. in Computer Science from the University of Maryland (UMD) at College Park in 1995. From 1982 to 1989 he was a Member of the Research Staff in the Division for Vision and Robotics, Energoinvest Institute for Control and Computer Science, Sarajevo. From 1995 to 1997 he was an Assistant Research Scientist at the Machine Learning and Inference Laboratory at GMU and at the Center for Automation Research at UMD. From 1996 to 1997 he was also a Visiting Assistant Professor in the Computer Science Department of GMU. He joined the faculty of GMU in the fall of 1997 as an Assistant Professor of Computer Science. He has published over 80 technical papers on various topics, including computer vision, information hiding, and video processing. Most recently, the focus of his research has been the study and simulation of human movement. He is a Deputy Editor of the Pattern Recognition Journal and a member of the Editorial Board of the IEEE Transactions on Intelligent Transportation Systems.
1400 - 1415 -- Introduction
Authors: Somdyuti Paul and Lovekesh Vig (TCS Research)
Authors: Andrea Manno-Kovacs and Levente Kovacs (Hungarian Academy of Sciences)
Authors: Jakob Suchan (University of Bremen) and Mehul Bhatt (University of Bremen, and Örebro University)
1500 - 1600 -- Invited Talk: Yezhou Yang
1600 - 1615 -- Break
Authors: Marco Melis (University of Cagliari), Ambra Demontis (University of Cagliari), Battista Biggio (Pluribus One), Gavin Brown (University of Manchester), Giorgio Fumera (University of Cagliari), and Fabio Roli (University of Cagliari)
Authors: Huajun Zhou, Zechao Li, Chengcheng Ning, and Jinhui Tang (Nanjing University of Science and Technology)
1645 - 1700 -- Learning to Segment Affordances
Authors: Timo Lüddecke and Florentin Wörgötter (University of Goettingen)
1700 - 1800 -- Invited Talk: Zoran Duric
1800 -- Adjourn
Dr. Keith Sullivan (keith.sullivan at nrl.navy.mil), Dr. Ed Lawson
Naval Center for Applied Research in Artificial Intelligence, Naval Research Laboratory, USA
Dr. Eric Martinson
Toyota InfoTechnology Center, USA
Zoran Duric, George Mason University
Brian Hrolenok, Georgia Institute of Technology
Esube Bekele, Naval Research Laboratory
Jyh-Ming Lien, George Mason University
Josh Harguess, Space and Naval Warfare Systems Command
Yuke Zhu, Stanford University