
Computer Vision for the Visually Impaired


We propose to develop a wearable, electronic image acquisition and processing system (or visual enhancement system) to guide visually impaired individuals through their environment, providing information to the user about nearby objects of interest, potentially dangerous obstacles, their location, and potential paths to their destination. 

This project aims to develop a computer vision-based (stereo vision or RGB-D camera) navigation system for the visually impaired. The system is expected to extend the range of the user's activities beyond what conventional aids, such as the white cane, provide. To estimate the user's orientation, we incorporate visual odometry and feature-based 3D SLAM into our system. We build a vicinity map from the dense 3D data produced by the vision system and perform path planning to provide the visually impaired with 3D traversability on the map. The 3D traversability analysis helps subjects steer away from obstacles in their path. A vest-type interface consisting of four micro-vibration motors delivers cues for real-time navigation with obstacle avoidance. Our system operates at 15 Hz and improves the mobility performance of the visually impaired in cluttered environments.
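The last stage of the pipeline above, delivering steering cues through four vest-mounted vibration motors, can be sketched as a simple mapping from the planner's heading error to a motor. This is an illustrative sketch, not the system's actual implementation; the motor names, angular sectors, and sign convention are assumptions.

```python
# Hypothetical sketch: map a signed heading error (planned heading minus
# current heading, in degrees; positive = planned path is to the left)
# to one of four vest-mounted vibration motors. Sector boundaries are
# illustrative, not taken from the actual system.

def select_motor(heading_error_deg):
    """Pick the motor to pulse for a given signed heading error."""
    # Wrap the error into [-180, 180) so any input angle is handled.
    e = (heading_error_deg + 180.0) % 360.0 - 180.0
    if -45.0 <= e < 45.0:
        return "front"   # roughly on course: keep walking straight
    if 45.0 <= e < 135.0:
        return "left"    # planned path is to the left: turn left
    if -135.0 <= e < -45.0:
        return "right"   # planned path is to the right: turn right
    return "back"        # path is behind the user: turn around

# Example: the planner asks the user to veer right.
print(select_motor(-60.0))  # prints "right"
```

At 15 Hz, a loop like this would re-evaluate the heading error each frame and pulse the selected motor, so cues track the replanned path as the user moves.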

[1] Young Hoon Lee, Gérard Medioni, RGB-D camera based wearable navigation system for the blind
     Computer Vision and Image Understanding, 2016 (Accepted)
[2] Young Hoon Lee, Gérard Medioni, Wearable RGBD indoor navigation system for the blind (Oral paper)
     ECCV 2014 Assistive Computer Vision and Robotics Workshop, Zurich, Switzerland, Sep 2014.
[3] Young Hoon Lee, Gérard Medioni, A RGB-D camera based navigation for the visually impaired
     RSS 2011 RGB-D: Advanced Reasoning with Depth Camera Workshop, Los Angeles, Jun 2011.

[1] Best Research Demo Award, 2nd Annual Ming Hsieh Department of Electrical Engineering Research Festival, USC.
[2] Finalist, 1st Cornell Cup USA, presented by Intel.

We address the problem of staircase detection in the context of a navigation aid for the visually impaired. The requirements for such a system are robustness to viewpoint, distance, and scale, real-time operation, a high detection rate, and a low false alarm rate. Our approach uses classifiers trained with Haar features and AdaBoost learning. This first stage does detect staircases, but produces many false alarms. The false alarm rate is drastically reduced by using spatial context in the form of the estimated ground plane, and by enforcing temporal consistency. We have validated our approach on many real sequences under various weather conditions, and present quantitative results.
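The temporal-consistency stage described above can be sketched as a filter that accepts a candidate detection only if overlapping detections recur across recent frames. This is an illustrative sketch, not the paper's implementation: the per-frame bounding boxes are assumed to come from the Haar/AdaBoost detector (e.g. an OpenCV cascade), and the window size, hit count, and IoU threshold are made-up parameters.

```python
from collections import deque

def iou(a, b):
    """Intersection-over-union of two boxes given as (x, y, w, h)."""
    ix = max(0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

class TemporalFilter:
    """Confirm a detection only if overlapping detections appear in
    at least `min_hits` of the last `window` frames."""

    def __init__(self, window=5, min_hits=3, iou_thresh=0.5):
        self.history = deque(maxlen=window)  # raw detections per frame
        self.min_hits = min_hits
        self.iou_thresh = iou_thresh

    def update(self, detections):
        """Feed one frame's raw detections; return the confirmed ones."""
        self.history.append(detections)
        confirmed = []
        for box in detections:
            hits = sum(
                any(iou(box, past) >= self.iou_thresh for past in frame)
                for frame in self.history
            )
            if hits >= self.min_hits:
                confirmed.append(box)
        return confirmed
```

A spurious single-frame response from the first stage never accumulates enough hits to be confirmed, while a real staircase, which stays roughly in place across consecutive frames, passes the filter after a few frames.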

[1] Young Hoon Lee, Tung-Sing Leung, Gérard Medioni, Real-time staircase detection from a wearable stereo system (Oral paper)
     ICPR 2012, To appear, Tsukuba, Japan, Nov 11-15, 2012.