2010/2011 Projects

2010

Download the class final report, and check the blog post

Navigate around obstacles to reach an identified speaker
Using speech recognition, the robot will identify and localize a specific speaker and move toward that target, avoiding obstacles identified from the visual scene. The robot will stop moving once it has reached the target, represented by a monochrome sphere. This project consists of three primary modules (sound, vision, and motor), a signal integration system, and a reward system.
  • Sound
The auditory module focuses on two tasks: speaker identification and sound localization. In the first task, the robot needs to identify a target speaker among multiple, concurrent speakers. In the second task, the robot needs to localize the source of the sound relative to itself in space. Combined, these features will allow the robot to issue motor commands about where to go to reach the target speaker; a minimal localization sketch appears after this list.
Owners: Byron, Sohrob, Andrew
  • Vision
The visual module focuses on obstacle detection and avoidance. Using optic flow methods to identify potential obstacles will allow the robot to issue motor commands about how to circumnavigate immediate blocks; a minimal flow-based avoidance sketch appears after this list. The robot will also handle basic target identification through the use of a monochrome glowing sphere.
Owners: Ben, Sean P
  • Motor
The motor module focuses on moving the robot around space given commands from the auditory and visual systems. Part of this task will be using motor babbling to adapt and learn how to interpret inputs from the other systems into intended, correct movements; a minimal babbling sketch appears after this list.
Owners: Jeremy, Sean L, Sam
  • PPC
The posterior parietal cortex (PPC) system represents the integration of the sensorimotor signals.
  • Reward
A reward system will be implemented to enable reinforcement learning.
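
As a concrete illustration of the sound-localization task, the sketch below estimates the bearing of a source from a two-microphone recording using the interaural time difference recovered by cross-correlation. The microphone spacing, sample rate, and function names are assumptions made for illustration, not details of the class robot.

# Minimal sketch of sound localization for a two-microphone array: estimate
# the interaural time difference (ITD) by cross-correlation and convert it to
# a bearing. Microphone spacing and sample rate are assumed values, not those
# of the class robot.
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s
MIC_DISTANCE = 0.15      # m (assumed spacing between the two microphones)
SAMPLE_RATE = 16000      # Hz (assumed audio sample rate)

def estimate_azimuth(left, right):
    """Return the estimated bearing in radians (positive = toward the right mic)."""
    # Lag of the cross-correlation peak gives the delay of 'left' relative to 'right'.
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)

    # Convert the lag to seconds, then to an angle with the far-field approximation.
    itd = lag / SAMPLE_RATE
    sin_theta = np.clip(itd * SPEED_OF_SOUND / MIC_DISTANCE, -1.0, 1.0)
    return np.arcsin(sin_theta)

# Toy usage: a noise burst reaching the left microphone 5 samples earlier,
# so the estimated bearing should come out negative (source on the left).
rng = np.random.default_rng(0)
burst = rng.standard_normal(1024)
left = np.concatenate([burst, np.zeros(5)])
right = np.concatenate([np.zeros(5), burst])
print(f"estimated bearing: {np.degrees(estimate_azimuth(left, right)):.1f} degrees")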
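
For the vision module, one simple way the optic-flow idea could be realized is the balance strategy sketched below: compute dense flow between consecutive frames and steer away from the image half with more flow, since nearby obstacles generate larger flow. OpenCV's Farneback algorithm and the frame sizes here are stand-ins; the project does not specify a particular flow method.

# Minimal sketch of flow-based avoidance (a "balance strategy"): compare dense
# optic-flow magnitude in the left and right image halves and turn away from
# the side with more flow, since closer objects generate larger flow.
import cv2
import numpy as np

def steering_command(prev_gray, curr_gray):
    """Return a turn command in [-1, 1]; negative steers left, positive steers right."""
    # Arguments: prev, next, flow, pyr_scale, levels, winsize, iterations,
    # poly_n, poly_sigma, flags.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)

    half = magnitude.shape[1] // 2
    left_flow = magnitude[:, :half].mean()
    right_flow = magnitude[:, half:].mean()

    # Turn away from the side with larger flow (the closer obstacle).
    return (left_flow - right_flow) / (left_flow + right_flow + 1e-6)

# Toy usage: a bright square on the left side of the frame shifting rightward,
# which should produce a positive (turn-right) command.
prev_frame = np.zeros((120, 160), dtype=np.uint8)
curr_frame = np.zeros((120, 160), dtype=np.uint8)
prev_frame[40:80, 20:60] = 255
curr_frame[40:80, 26:66] = 255
print(f"turn command: {steering_command(prev_frame, curr_frame):+.2f}")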
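
The motor-babbling idea can be sketched as follows: the robot issues random wheel commands, records the motion each one produces, fits a forward model, and inverts it to pick commands for a desired motion. The differential-drive plant and the linear model below are assumptions made only for this sketch.

# Minimal sketch of motor babbling: issue random wheel commands, record the
# body motion each one produces, fit a linear forward model, and invert it to
# choose commands for a desired motion. The differential-drive "plant" below
# stands in for the real robot.
import numpy as np

rng = np.random.default_rng(1)
WHEEL_BASE = 0.2  # m (assumed distance between the wheels)

def true_motion(left_speed, right_speed):
    """Unknown plant: forward speed and turn rate of a differential drive, plus noise."""
    forward = (left_speed + right_speed) / 2.0
    turn = (right_speed - left_speed) / WHEEL_BASE
    return np.array([forward, turn]) + rng.normal(0.0, 0.01, 2)

# Babbling phase: random commands and the motions they produce.
commands = rng.uniform(-1.0, 1.0, size=(200, 2))
motions = np.array([true_motion(l, r) for l, r in commands])

# Fit a linear forward model  motion ~ commands @ W  by least squares.
W, *_ = np.linalg.lstsq(commands, motions, rcond=None)

# Inverse model: solve W.T @ command = desired_motion for the wheel commands.
desired_motion = np.array([0.5, 0.0])      # drive straight at 0.5 m/s
command = np.linalg.solve(W.T, desired_motion)
print("wheel commands for driving straight:", np.round(command, 2))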

2011
The goal of the project was to enable a robotic platform to learn to avoid aversive stimuli and approach appetitive stimuli. A reinforcement learning procedure using trace conditioning was implemented toward this end. A visual tracking system was developed to track red and blue objects, enabling hard-wired behaviors to be triggered based on the reinforcement learning. Since training the real robot was estimated to take tens of thousands of real-time hours, a virtual environment was created so that the robotic platform could run, and learn, in simulation. The testing environment included the autonomous robot plus a remote-controlled robot fitted with either a red ball (aversive stimulus) or a blue ball (appetitive stimulus). Results are shown qualitatively as learned approaching/avoiding behavior and quantitatively as the time between contacts under the two different behaviors.
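
A minimal sketch of the trace-conditioning update follows, written here as tabular TD(lambda) with an eligibility trace that bridges the delay between seeing the ball and receiving the reinforcement; the step counts, learning rate, and discount are illustrative, not those used in the report.

# Minimal sketch of trace conditioning written as tabular TD(lambda): the ball
# is seen (conditioned stimulus) several steps before the reinforcement
# arrives, and an eligibility trace bridges that gap so the value of the
# ball-sighting state comes to predict the delayed outcome. All parameters
# are illustrative, not those used in the report.
import numpy as np

N_STEPS = 20          # time steps per trial
CS_TIME = 5           # step at which the ball is seen
US_TIME = 12          # step at which the reinforcement arrives
ALPHA, GAMMA, LAMBDA = 0.1, 0.95, 0.9

values = np.zeros(N_STEPS)            # one value estimate per time step (state)

for trial in range(300):
    trace = np.zeros(N_STEPS)         # eligibility trace over states
    for t in range(N_STEPS - 1):
        # +1 for the appetitive (blue) case; use -1 for the aversive (red) case.
        reward = 1.0 if t == US_TIME else 0.0

        # TD error between successive predictions, then trace-weighted update.
        delta = reward + GAMMA * values[t + 1] - values[t]
        trace *= GAMMA * LAMBDA
        trace[t] += 1.0
        values += ALPHA * delta * trace

print("predicted value when the ball is seen:", round(values[CS_TIME], 3))
print("predicted value just before reinforcement:", round(values[US_TIME - 1], 3))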

Read blog post 1, blog post 2, and the report
Attachments:
Movie12.mov (10159k), uploaded by Samantha Michalka, Dec 2, 2010
Movie13.mov (7765k), uploaded by Samantha Michalka, Dec 2, 2010