Navigate around obstacles to reach an identified speaker
Using speech recognition, the robot will identify and localize a specific speaker and move toward that target, avoiding obstacles identified from the visual scene. The robot will stop moving once it has reached the target, represented by a monochrome sphere. This project consists of three primary modules, a signal integration system, and a reward system.
The auditory module focuses on two tasks: speaker identification and sound localization. In the first task, the robot needs to distinguish between multiple, concurrent speakers. In the second task, the robot needs to localize the source of the sound relative to itself in space. Combined, these features will allow the robot to issue motor commands about where to go to reach the target speaker.
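The localization step can be sketched as a time-difference-of-arrival estimate between two microphones: cross-correlate the channels, read the interaural time difference off the correlation peak, and convert it to a bearing. The microphone spacing, sample rate, and function names below are illustrative assumptions, not part of the project spec.

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s
MIC_SPACING = 0.15       # m; assumed distance between the two microphones

def estimate_azimuth(left, right, sample_rate):
    """Estimate source bearing from the interaural time difference (ITD).

    Positive angles mean the source is toward the right microphone.
    """
    # Cross-correlate the channels; the lag of the peak is the ITD in samples.
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)
    itd = lag / sample_rate
    # Clamp so arcsin stays in its domain under noisy estimates.
    ratio = np.clip(itd * SPEED_OF_SOUND / MIC_SPACING, -1.0, 1.0)
    return np.degrees(np.arcsin(ratio))

# Synthetic check: delay the right channel by 10 samples, i.e. the
# sound reaches the left microphone first (source on the left).
rate = 44100
t = np.arange(rate) / rate
sig = np.sin(2 * np.pi * 440 * t)
delayed = np.roll(sig, 10)
angle = estimate_azimuth(sig, delayed, rate)   # negative: source on the left
```

With two microphones the bearing is ambiguous front-to-back; resolving that, and separating concurrent speakers before localizing, is where the speaker-identification task comes in.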
Owners: Byron, Sohrob, Andrew
The visual module focuses on obstacle detection and avoidance. Using optic flow methods to identify potential obstacles will allow the robot to issue motor commands about how to circumnavigate immediate blocks. The robot will also handle basic target identification through the use of a monochrome glowing sphere.
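One common optic-flow avoidance scheme, sketched here under assumed names and array shapes, is the balance strategy: during forward translation, nearer surfaces produce larger image motion, so the robot steers away from whichever half of the image has the larger average flow magnitude.

```python
import numpy as np

def steering_from_flow(flow):
    """Balance-strategy avoidance from a dense flow field of shape (H, W, 2).

    Returns a turn command in [-1, 1]; positive steers right,
    away from stronger (closer) flow on the left.
    """
    mag = np.linalg.norm(flow, axis=-1)           # per-pixel flow speed
    half = mag.shape[1] // 2
    left = mag[:, :half].mean()
    right = mag[:, half:].mean()
    return (left - right) / (left + right + 1e-9)  # epsilon avoids 0/0

# Synthetic field: strong flow on the left half, weak on the right,
# as if an obstacle were close on the left.
flow = np.zeros((4, 8, 2))
flow[:, :4, 0] = 2.0
flow[:, 4:, 0] = 0.5
turn = steering_from_flow(flow)   # positive: steer right, away from the obstacle
```

The dense flow field itself would come from the camera pipeline (e.g. a block-matching or gradient-based estimator); only the steering rule is shown here.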
Owners: Ben, Sean P
The motor module focuses on moving the robot around space given commands from the auditory and visual systems. Part of this task will be using motor babbling to adapt and learn how to interpret inputs from the other systems into intended, correct movements.
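Motor babbling can be sketched as follows: issue random motor commands, record the resulting motions, and fit an inverse model mapping intended motion back to commands. The linear "plant" and all names below are illustrative assumptions standing in for the real robot kinematics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the unknown robot kinematics: how wheel commands map to
# (forward speed, turn rate). Hidden from the learner; used only to
# simulate the outcome of each babbled command.
TRUE_MIX = np.array([[0.5, 0.5],    # forward speed = mean of the wheels
                     [1.0, -1.0]])  # turn rate = wheel difference

commands = rng.uniform(-1, 1, size=(200, 2))   # random babbled commands
motions = commands @ TRUE_MIX.T                # observed resulting motions

# Fit the inverse model (desired motion -> motor command) by least squares.
inverse, *_ = np.linalg.lstsq(motions, commands, rcond=None)

def command_for(motion):
    """Use the learned inverse model to realize an intended motion."""
    return motion @ inverse

# Pure forward motion should map to equal wheel commands.
cmd = command_for(np.array([1.0, 0.0]))
```

A real plant is nonlinear and noisy, so the linear fit would be replaced by an incremental or neural-network regressor, but the babble-observe-fit loop is the same.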
Owners: Jeremy, Sean L, Sam
The posterior parietal cortex system represents the integration of the sensorimotor signals from the auditory, visual, and motor modules.
A reward system will be implemented to enable reinforcement learning.
The goal of the project was to enable a robotic platform to learn to avoid aversive stimuli and approach appetitive stimuli. A reinforcement learning procedure using trace conditioning was implemented toward this end. A visual tracking system was developed to track red and blue objects, enabling hard-wired behaviors to occur based on the reinforcement learning. Since training the real robot was estimated to take tens of thousands of real-time hours, a virtual environment was created so the robotic platform could run, and learn, in simulation. The testing environment included the autonomous robot plus a remote-controlled robot fitted with either a red or blue ball, denoting it as an aversive or appetitive stimulus, respectively. Results are shown qualitatively as learned approaching/avoiding behavior and quantitatively as time between contacts under the two different behaviors.
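The core of trace conditioning is that the cue (seeing the colored ball) and the outcome (reward or punishment) are separated in time, so a decaying eligibility trace must bridge the gap. A minimal sketch of one such update, with learning rate, decay, and feature encoding chosen purely for illustration:

```python
import numpy as np

def trace_conditioning_update(w, x_history, reward, lr=0.1, decay=0.7):
    """One weight update under a simple trace-conditioning rule.

    x_history: stimulus feature vectors observed since the last outcome,
    oldest first. A decaying eligibility trace credits earlier cues for
    the reward (or punishment) that arrives later.
    """
    trace = np.zeros_like(w)
    for x in x_history:
        trace = decay * trace + x      # older cues fade, recent ones dominate
    return w + lr * reward * trace

# Features: [blue ball visible, red ball visible]. The blue (appetitive)
# cue appears, two empty frames pass, then a +1 reward arrives on contact.
w = np.zeros(2)
history = [np.array([1.0, 0.0]), np.zeros(2), np.zeros(2)]
w = trace_conditioning_update(w, history, reward=1.0)
```

After the update the weight on the blue-ball feature is positive while the red-ball weight is unchanged, which is the direction needed for the approach/avoid behaviors to emerge; punishment (negative reward) on red-ball contact pushes that weight the other way.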