Software Architecture

To determine the goal position for our navigation system, we first gather user input through the multi-channel microphone array and perform speech recognition. Next, each localized sound source is classified as either a direct source or a reflection (reflection detection). Finally, the goal is set only for localized sound sources that are not reflections.
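
This selection logic can be summarized by a minimal sketch. The SoundSource type, its fields, and the confidence-based tie-breaking below are illustrative assumptions, not the actual interface of our localization module:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SoundSource:
    """Hypothetical result of sound-source localization."""
    x: float             # estimated position in the map frame (m)
    y: float
    is_reflection: bool  # set by the reflection-detection step
    confidence: float

def select_goal(sources: List[SoundSource]) -> Optional[SoundSource]:
    """Pick the most confident non-reflection source as the navigation goal."""
    direct = [s for s in sources if not s.is_reflection]
    if not direct:
        return None  # every localized source was a reflection; no goal is set
    return max(direct, key=lambda s: s.confidence)
```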

A Kinect captures a point-cloud model of the 3D environment, while a laser scanner provides a 2D cross-section of it. Data from both devices, combined with the IMU, are used to perform SLAM and to navigate reliably in unknown, unstructured environments.
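
Assuming a ROS middleware (consistent with the costmap and base-controller terminology used later), the three sensor streams could be time-aligned for the SLAM front end roughly as follows. The topic names (/scan, /camera/depth/points, /imu/data) are conventional placeholders, not confirmed by the text:

```python
import rospy
import message_filters
from sensor_msgs.msg import LaserScan, PointCloud2, Imu

def sensors_cb(scan, cloud, imu):
    # All three measurements arrive roughly time-aligned here and can be
    # handed to the SLAM front end.
    rospy.loginfo("scan: %d ranges, cloud: %d bytes", len(scan.ranges), len(cloud.data))

rospy.init_node('sensor_fusion_sketch')
scan_sub  = message_filters.Subscriber('/scan', LaserScan)                    # laser scanner
cloud_sub = message_filters.Subscriber('/camera/depth/points', PointCloud2)  # Kinect
imu_sub   = message_filters.Subscriber('/imu/data', Imu)
sync = message_filters.ApproximateTimeSynchronizer(
    [scan_sub, cloud_sub, imu_sub], queue_size=10, slop=0.1)
sync.registerCallback(sensors_cb)
rospy.spin()
```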

While navigating, our module builds 2D and 3D occupancy grids, which are used to plan the global and local trajectories from the starting point to the goal position. Global trajectories are planned on the global costmap, while local trajectories are planned on the local costmap, which determines the velocity commands sent to the base controller. This controller also drives the linear actuator that adjusts the actuator height.
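
Assuming the standard ROS navigation stack (whose global/local costmap split matches the description above), a goal produced by the speech pipeline could be dispatched to the planners as sketched below; the move_base action name, the map frame, and the coordinates are assumptions:

```python
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node('goal_sender_sketch')
client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = 'map'
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 2.0   # placeholder goal from the speech pipeline
goal.target_pose.pose.position.y = 1.0
goal.target_pose.pose.orientation.w = 1.0

client.send_goal(goal)   # the planners consult the costmaps and stream velocity commands
client.wait_for_result()
```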

We use a single-board computer as the brain of the robot, where all the processing takes place. This board directs two sub-boards: the motor controller (an Arduino Mega 2560) and the arm microcontroller unit (a CM9.04 Dynamixel servo controller). The latter sets the positions of the three Dynamixel AX-12A servos that enable the robot manipulator to grasp objects.
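
As a rough illustration of how the single-board computer might command the arm sub-board, the sketch below sends goal positions over a serial link. The port, baud rate, and line-based ASCII protocol are hypothetical, since the text does not specify the inter-board protocol; only the AX-12A's native 0-1023 position range is factual:

```python
import serial

# Hypothetical ASCII protocol: "<servo_id> <goal_position>\n", where the
# goal position uses the AX-12A's native 0-1023 range (~0.29 deg per tick).
arm_port = serial.Serial('/dev/ttyACM0', 57600, timeout=1.0)  # CM9.04 (assumed port)

def set_servo(servo_id, position):
    """Ask the arm MCU to move one AX-12A to an absolute position."""
    position = max(0, min(1023, position))
    arm_port.write(("%d %d\n" % (servo_id, position)).encode('ascii'))

# Example grasp pose for the three joints (IDs and tick values are illustrative).
for sid, pos in [(1, 512), (2, 700), (3, 350)]:
    set_servo(sid, pos)
```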

Once the robot returns to the user, we perform speech synthesis and report the task status through the single-piece USB 2.0 speaker and the LEDs.
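
A minimal sketch of this reporting step, assuming a generic text-to-speech engine such as espeak is installed on the single-board computer (the text does not name the synthesis engine, and LED control is left as a platform-specific stub):

```python
import subprocess

def report_status(success):
    """Speak the task outcome through the USB speaker and signal the LEDs."""
    message = "Task completed successfully." if success else "Task failed."
    # espeak renders text to the default audio device (the USB speaker here);
    # any TTS engine available on the single-board computer would do.
    subprocess.run(['espeak', message], check=False)
    # LED control is platform-specific; a GPIO or serial write would go here.

report_status(True)
```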