We expect our system to correctly handle a given mission file and execute the specified commands. We want the system to turn wheel valves and shuttlecock valves and flip breaker switches with high accuracy. Our primary performance measure is the fraction of attempts in which a given action is executed correctly at a given station. At the breaker station, we will record how often the switch positions are recognized correctly (a performance measure for our CV subsystem) and how often the switch is manipulated accurately into a given position.
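As a concrete illustration of this bookkeeping, the sketch below tallies per-station recognition and manipulation rates; the trial log, station names, and field names are hypothetical placeholders, not recorded data.

```python
# Minimal sketch of our accuracy bookkeeping (hypothetical data and names).
# Each trial records the station and whether the recognition and
# manipulation steps each succeeded.
from collections import defaultdict

trials = [  # hypothetical sample log
    {"station": "breaker", "recognized": True, "manipulated": True},
    {"station": "breaker", "recognized": True, "manipulated": False},
    {"station": "wheel_valve", "recognized": True, "manipulated": True},
]

counts = defaultdict(lambda: {"n": 0, "recognized": 0, "manipulated": 0})
for t in trials:
    c = counts[t["station"]]
    c["n"] += 1
    c["recognized"] += t["recognized"]
    c["manipulated"] += t["manipulated"]

for station, c in counts.items():
    print(f"{station}: recognition {c['recognized'] / c['n']:.0%}, "
          f"manipulation {c['manipulated'] / c['n']:.0%}")
```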
We will judge the performance of individual subsystems based on their accuracy and reliability.
Given a mission file at the initial position, our ShipBot parses the file and maintains a mission scheduler on the high-performance computer. Using line-distance data from the TOF sensors, the computer localizes the ShipBot on the testbed and plans paths toward the scheduled stations; the base motors are then commanded to move in the desired direction. Aided by RGBD images from the stereo camera, the computer processes the image stream to recognize, via computer vision, whether the ShipBot is in front of the desired device bench. If so, the device type and position are recognized through the camera, and an arm trajectory is planned on the computer. The micro-controller then drives the arm joints and end effector to perform the mission. An extra camera on the end effector feeds the updated target position back to the system, creating a feedback loop that improves mission performance.
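The first step of this flow, turning the mission file into a schedule, might look like the sketch below; the line-oriented `<station> <device> <target>` format, the file name, and the helper name are assumptions for illustration, not the official mission-file grammar.

```python
# Minimal sketch of mission-file parsing and scheduling (the file format
# shown here is an assumption, not the official mission-file grammar).
from collections import deque

def parse_mission_file(path):
    """Parse lines of the form '<station> <device> <target>' into missions."""
    missions = deque()
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blanks and comments
            station, device, target = line.split()
            missions.append(
                {"station": station, "device": device, "target": target})
    return missions

# The scheduler is a FIFO queue of missions for the high-performance
# computer to hand to the navigation and manipulation subsystems in order.
schedule = parse_mission_file("mission.txt")
```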
Once the current mission is complete, control returns to the mission controller, which performs the next mission provided by the mission scheduler.
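One minimal way to structure this hand-off is a small per-mission state machine, sketched below; the state names and the `navigate`, `recognize`, and `manipulate` callbacks are hypothetical placeholders for the subsystems described above.

```python
# Minimal sketch of the per-mission control flow as a state machine
# (state names and callback functions are hypothetical placeholders).
NAVIGATE, RECOGNIZE, MANIPULATE, DONE = range(4)

def run_mission(mission, navigate, recognize, manipulate):
    state = NAVIGATE
    device_pose = None
    while state != DONE:
        if state == NAVIGATE:
            navigate(mission["station"])    # base motors move to the station
            state = RECOGNIZE
        elif state == RECOGNIZE:
            device_pose = recognize(mission["device"])  # stereo-camera CV
            state = MANIPULATE
        elif state == MANIPULATE:
            manipulate(device_pose, mission["target"])  # arm + end effector
            state = DONE
    # On DONE, control returns to the mission controller, which pops the
    # next mission from the scheduler.
```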
Our system uses two onboard computers to interface with the actuators and sensors. The high-performance computer is a NanoPi M4, a single-board Linux computer capable of performing heavy computer-vision processing tasks. The other computer is an STM32F4-family board responsible for controlling low-level components such as the motor controller, TOF sensors, and end effector. Figure 2 shows the detailed interfacing and power distribution of our electrical system.
To reduce development time and effort, we decided to make our codebase ROS compliant. This gives us good software compatibility and ease of integration. The micro-controller software stack provides a clean abstraction layer for high-level algorithms to work with. For example, raw time-of-flight samples are triangulated on the STM32 board to provide the global position of the robot with respect to the guard-rail frame. This offloads computation from the onboard computer and also provides an easy interface for obtaining the robot's location.
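A sketch of the triangulation math follows. The actual firmware runs in C on the STM32; the Python version below only illustrates the geometry, and the sensor layout (two TOF sensors facing one guard rail, separated by a known baseline, plus one sensor facing the perpendicular rail) and frame conventions are assumptions.

```python
# Minimal sketch of the TOF triangulation performed on the STM32 (sensor
# layout and frame conventions here are assumptions; the firmware is C).
import math

def localize(d_left, d_right, d_side, baseline):
    """Estimate (x, y, heading) against two perpendicular guard rails.

    d_left, d_right: readings from two TOF sensors facing the same rail,
                     mounted `baseline` metres apart.
    d_side:          reading from a sensor facing the perpendicular rail.
    """
    # Two readings against one rail give the robot's heading:
    # the along-ray distances to a flat rail differ by baseline * tan(theta).
    heading = math.atan2(d_right - d_left, baseline)
    # Perpendicular distances to each rail, corrected for the tilt.
    y = 0.5 * (d_left + d_right) * math.cos(heading)
    x = d_side * math.cos(heading)
    return x, y, heading

print(localize(0.52, 0.48, 1.10, 0.30))
```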
Figure 3 is a detailed interaction map between the modules. Note that we also highlight the amount of work required to get each module up and running.
Refer to the Concept Design and the Final Report.