This was accomplished by Kevin, who used the router to connect the Raspberry Pi to the Intel NUC over ROS. The results are shown in the photo below, with the Raspberry Pi camera image on the left and the Kinect image on the right. Samantha helped out by setting up ROS and the raspicam node on the Raspberry Pi, while Kevin set up ROS on the NUC.
Rviz display with the Raspberry Pi Camera image on the left and the Kinect image on the right
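For anyone trying to reproduce this, the setup mostly comes down to pointing both machines at the same ROS master. Below is a minimal sketch (not our exact code) of verifying from the NUC that the Pi's images are arriving; the topic name assumes raspicam_node's default compressed image topic, so adjust it if your launch file remaps it.

```python
#!/usr/bin/env python
# Minimal check that the Pi's camera images are arriving on the NUC.
# Assumes both machines export the same ROS_MASTER_URI (e.g. the NUC's
# address) plus their own ROS_IP, and that raspicam_node publishes on its
# default compressed image topic. Adjust the topic name if yours differs.
import rospy
from sensor_msgs.msg import CompressedImage

def on_image(msg):
    rospy.loginfo("Got frame: %d bytes (%s)", len(msg.data), msg.format)

rospy.init_node("raspicam_check")
rospy.Subscriber("/raspicam_node/image/compressed", CompressedImage, on_image)
rospy.spin()
```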
This was accomplished by Tong. He used the iai_kinect2 ROS package and was able to display the point cloud data in rviz as shown in the photo below. The top left image is from the Raspberry Pi Camera, the bottom left image is the regular RGB image from the Kinect, and the center image is the point cloud generated from the Kinect data.
Rviz display with the Raspberry Pi Camera image on the top left, the Kinect's RGB image on the bottom left, and the Kinect's Point Cloud in the center.
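As a rough illustration of how the same Kinect data can be consumed outside of rviz, here is a minimal sketch of a subscriber for the point cloud that kinect2_bridge publishes. The /kinect2/sd/points topic name follows the iai_kinect2 defaults; the qhd and hd namespaces select higher resolutions.

```python
#!/usr/bin/env python
# Minimal subscriber for the Kinect point cloud published by kinect2_bridge
# (from iai_kinect2). The sd/qhd/hd namespaces select resolution; we use sd
# here. In rviz, the same topic is added as a PointCloud2 display.
import rospy
from sensor_msgs.msg import PointCloud2
from sensor_msgs import point_cloud2

def on_cloud(msg):
    # Count valid (non-NaN) points as a quick sanity check.
    points = point_cloud2.read_points(msg, field_names=("x", "y", "z"),
                                      skip_nans=True)
    rospy.loginfo("Cloud with %d valid points", sum(1 for _ in points))

rospy.init_node("kinect_cloud_check")
rospy.Subscriber("/kinect2/sd/points", PointCloud2, on_cloud)
rospy.spin()
```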
This was accomplished by Osvaldo, Samantha, and Kevin. Kevin retrieved the parts, while Osvaldo and Samantha tapped the 80/20 so that we could screw it all together.
80/20 Base Chassis Frame
This was accomplished by Kevin and Tong. Osvaldo helped build the arm, which is shown in the image below, and Samantha helped wire it. Meanwhile, Kevin and Tong used the hebiros ROS package to control the HEBI arm and move it to specified joint angles.
Initial HEBI Arm Constructed
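For reference, commanding joint angles through hebiros looks roughly like the sketch below. This is illustrative rather than our exact code: the group, family, and module names are placeholders that must match how the HEBI modules are actually configured, and the hebiros node must already be running.

```python
#!/usr/bin/env python
# Sketch of commanding joint angles through hebiros. The group, family,
# and module names here are placeholders; they must match the names
# configured on the actual HEBI modules.
import rospy
from hebiros.srv import AddGroupFromNamesSrv
from sensor_msgs.msg import JointState

rospy.init_node("hebi_command_example")

# Register a group with the hebiros node so it can talk to the modules.
rospy.wait_for_service("/hebiros/add_group_from_names")
add_group = rospy.ServiceProxy("/hebiros/add_group_from_names",
                               AddGroupFromNamesSrv)
add_group(group_name="arm", names=["base", "elbow"], families=["HEBI"])

# Joint commands are plain JointState messages on the group's command topic.
pub = rospy.Publisher("/hebiros/arm/command/joint_state",
                      JointState, queue_size=1)
cmd = JointState(name=["HEBI/base", "HEBI/elbow"],
                 position=[0.0, 1.57])  # target angles in radians

rate = rospy.Rate(100)  # stream the command so the modules hold position
while not rospy.is_shutdown():
    pub.publish(cmd)
    rate.sleep()
```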
Osvaldo, along with Kevin, finalized the base CAD model, which is shown below. Kevin ordered the majority of the parts, which are listed in the purchasing spreadsheet below.
Finalized Locomotion Base CAD
Parts Purchasing List
John and Osvaldo fabricated beautiful motor mounts for our base DC motors, which are shown in the image below.
Fabricated Base DC Motor Mounts
Osvaldo also worked on finalizing the CAD for the robot and generated a video, shown below, displaying the robot's degrees of freedom.
Simulation of our Full CAD Model
Below are photos displaying the different subsystems in our current CAD Model.
Updated Locomotion Design
Updated X-axis Gantry Design on Turntable
Updated Z-axis Gantry Design
Updated Arm Design
Close-up View of Granular Jammer
Updated Full System Assembly
Osvaldo and John worked on fabricating the base plate and then constructing the base with omni wheels and motors. Their efforts are illustrated in the photos below.
Locomotion Base Plate Machined
Locomotion Base Constructed
Locomotion Base Constructed Side View
After Osvaldo and John finished fabricating the base, Kevin worked on controlling the motors with simple Arduino code. Unfortunately, Kevin could not get a desktop power supply working with the robot, so the demonstration was conducted on the tabletop with the robot tethered to 2 lab power supplies and propped up in the corners by cardboard motor boxes. The results are shown in the video below.
Base Locomotion Test
Tong worked on object detection using the Kinect and a neural network in TensorFlow. His initial results are shown below. In addition, he started implementing some basic object detection using OpenCV, which is also shown below.
Initial Object Classification using Neural Nets
Initial OpenCV Object Detection Results with a Shuttlecock Valve
Initial OpenCV Object Detection Results with a Breaker Box
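The OpenCV side of this is color segmentation. A minimal sketch of the idea is below; the HSV bounds are placeholders and have to be tuned per object and per lighting condition.

```python
import cv2
import numpy as np

# Minimal color-segmentation detector. The HSV bounds below are
# placeholders for an orange object; real thresholds must be tuned.
LOWER = np.array([5, 120, 120])
UPPER = np.array([20, 255, 255])

def detect(frame):
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    # Morphological open to drop speckle noise before finding contours.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    # [-2] keeps this compatible with both OpenCV 3 and 4 return signatures.
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    if not contours:
        return None
    # Assume the largest blob is the target and return its bounding box.
    target = max(contours, key=cv2.contourArea)
    return cv2.boundingRect(target)  # (x, y, w, h)

frame = cv2.imread("station.png")  # placeholder image path
box = detect(frame)
if box is not None:
    x, y, w, h = box
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```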
Samantha worked on the inverse kinematics solution and got it working in MATLAB. Her simulation is shown in the figure below.
Initial Arm Inverse Kinematics Simulation in MATLAB
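For context, the core of an IK solution for a simplified two-link planar arm looks like the sketch below (written in Python rather than MATLAB). This is only an illustration: the actual arm's link lengths and degrees of freedom differ from this toy model.

```python
import numpy as np

def two_link_ik(x, y, l1, l2):
    """Closed-form IK for a 2-link planar arm (one elbow solution).

    Illustrative sketch only: the real arm's geometry and number of
    joints may differ from this simplified model.
    """
    r2 = x**2 + y**2
    # Law of cosines gives the elbow angle.
    c2 = (r2 - l1**2 - l2**2) / (2 * l1 * l2)
    if abs(c2) > 1:
        raise ValueError("Target out of reach")
    theta2 = np.arccos(c2)
    # Shoulder angle = angle to target minus the elbow's contribution.
    theta1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(theta2),
                                           l1 + l2 * np.cos(theta2))
    return theta1, theta2

# Example: reach (0.3, 0.2) m with two 0.25 m links (placeholder lengths).
print(two_link_ik(0.3, 0.2, 0.25, 0.25))
```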
Shown below are our complete slides from our midsemester presentation, which we presented in class to our peers, TAs, and instructors.
While it wasn't pretty, we were able to demonstrate the robot moving under manual control and running completely off the onboard 12V battery. In addition, we positioned 2 ultrasonic sensors on the sides of our robot to localize the robot within the test bed using the guide rails. Below are pictures showing our fully mobile robot.
Initial Base Wiring
Initial Base Wiring with Ultrasonic Sensors
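As a rough sketch of the localization idea (not our exact implementation), assuming one sensor faces each guide rail and the rail spacing is known, the two side readings give the robot's lateral offset in the test bed:

```python
# Illustrative math for the side-mounted ultrasonic sensors. Assumes one
# sensor faces each guide rail and that the rail spacing and robot width
# are known; the numbers here are placeholders, not measured values.
RAIL_SPACING = 1.2   # meters between the two guide rails (assumed)
ROBOT_WIDTH = 0.5    # meters between the two sensors across the robot (assumed)

def lateral_position(d_left, d_right):
    """Estimate the robot center's offset from the test bed centerline.

    Positive means the robot is shifted toward the right rail. Using both
    readings helps reject error in either single sensor.
    """
    offset = (d_left - d_right) / 2.0
    # Sanity check: the two readings plus the robot width should roughly
    # span the rail spacing; a large residual flags a bad echo.
    residual = (d_left + d_right + ROBOT_WIDTH) - RAIL_SPACING
    return offset, residual
```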
Tong was able to use OpenCV to draw bounding boxes around the objects at each station and then determine each object's orientation. His results are shown below.
Bounding Boxes around Breakers
Bounding Box around Shuttlecock Valve
Bounding Boxes around Orange Valve with Orientation of Green Marker
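A sketch of how the marker-based orientation can be computed: segment the valve and the green marker by color, then measure the angle of the marker's centroid about the valve's centroid. Again, the HSV bounds here are placeholders, not our tuned values.

```python
import cv2
import numpy as np

# Sketch of estimating the orange valve's orientation from its green
# marker: segment both colors, take the centroids, and measure the angle
# of the marker relative to the valve center. HSV bounds are placeholders.
ORANGE = (np.array([5, 120, 120]), np.array([20, 255, 255]))
GREEN = (np.array([45, 80, 80]), np.array([75, 255, 255]))

def centroid(mask):
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:
        return None
    return m["m10"] / m["m00"], m["m01"] / m["m00"]

def valve_angle(frame):
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    valve = centroid(cv2.inRange(hsv, *ORANGE))
    marker = centroid(cv2.inRange(hsv, *GREEN))
    if valve is None or marker is None:
        return None
    # Angle of the marker around the valve center, in degrees.
    return np.degrees(np.arctan2(marker[1] - valve[1],
                                 marker[0] - valve[0]))
```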
John and Osvaldo were able to get a prototype of the granular jammer working with the 12V air pump. An image of the prototype is attached below.
Granular Jammer Prototype 1
Osvaldo worked hard over spring break to fabricate the majority of the custom X and Z Gantry components. His efforts are illustrated in the photo below.
Initial Fabrication of X and Z Gantry
Kevin and Osvaldo worked out the approximate placement of each electronics component in the base before spring break. Then Osvaldo completely modeled the position of each electronic component in CAD to ensure the best fit. The photos below demonstrate how closely the fabricated robot matches the CAD render.
Rendering of Base Wiring and Top Plate with Turntable
Fabricated Top Plate with Turntable
Initial Base Wiring with Ultrasonic Sensors
Tong improved the object detection by collecting and labeling more images of each object, then retraining the neural network to make it more accurate and more robust to lighting changes. His results are shown in the video below.
Realtime Object Detection Test
Finally, Tong also implemented depth sensing of a target object by combining the Kinect's depth map with the color-segmentation object detection he implemented earlier in OpenCV. His results are again highlighted in the images below.
Depth Map (Left), Target Part in Color Image (Middle), Location of Target Part (Right)
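A minimal sketch of the fusion step is below, assuming the depth image is registered to the color image (as kinect2_bridge can provide) and using placeholder camera intrinsics; the real fx, fy, cx, cy values come from the camera calibration.

```python
import numpy as np

# Sketch of fusing the Kinect depth map with a color-segmentation mask to
# locate a target part in 3D. Assumes the depth image is registered to the
# color image; intrinsics below are placeholders, not calibrated values.
FX, FY, CX, CY = 365.0, 365.0, 256.0, 212.0

def locate_target(depth_mm, mask):
    """Return the (X, Y, Z) of the masked target in meters, camera frame."""
    ys, xs = np.nonzero(mask)
    depths = depth_mm[ys, xs]
    valid = depths > 0          # zero depth means no return at that pixel
    if not np.any(valid):
        return None
    # Median depth is robust to stray pixels bleeding into the mask.
    z = np.median(depths[valid]) / 1000.0
    u, v = np.mean(xs[valid]), np.mean(ys[valid])
    # Back-project the mask centroid through the pinhole camera model.
    return ((u - CX) * z / FX, (v - CY) * z / FY, z)
```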
Osvaldo and John completed the fabrication of the robot during the course of this week, and we were able to demonstrate our entire physical system, which is shown in the pictures and video below.
Manually Spinning Turntable
Robot Fully Constructed without Wiring (Side View)
Robot Fully Constructed without Wiring (Front View)
Unfortunately during the demo, we were unable to showcase the turntable actually moving because the stepper motor driver was not properly configured. However, we were able to remedy this shortly after the demo and show everything working in the video below.
X Gantry, Z Gantry, and Turntable Test
Unfortunately, the LED flashlight that Kevin had made previously was not a great match for the Raspberry Pi camera, so we decided to design a new camera mount with LEDs integrated into it to provide illumination. As a result, Tong did not get a chance to test his CV algorithms under LED illumination, and we were unable to complete this task this week.
Unfortunately, at the time of the demo, the robot was in two pieces, as shown below, because Kevin was unable to finish wiring the entire robot in time. Thus, we were unable to demonstrate autonomous base motion to a desired XY position in the test bed using the ultrasonic sensors.
Half-completed Wiring
Again, Kevin was unable to finish wiring the entire robot in time for the demo, so we were unable to complete this task this week either. Kevin did not want to rush the wiring; he wanted to do the job right once rather than fix mistakes that would arise from rushing. Attached below are some close-up photos of his in-progress wiring job. He completed wiring the battery to the fuse and set up the power button to switch on a relay that turns on the robot. In addition, he began soldering wires to an Adafruit semi-permanent breadboard, which will robustly consolidate and connect wires from the various motor controllers and sensors around the robot to the Arduino Mega.
Initial Wiring of Battery to Fuse and Power Switch
Semi-permanent Breadboard for Wiring to Arduino Mega
As Tong was unable to test his CV algorithms with LED illumination, he instead focused on porting our robot's CAD model into rviz so that we could properly visualize the robot alongside its sensor data. This will help us in the future with debugging the robot's planning and motions. A screenshot of rviz with the robot model and a point cloud from the Kinect is shown below.
Full Model of Robot in Simulation with Point Cloud
Unfortunately, Kevin finished wiring just in time for the system demo, and thus, he was unable to properly program the autonomous base motion of the robot to a desired XY position in the test bed. In addition, some wires got stuck in the wheels during the demo, which prevented the robot from moving properly. He will be able to complete this by the next system demo.
Unfortunately, as Kevin had finished wiring the base right before the demo, Tong did not have sufficient time to tune his computer vision algorithms. Thus, although the LED lights were functional, they actually made the CV worse: they overpowered the ambient light, and the threshold values could not handle the pure white light. Now that the LED lights and wiring are completed, Tong should have no issue completing this task by the next system demo.
John was able to 3D print a new mount for the granular jammer and, with Kevin's help, attach the balloon to it. We were able to demonstrate it properly actuating the valves and breakers during our demo. The balloon at the end is currently too big, but we plan to swap it for a properly sized balloon in the future.
Second Iteration of Granular Jammer End-effector
Unfortunately, as Kevin finished the wiring just before the demo, he was unable to program the controller for the X and Z gantries in time to properly control the gantry and arm. However, he did get the limit switches working, so he should be able to easily complete this task by the next system demo.
Kevin was able to finish wiring the robot except for putting protective wire mesh around the wires that lead to the top half of the robot. We still needed to debug some issues stemming from the turntable, which would require taking the robot apart again, at which point the mesh would have to be cut and redone. Thus, Kevin decided not to put on the wire mesh until the team was certain no more mechanical modifications would be made to the robot. A few photos of the wiring job are shown below.
Almost Finished Wiring Base
Almost Finished Wiring Base
Almost Finalized Top Wiring
Osvaldo was able to design a new Raspberry Pi camera mount with LEDs surrounding the camera, and John printed it in the Makerspace. Unfortunately, the 3D printer broke down while printing our piece, so it did not turn out as well as we were hoping, but it was still usable for our system demo. Osvaldo cleaned up the failed print a bit, and then Kevin wired all of the LEDs, screwed in the Raspberry Pi, and attached the mount to the HEBI motors. It turned out pretty well, as seen in the photos below. We plan to create a second iteration of the mount by the next system demo.
Raspberry Pi Camera Mount Model
Failed 3D Print of the Raspberry Pi Camera Mount
Raspberry Pi Camera Mount Model With LEDs
First Iteration of Raspberry Pi Camera Mount with LEDs