In the ROBOCAM project, we physically build a 5-DOF robot camera arm capable of active subject tracking. The project is inspired by industrial robotic camera arms like MILO and BOLT, which professional camera crews use for shooting movies and commercials. Using our team's combined software and hardware expertise, we will produce a functional, miniature robot arm that can film visually impressive shot sequences while actively tracking a moving target.
ROBOCAM is built on a purchased off-the-shelf phone stand, which we augmented with 3D-printed parts and actuated as a robotic arm via 3D-printed servo attachments at the joints. Our sensing is centered on computer vision: a camera mounted on the end effector supplies the information from which we compute the required inverse kinematics, so that the arm actively tracks and follows a moving primary subject.
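As a rough illustration of this camera-in-hand tracking idea (a sketch under assumptions, not the project's actual code), the snippet below uses an OpenCV color-threshold detector and reduces the vision problem to measuring how far the target sits from the image center; the HSV bounds are placeholder values.

```python
import cv2
import numpy as np

# Placeholder HSV bounds for the tracked target (illustrative values only).
HSV_LO = np.array([100, 120, 70])
HSV_HI = np.array([130, 255, 255])

def target_offset(frame):
    """Return the target's pixel offset from the image center, or None if not visible."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, HSV_LO, HSV_HI)
    m = cv2.moments(mask)
    if m["m00"] < 1e-3:                      # no target pixels found
        return None
    cx = m["m10"] / m["m00"]                 # centroid of the thresholded region
    cy = m["m01"] / m["m00"]
    h, w = frame.shape[:2]
    return cx - w / 2, cy - h / 2            # (dx, dy) error in pixels
```

That pixel offset is the error signal that the inverse-kinematics and control layers then drive toward zero.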
This project is mechanically interesting because it extends the existing frame of a phone stand into a robot arm, which makes the entire project significantly more affordable and feasible within our timeline. While many independent hobbyists have created robot arms from scratch, ours is among the first to build on an existing frame. The project is also interesting at the software level because it offers active subject tracking: others have built arms that can be sent to specific global poses by user input, but to our knowledge none have involved CV-based tracking like ours will. By its nature, ROBOCAM explores in depth many of the principal topics covered in this course, namely vision, manipulator kinematics, motion planning, and controls. Furthermore, the project is novel because it is specifically a hardware project: beyond a simple Sawyer simulation, our design criteria are fundamentally focused on constructing a physical robotic camera arm with real-time motion-tracking capability.
For our project, sensing came from a Raspberry Pi camera used to track the desired target. To have our camera robot successfully track that target, we needed to provide a rapidly updating set of motion plans for it to execute, which was our key planning component. Lastly, to execute each motion plan, the robot had to rotate its servos to specification, and therein lies our central actuation component.
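A minimal sketch of that sense-plan-act cycle is shown below, assuming the hypothetical `target_offset` helper from the earlier sketch and a placeholder `set_joint_angles` servo interface; it also reduces the 5-DOF arm to two illustrative joints, and the gains and loop rate are illustrative rather than tuned values.

```python
import time
import cv2

# Placeholder gains mapping pixel error to joint-angle changes (radians per pixel).
K_PAN, K_TILT = 0.0008, 0.0008

def set_joint_angles(pan, tilt):
    """Hypothetical servo interface, e.g. PWM commands to the pan and tilt joints."""
    pass

def track(camera_index=0):
    cap = cv2.VideoCapture(camera_index)
    pan, tilt = 0.0, 0.0
    try:
        while True:
            ok, frame = cap.read()              # sensing: grab a camera frame
            if not ok:
                break
            offset = target_offset(frame)       # sensing: locate the target
            if offset is not None:
                dx, dy = offset
                pan -= K_PAN * dx               # planning: update joint targets
                tilt -= K_TILT * dy
                set_joint_angles(pan, tilt)     # actuation: command the servos
            time.sleep(0.02)                    # replan at roughly 50 Hz
    finally:
        cap.release()
```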
Hobbyists have built completely 3D-printed robot arms, as well as arms powered by servos jury-rigged to existing frames. However, none of these arms offer the camera-tracking control that our project will use. At the highest end, professional videographers use tools like the \$10,000+ MILO and BOLT robot arms to achieve very precise camera movements along a pre-defined path. To the best of our knowledge, however, these arms do not offer customizable target tracking for the end user.
As previously described, camera-affixed robotic arms are not in themselves unique and have been built and used in many contexts; part of what gives our project real-world relevance is that ROBOCAM must also tackle the nontrivial problem of affordability. From basing our arm on a very inexpensive off-the-shelf phone stand to structuring our vision capabilities around a Raspberry Pi camera, our work contrasts with similar projects, which generally rely on custom-made mechanical parts, costly servos, and a high-end camera to mimic the range and quality of MILO or BOLT. By prioritizing reproducibility and affordability, we make our project's outputs widely accessible.