Learning Module #1
Franco and Tanner created lessons on GitHub and the broader concept of version control. The lessons explain that these tools exist primarily to share, maintain, and distribute code. Franco and Tanner also led a hands-on activity, helping other students create accounts, set up an example repository, and commit changes to it. To illustrate version control in practice, they briefly covered using Visual Studio Code and github.dev to open repositories and make changes.
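The local half of that workflow can be sketched by driving git's command-line interface from Python (creating the account and remote repository happens on github.com, so only the local init/commit steps are shown; the file name, author details, and commit message below are made up):

```python
import pathlib
import subprocess
import tempfile

def run(args, cwd):
    """Run a git command in the given directory and return its stdout."""
    return subprocess.run(args, cwd=cwd, check=True,
                          capture_output=True, text=True).stdout

repo = tempfile.mkdtemp()                       # stand-in for a cloned repo
run(["git", "init"], repo)                      # create an empty repository
run(["git", "config", "user.email", "student@example.com"], repo)
run(["git", "config", "user.name", "Student"], repo)

# Make a change, stage it, and commit it -- the same cycle the activity taught.
pathlib.Path(repo, "hello.txt").write_text("first change\n")
run(["git", "add", "hello.txt"], repo)
run(["git", "commit", "-m", "First commit"], repo)

log = run(["git", "log", "--oneline"], repo)    # one line per commit
```

From here, `git push` would publish the commit to the GitHub repository created in the activity.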
Learning Module #2
Franco and Desmond worked on Computer-Aided Design (CAD) and then 3D printed models that they designed or downloaded online. They spent time learning how to design and export 3D objects, then slice the models and export G-code for the 3D printers. They printed several objects, including an infinity cube with joints that they found online. They also adjusted the 3D printer's settings to resolve recurring print failures.
Learning Module #3
Franco and David worked on implementing a prompt-based image segmentation model to locate and identify objects in an image, returning their pixel locations. They used CLIPSeg, which combines a multimodal vision-language model with a segmentation head to perform image segmentation. CLIPSeg is a zero-shot model, meaning it can segment objects it has never seen before based solely on text prompts.
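Models like CLIPSeg emit a per-pixel logit map for each prompt; turning that into the "pixel locations" mentioned above is a small post-processing step. The sketch below shows that step on a toy logit map (the function name and the 0.5 threshold are illustrative, not the project's actual values):

```python
import numpy as np

def mask_to_pixel_locations(logits, threshold=0.5):
    """Convert a CLIPSeg-style per-pixel logit map into a binary mask and
    the (row, col) pixel locations of the segmented object."""
    probs = 1.0 / (1.0 + np.exp(-logits))    # sigmoid over per-pixel logits
    mask = probs > threshold                 # binary segmentation mask
    rows, cols = np.nonzero(mask)            # pixel coordinates of the object
    return mask, list(zip(rows.tolist(), cols.tolist()))

# Toy 3x3 logit map: only the center pixel is strongly positive.
logits = np.array([[-4.0, -4.0, -4.0],
                   [-4.0,  4.0, -4.0],
                   [-4.0, -4.0, -4.0]])
mask, locations = mask_to_pixel_locations(logits)
# locations -> [(1, 1)]
```

In the real pipeline, `logits` would come from the model's output for a given text prompt, upsampled to the input image's resolution.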
Learning Module #4
Franco and Sienna built a 3D-printed RC tank. They used a Raspberry Pi 5 to control the motors, and the tank is driven remotely from a laptop's keyboard. A script running on the laptop records the arrow keys pressed and sends them to the Raspberry Pi over sockets using ZeroMQ.
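On the receiving side, the Raspberry Pi has to turn the arrow keys into track speeds for a differential-drive ("tank") chassis. A minimal sketch of that mapping is below; the key names, speed values, and function names are illustrative, and in the real project the keys would arrive serialized over a ZeroMQ socket rather than as a local set:

```python
# Map arrow-key presses to (left_track, right_track) speed commands.
KEY_TO_TRACKS = {
    "up":    ( 1.0,  1.0),   # both tracks forward
    "down":  (-1.0, -1.0),   # both tracks backward
    "left":  (-1.0,  1.0),   # spin left in place
    "right": ( 1.0, -1.0),   # spin right in place
}

def tracks_for_keys(pressed):
    """Sum the contributions of all currently pressed arrow keys,
    clamping each track speed to [-1, 1]."""
    left = right = 0.0
    for key in pressed:
        dl, dr = KEY_TO_TRACKS.get(key, (0.0, 0.0))
        left += dl
        right += dr
    clamp = lambda v: max(-1.0, min(1.0, v))
    return clamp(left), clamp(right)

# "up" + "right" -> arc forward-right: left track full, right track stopped.
print(tracks_for_keys({"up", "right"}))  # (1.0, 0.0)
```

Summing per-key contributions lets key combinations (e.g. forward plus a turn) fall out naturally instead of needing a case for every pair.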
Learning Module #5
Franco and Charlie used Open3D and depth estimation models to convert images into 3D point clouds. They then extended the project to take in multiple images and reconstruct scenes from multiple viewpoints, giving a 3D perspective of the environment from any angle. At the end of the project, they surveyed other open-source 3D reconstruction projects on GitHub and got a working implementation of DUST3R, which reconstructs a 3D environment from multiple images with much higher accuracy than their own program.
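The core of converting a depth map to a point cloud is back-projection through the pinhole camera model. Open3D provides this directly, but the underlying math can be sketched in a few lines of NumPy (the intrinsics `fx, fy, cx, cy` below are toy values, not a real camera calibration):

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (meters) into an (N, 3) array of 3D points
    using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]                 # drop pixels with no depth

# Toy 2x2 depth map with unit focal lengths and principal point at (0, 0).
depth = np.array([[1.0, 1.0],
                  [0.0, 2.0]])
pts = depth_to_point_cloud(depth, fx=1.0, fy=1.0, cx=0.0, cy=0.0)
```

Merging clouds from multiple viewpoints then reduces to transforming each cloud by its camera pose before concatenating, which is the part DUST3R handles far more robustly.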
Learning Module #6
Franco and Jack programmed a robot arm to follow and respond to hand movements using MediaPipe's hand pose tracking pipeline. When the user's hand closed, the robot's gripper also closed; when the hand shifted left, the arm moved left. This allows seamless control of the robot arm without any extra equipment: a camera that can see the hand is enough.
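The open/closed decision can be sketched from MediaPipe's 21-landmark hand layout (the wrist is landmark 0; the index/middle/ring/pinky fingertips are landmarks 8, 12, 16, and 20, with their knuckles at 5, 9, 13, and 17). The curl test and majority threshold below are illustrative simplifications, not the project's actual rule:

```python
import math

# Indices from MediaPipe's 21-point hand landmark layout.
WRIST = 0
FINGERTIPS = (8, 12, 16, 20)
KNUCKLES = (5, 9, 13, 17)   # the MCP joints below each fingertip

def hand_is_closed(landmarks):
    """Classify a hand as closed when most fingertips lie closer to the
    wrist than their knuckles do (i.e. the fingers are curled in).
    `landmarks` is a list of 21 (x, y) points."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    wrist = landmarks[WRIST]
    curled = sum(
        dist(landmarks[tip], wrist) < dist(landmarks[mcp], wrist)
        for tip, mcp in zip(FINGERTIPS, KNUCKLES)
    )
    return curled >= 3   # majority of fingers curled -> closed fist
```

Each frame, the result of this check would drive the gripper, while the wrist landmark's horizontal position drives the arm's left/right motion.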
Learning Module #7
Franco and Dylan aimed to develop an autonomous navigation system from scratch that used real-time depth estimation to build a 2D obstacle grid and navigate around obstacles in a 3D environment. The system employed an OrangePi 5 and a Google Coral TPU to accelerate processing, allowing real-time depth estimation on a lightweight, power-limited FRC robot.
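Collapsing 3D depth points into a 2D obstacle grid can be sketched as follows; the coordinate convention (x forward, y left, z up, in meters), grid resolution, and ground-filtering height are all illustrative assumptions, not the values used on the robot:

```python
import numpy as np

def occupancy_grid(points, cell_size=0.1, grid_dim=10):
    """Flatten 3D obstacle points onto a 2D occupancy grid in front of the
    robot. Each True cell marks at least one above-ground point."""
    grid = np.zeros((grid_dim, grid_dim), dtype=bool)
    for x, y, z in points:
        if z < 0.05:                                # ignore ground-level returns
            continue
        row = int(x / cell_size)                    # distance ahead of the robot
        col = int(y / cell_size) + grid_dim // 2    # center robot on the columns
        if 0 <= row < grid_dim and 0 <= col < grid_dim:
            grid[row, col] = True
    return grid

# One obstacle 0.35 m ahead and slightly to the left of the robot.
grid = occupancy_grid([(0.35, 0.12, 0.5)])
```

A path planner can then run over this boolean grid, which is far cheaper than planning in the full 3D point cloud on power-limited hardware.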