Teleop: Joe, Tyler
Project component integration: Jaden, Joe, Saket, Tyler, Atharva
Over the past week, our project has taken significant strides toward a fully integrated solution that helps high-risk pregnant women manage household chores. After developing and testing each component individually, we shifted our focus to combining them into a single, cohesive autonomous action sequence. This phase brought together our navigation system, which now explores all saved areas of a home, with our object detection and retrieval pipeline, which targets toy cleanup. Integrating these components lets Robo-Assist adapt dynamically to any home environment, keeping the system versatile and user-friendly.
We've also made substantial progress on our teleoperation (teleop) system. It has been refined for reliability and ease of use, with an emphasis on letting users switch between modes effortlessly. We also added functionality to the buttons on the autonomous tab of the interface ("clean the room" and "go to home"). With these changes, the interface now includes all of the functionality we aimed for at the start of the quarter: every control is accessible at any moment, so users can direct Robo-Assist's actions according to their immediate needs or preferences. This week's efforts make Robo-Assist a truly adaptive and responsive aide, capable of autonomously navigating and tidying up a home while granting users complete control over its operations. Together, these advances mark an important milestone toward a solution that improves quality of life for our target users.
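To illustrate the interface behavior described above, here is a minimal sketch of the mode-switching logic: users can toggle between teleop and autonomous control at any time, and the autonomous-tab buttons kick off an autonomous action. All class and method names here (`InterfaceController`, `press`, `switch_mode`) are hypothetical placeholders, not our actual interface code.

```python
from enum import Enum, auto

class Mode(Enum):
    TELEOP = auto()
    AUTONOMOUS = auto()

class InterfaceController:
    """Toy model of the interface: mode switching is always available,
    and the autonomous tab exposes 'clean the room' and 'go to home'."""

    def __init__(self):
        self.mode = Mode.TELEOP
        self.last_command = None

    def switch_mode(self, mode: Mode) -> None:
        # Switching is allowed at any moment, mirroring the
        # "accessible at any moment" behavior described above.
        self.mode = mode

    def press(self, button: str) -> str:
        # Pressing an autonomous-tab button implies autonomous mode.
        if button in ("clean the room", "go to home"):
            self.mode = Mode.AUTONOMOUS
            self.last_command = button
            return f"starting: {button}"
        raise ValueError(f"unknown button: {button}")

ui = InterfaceController()
ui.press("clean the room")
assert ui.mode is Mode.AUTONOMOUS
ui.switch_mode(Mode.TELEOP)  # user interrupts autonomy at any time
```

The key design point is that no mode ever locks the user out: a teleop command can always interrupt an autonomous action.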
Below, we have a demo of the robot picking up a toy and returning it to the box. We implemented this functionality by detecting an ArUco marker, calculating the pose the robot needs to reach, running the grasp logic on the robot to pick the toy up off the ground, detecting the toy box marker, navigating to the toy box, and finally placing the toy into the box.
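The pickup-and-return sequence above can be sketched as a simple pipeline. This is an illustrative sketch only: perception is abstracted to marker poses already extracted from ArUco detections, and names like `FakeRobot`, `navigate_to`, and `grasp_pose` are hypothetical, not our actual ROS interfaces.

```python
import math
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float
    theta: float

def grasp_pose(marker: Pose, standoff: float = 0.3) -> Pose:
    """Offset the detected marker pose by a fixed standoff along its
    facing direction so the gripper ends up over the toy (illustrative)."""
    return Pose(marker.x - standoff * math.cos(marker.theta),
                marker.y - standoff * math.sin(marker.theta),
                marker.theta)

class FakeRobot:
    """Stand-in robot that just records the actions it is asked to do."""
    def __init__(self):
        self.log = []
    def navigate_to(self, pose: Pose):
        self.log.append(("navigate", round(pose.x, 2), round(pose.y, 2)))
    def grasp(self):
        self.log.append(("grasp",))
    def release(self):
        self.log.append(("release",))

def cleanup_sequence(markers: dict, robot) -> None:
    toy = markers["toy"]                  # 1. detect toy's ArUco marker
    robot.navigate_to(grasp_pose(toy))    # 2. move to the computed grasp pose
    robot.grasp()                         # 3. grasp the toy on the ground
    box = markers["toy_box"]              # 4. detect the toy box marker
    robot.navigate_to(box)                # 5. navigate to the toy box
    robot.release()                       # 6. drop the toy into the box

markers = {"toy": Pose(1.0, 0.0, 0.0), "toy_box": Pose(2.0, 1.0, math.pi)}
bot = FakeRobot()
cleanup_sequence(markers, bot)
assert [step[0] for step in bot.log] == ["navigate", "grasp", "navigate", "release"]
```

Keeping the sequence as a flat list of steps like this made it easy to test each stage (marker detection, approach, grasp, deposit) in isolation before chaining them together.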