As we begin to wrap up this project, we have made some major strides over the past couple of weeks. Here is an update on our progress, along with who worked on what:
UI: Tyler
Navigation: Saket/Atharva/Joe
Object Retrieval: Tyler/Saket/Joe
Literature Review: Jaden/Tyler
In Week 8, our project saw significant advancements. We enhanced the UI to offer different destinations, enabling the robot to autonomously navigate to and clean specified areas. This update includes saved poses within the UI, giving users the flexibility to program the robot for tasks such as picking up items. We also made significant progress with ArUco markers: the robot can now detect the markers, navigate to them, pick up a toy off the ground when it identifies a toy, approach the toybox when it sees the toybox, or return to its home base when no markers are detected. We hope to finish the autonomous portion of the task in the coming week.
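As a rough illustration of the marker-based behavior described above, here is a minimal sketch of the dispatch logic. The marker IDs and function names are assumptions for illustration, not the exact identifiers used in our code.

```python
TOY_ID = 0       # hypothetical ArUco ID assigned to toys
TOYBOX_ID = 1    # hypothetical ArUco ID assigned to the toy box

def dispatch(detected_ids):
    """Choose a behavior based on which ArUco markers are currently visible."""
    if TOY_ID in detected_ids:
        return "pick_up_toy"      # navigate to the toy and grasp it
    if TOYBOX_ID in detected_ids:
        return "approach_toybox"  # navigate to the bin to drop the toy off
    return "return_home"          # nothing detected: go back to home base
```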
UI: We've made the robot noticeably easier and more convenient to use. We added a simple dropdown menu that lets users quickly send the robot to previously saved places, and we made it possible to save new locations directly from the teleop control screen, so users can update the robot's cleaning destinations with ease. We also added a feature that lets users save specific robot poses in manual mode and select them later from a dropdown menu, making repeated tasks easier to perform. Together, these updates make working with our robot a lot more straightforward and user-friendly, marking a big step forward in our project.
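Behind the dropdown, the idea is simply to keep a table of named poses and forward the selected one to the navigation stack. The sketch below is illustrative; the pose format, function names, and `send_goal` hook are assumptions rather than our exact code.

```python
saved_poses = {}  # name -> (x, y, yaw) in the map frame

def save_pose(name, x, y, yaw):
    """Called when the user saves the robot's current location from the teleop screen."""
    saved_poses[name] = (x, y, yaw)

def goto_saved_pose(name, send_goal):
    """Called when the user picks a name from the dropdown.
    `send_goal` is whatever function forwards the pose to the navigation stack."""
    if name in saved_poses:
        send_goal(*saved_poses[name])
```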
Navigation: We made significant progress getting the robot to travel from one part of the room to another when given a coordinate on the map. For the past couple of weeks we have been working on sending the robot a location in the room and having it find a path and autonomously travel there. Once we had that motion working, we created a list of saved locations in the room, integrated with the UI, so a user can save a new location and then select one of the saved locations to send the robot there. In the next few weeks we hope to integrate object retrieval and navigation together.
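For readers curious how a coordinate goal gets turned into autonomous motion, here is a minimal sketch assuming the standard ROS move_base action interface; our actual node wraps something like this with the UI callbacks described above, so the topic names and structure here are assumptions, not a definitive copy of our code.

```python
#!/usr/bin/env python
# Sketch: send the robot to an (x, y, yaw) goal in the map frame via move_base.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal
from tf.transformations import quaternion_from_euler

def send_goal(x, y, yaw):
    client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
    client.wait_for_server()

    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = 'map'
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = x
    goal.target_pose.pose.position.y = y
    q = quaternion_from_euler(0, 0, yaw)
    goal.target_pose.pose.orientation.x = q[0]
    goal.target_pose.pose.orientation.y = q[1]
    goal.target_pose.pose.orientation.z = q[2]
    goal.target_pose.pose.orientation.w = q[3]

    client.send_goal(goal)
    client.wait_for_result()
    return client.get_state()

if __name__ == '__main__':
    rospy.init_node('send_saved_goal')
    send_goal(1.0, 2.0, 0.0)  # example map coordinate
```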
Object Retrieval: This was by far the most difficult part of the project. Our main objective is for the robot to pick up an object or toy, drop it off in a basket/bin, and clean up the room. For the past couple of weeks we have focused on detecting a toy via its ArUco marker and correctly picking the toy up. After much tuning and debugging, the robot can now successfully pick up an object lying on the ground. In the next week we will attempt to combine this behavior with detecting the toy box and dropping the object off at the bin.
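The detection step itself can be sketched with OpenCV's aruco module. This is a simplified illustration, not our exact pipeline: the dictionary choice and the assumption that frames come in as BGR images from the robot's camera are ours for the example, and the call style below follows the pre-4.7 OpenCV aruco API (newer versions use `cv2.aruco.ArucoDetector` instead).

```python
import cv2

# Assumed marker dictionary; the real markers just need to match whatever was printed.
aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
params = cv2.aruco.DetectorParameters_create()

def detect_markers(frame):
    """Return the list of ArUco marker IDs visible in a BGR camera frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, aruco_dict, parameters=params)
    return [] if ids is None else ids.flatten().tolist()
```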
Literature Review: We reviewed literature to inform our testing and evaluation. We also made sure we were not losing sight of the robot's core purpose: that the target user, high-risk pregnant women, could use the robot to work around the physical challenges of pregnancy. When considering new features, we consulted the literature to judge how useful a feature would be and tested the robot as if we were the user.
This video shows the working version of the ArUco marker detection system on our autonomous robot. The robot runs in a loop where it recognizes and navigates to the toy, then the toybox, and back to the toy again, demonstrating that the system runs consistently and can detect different markers and navigate to each.
This video shows that we can give the robot a destination and it will navigate through the environment to reach it.
This video showcases our ability to navigate to a marker and pick up a toy from the ground.