Tony: During Week 8, I finished Post 11, which lets us save poses on the robot and replay them. Some of the poses I saved cover grabbing and retrieving objects, which is central to the premise of our project. The next steps are figuring out the poses needed to grab objects with the new gripper and combining that with visual information from the camera. A sketch of the save/replay flow follows the links below.
Links to code: https://drive.google.com/file/d/1x3LusiUA_p1FWIPFF7pk_wobYp6ftGfu/view?usp=sharing
https://drive.google.com/file/d/1_hEm1i56dABkGHL8Egtdski0V3myNMow/view?usp=drive_link
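Since the actual code lives in the linked files, here is only a minimal sketch of the save/replay idea, assuming a ROS 1 setup where the arm exposes a FollowJointTrajectory action; the action topic, pose file name, and pose names below are our placeholders, not necessarily what the post uses:

```python
# Minimal pose save/replay sketch (ROS 1). Assumes a running robot driver that
# publishes /joint_states and serves a FollowJointTrajectory action.
import json
import rospy
import actionlib
from sensor_msgs.msg import JointState
from control_msgs.msg import FollowJointTrajectoryAction, FollowJointTrajectoryGoal
from trajectory_msgs.msg import JointTrajectoryPoint

POSE_FILE = 'saved_poses.json'  # placeholder path

def save_pose(name):
    """Capture the current joint positions under a human-readable name."""
    msg = rospy.wait_for_message('/joint_states', JointState)
    try:
        with open(POSE_FILE) as f:
            poses = json.load(f)
    except (IOError, ValueError):
        poses = {}
    poses[name] = dict(zip(msg.name, msg.position))
    with open(POSE_FILE, 'w') as f:
        json.dump(poses, f, indent=2)

def replay_pose(name):
    """Send a stored pose to the trajectory controller as a single waypoint."""
    with open(POSE_FILE) as f:
        pose = json.load(f)[name]
    client = actionlib.SimpleActionClient(
        '/stretch_controller/follow_joint_trajectory',  # placeholder topic
        FollowJointTrajectoryAction)
    client.wait_for_server()
    goal = FollowJointTrajectoryGoal()
    goal.trajectory.joint_names = list(pose.keys())
    point = JointTrajectoryPoint()
    point.positions = list(pose.values())
    point.time_from_start = rospy.Duration(3.0)  # reach the pose in ~3 s
    goal.trajectory.points = [point]
    client.send_goal(goal)
    client.wait_for_result()

if __name__ == '__main__':
    rospy.init_node('pose_saver')
    save_pose('grab_object')   # e.g., run once after posing the arm by hand
    replay_pose('grab_object')
```

In practice the saved joint list may need filtering to only the joints the controller accepts (e.g., excluding wheel joints), but the record-to-file-then-replay structure is the core idea.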
Diana: During Week 8, I worked on Post 14 and the hardware. I stored the robot's position on the map and aligned it with the ArUco marker, but I haven't been able to replay the poses yet. The poses cover the three key locations in our project: the initial pose, the table, and the ArUco marker. Going forward, I need to properly implement pose replaying and integrate object detection, marker alignment, and navigation. It has been difficult to grasp the kitchen utensils used in our project (knife, fork, and bowl) with the current gripper, so I designed and 3D-printed an auxiliary tool for the gripper. The tool works well both for small, thin objects like knives and forks and for larger objects like bowls.
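A minimal sketch of how stored map poses could be replayed as navigation goals, assuming ROS 1 with AMCL localization and a move_base action server; the topic names, file name, and pose names are our placeholders rather than the exact code from Post 14:

```python
# Store the robot's map pose from AMCL and replay it as a move_base goal.
import json
import rospy
import actionlib
from geometry_msgs.msg import PoseWithCovarianceStamped
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

POSE_FILE = 'map_poses.json'  # placeholder path

def save_map_pose(name):
    """Record the robot's current pose in the map frame under a name."""
    msg = rospy.wait_for_message('/amcl_pose', PoseWithCovarianceStamped)
    p, q = msg.pose.pose.position, msg.pose.pose.orientation
    try:
        with open(POSE_FILE) as f:
            poses = json.load(f)
    except (IOError, ValueError):
        poses = {}
    poses[name] = [p.x, p.y, q.x, q.y, q.z, q.w]
    with open(POSE_FILE, 'w') as f:
        json.dump(poses, f, indent=2)

def go_to(name):
    """Replay a stored map pose as a move_base navigation goal."""
    with open(POSE_FILE) as f:
        x, y, qx, qy, qz, qw = json.load(f)[name]
    client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
    client.wait_for_server()
    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = 'map'
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = x
    goal.target_pose.pose.position.y = y
    goal.target_pose.pose.orientation.x = qx
    goal.target_pose.pose.orientation.y = qy
    goal.target_pose.pose.orientation.z = qz
    goal.target_pose.pose.orientation.w = qw
    client.send_goal(goal)
    client.wait_for_result()

if __name__ == '__main__':
    rospy.init_node('map_pose_replayer')
    save_map_pose('table')  # our three locations: 'initial', 'table', 'aruco'
    go_to('table')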
Evan: Over the past week, I worked on the robot's perception capability, specifically object recognition with YOLOv5. I modified the given deep-perception script so the robot recognizes only the objects we want, subscribes to custom topics that trigger it to compute the pose of every recognized object, and saves the computed poses to a file for later use. My next steps are to test the accuracy further, look for ways to improve it, and connect this component to the navigation and manipulation tasks. A simplified sketch of the trigger-and-filter flow follows the link below.
A complete codebase for this task: https://drive.google.com/drive/folders/1s1dvq8lXJW9E2c264apbVb61hO4Bm43_?usp=sharing
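A simplified sketch of the trigger-and-filter idea, assuming YOLOv5 loaded via torch.hub and a ROS 1 camera stream; the topic names, class list, and output file are placeholders, and for brevity it saves 2D boxes rather than the full 3D poses the actual script computes:

```python
# On a custom trigger topic, detect only the wanted object classes and
# save the detections to a file for downstream nodes.
import json
import rospy
import torch
from std_msgs.msg import Empty
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

WANTED = {'fork', 'knife', 'bowl'}  # only the objects our project needs
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')  # pretrained COCO model
bridge = CvBridge()
latest = {'img': None}

def on_image(msg):
    """Keep the most recent camera frame as an RGB array."""
    latest['img'] = bridge.imgmsg_to_cv2(msg, desired_encoding='rgb8')

def on_trigger(_msg):
    """Run detection on demand and write the filtered results to a file."""
    if latest['img'] is None:
        return
    results = model(latest['img'])
    df = results.pandas().xyxy[0]       # one row per detection
    df = df[df['name'].isin(WANTED)]    # keep only the classes we care about
    detections = df[['name', 'confidence',
                     'xmin', 'ymin', 'xmax', 'ymax']].to_dict(orient='records')
    with open('detections.json', 'w') as f:  # placeholder output path
        json.dump(detections, f, indent=2)

if __name__ == '__main__':
    rospy.init_node('object_detector')
    rospy.Subscriber('/camera/color/image_raw', Image, on_image)
    rospy.Subscriber('/detect_trigger', Empty, on_trigger)
    rospy.spin()
```

In the real pipeline, each box center would be combined with the aligned depth image and camera intrinsics to produce the saved object poses.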