Colostomy Care Robot



The robot is designed to remove colostomy bags and dispose of them safely.

Post 21:   Project Report

colostomy-care-final-report.pdf

Post 20:   Teamwork Reflection

We have adopted the strategies from Post #4 for the most part, but since we do not need to create any additional hardware for the Stretch robot, Steven has taken the role of working on the User Interface instead. Varad worked on the perception side to detect the ArUco marker, and Matthew worked on the programming by demonstration part.

One thing that worked really well for our team was meeting 3-4 times a week to work on the project. Since we were often coding and testing on the robot together, we could easily recognize problems that we might encounter with our solution and discuss them right away. Conducting multiple tests on the actual robot in different scenarios helped us identify problems in our code, which was beneficial in creating a robust solution that worked most of the time. We always worked together, ensuring that everyone could learn from different domains rather than just focusing on their designated tasks. This approach helped everyone become familiar with the robot's system.

We believe that we should have tried filling the bag with water or some other substance to get much closer to a real-world scenario. Developing a solution for that would have been much more realistic.

Post 19:   Reflection

Team:

a) What was most fun was controlling the robot for the first time with the tele-op.  

b) What was most useful was going through the tutorials, as we didn't have prior experience with ROS, so getting a good foundation was important.

c) What was not useful was the documentation, which was frequently insufficient throughout the learning process, making roadblocks a regular issue and forcing us to scour the web for solutions.

d) What would have been useful, but was missing, was more consistent navigation software, as we found inconsistencies in the robot's navigation even when moving to static locations.


Steven:

a) The most fun part of this project has been learning how to operate a robot and developing it so that parts of its tasks can be automated. Working with my team has also been fun.

b) The most useful part of the course has been having Maya, Vinitha, and Noah around to answer our questions. Whether it is a technical issue or a project-related problem, they are all effective at answering questions and guiding us in the right direction.

c) What was not useful were some of the tutorials on Hello Robot's website, for example the MoveIt tutorial, which was not working; we were told to skip it after trying to get it working for a day.

d) Having a well-documented repo for testing, would be helpful from the start. For example, any troubleshooting that may be needed for new students who are using stretch. For example, the first few weeks have been figuring out what's not working in the hello-robot's tutorial as they are not up to date. It might be helpful to tell students to use the HCRLab's repos from the start, and make sure that repos are working as expected from the start, such that students can spend more time working on the actual project.


Varad: 

a) Working in a team and exchanging ideas and solutions for specific problems was the most enjoyable part. I learned many different ways to solve problems. Therefore, every time we had brainstorming sessions, they were incredibly productive.

b) The most useful assistance came from our TAs, Vinitha and Noah. They frequently checked in with our group to see if we had any doubts or questions, making it easy for us to seek their help. Their methodical approach to solving errors, especially those we encountered in ROS, was particularly useful. Whenever Vinitha helped us resolve an issue, I made sure to note down the debugging process. This strategy greatly assisted our group, enabling us to resolve subsequent problems without further assistance.

c) What was not useful: I think some of the MoveIt tutorials were not directly related to our project.

d) Improvement: I think more should be explained about ROS project structure: how we should maintain each stack, and how those stacks should be maintained and deployed on the robot.



Matthew:

a) What was most fun was getting to practice designing a product, the robot, and having to think about how our system could be most applicable and useful to a patient. Getting to think from a different perspective was very interesting.

b) What was most useful was meeting with the team in person frequently. This enabled everyone to be on the same page throughout our development process, and we could get immediate feedback on ideas.

c) What was not useful were the robot's limitations in navigation; there were degrees of error that we found substantial enough to leave our robot unable to succeed at times without tele-operation.

d) What would have been useful, but was missing, was greater cooperation between student teams. It would have been nice if the different teams interacted and communicated more. This would have been helpful for lifting each other up and learning different approaches to problems.

Post 18:   Script



Script for Colostomy Care Robot

Post 17:   Week 8 Progress

17 a) We completed the navigation stack for our project. The user will be able to set the robot to navigate to different locations: the patient, the bin, and the initial location of Stretch-2. The user will be able to send Stretch-2 to these locations using the web interface. We also implemented Stop Navigation and Resume Navigation buttons so that if something goes wrong in the locomotion, the user can stop Stretch-2 whenever required.

Currently, the navigation is not perfect and may require additional parameter tuning.

We have a fixed initial pose for the robot; this allows us to send the 2D pose estimate to the navigation stack without requiring the user to input the location.
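Below is a minimal sketch (not our exact code) of how a fixed initial pose can be sent to the navigation stack, assuming ROS 1 with AMCL subscribed to the standard /initialpose topic. The pose values are placeholders, not our actual map coordinates.

```python
#!/usr/bin/env python
# Sketch: publish a fixed initial pose to AMCL's /initialpose topic (ROS 1).
# The coordinates below are placeholders, not our actual map values.
import rospy
from geometry_msgs.msg import PoseWithCovarianceStamped

def publish_initial_pose(x, y, qz, qw):
    pub = rospy.Publisher('/initialpose', PoseWithCovarianceStamped,
                          queue_size=1, latch=True)
    msg = PoseWithCovarianceStamped()
    msg.header.frame_id = 'map'
    msg.header.stamp = rospy.Time.now()
    msg.pose.pose.position.x = x
    msg.pose.pose.position.y = y
    # Planar orientation as a quaternion; only z and w are needed.
    msg.pose.pose.orientation.z = qz
    msg.pose.pose.orientation.w = qw
    # Small covariance, since the docked starting pose is well known.
    msg.pose.covariance[0] = 0.25   # x variance
    msg.pose.covariance[7] = 0.25   # y variance
    msg.pose.covariance[35] = 0.07  # yaw variance
    pub.publish(msg)

if __name__ == '__main__':
    rospy.init_node('set_initial_pose')
    rospy.sleep(1.0)  # give the latched publisher time to connect
    publish_initial_pose(0.0, 0.0, 0.0, 1.0)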


17 b) We are working on the user interface for our project and planning to make it more user-friendly.


17 c) Currently, our group is working on removing the bag using the Programming by Demonstration concept, where there are three sequences of poses:

Post 16:   Project Proposal 

Robotics_Capstone_Project_Proposal.pdf

Post 15:   

Environmental

Positive: The caregiver will not have to drive to the patient's home, they can teleoperate the robot to perform tasks instead. This will reduce carbon emissions from cars.

Negative: Additional electricity will be used to charge the robot.  If the robot is broken, its parts may become additional waste instead of being recycled.

Societal

Positive: A societal benefit would be greater autonomy in those with colostomy bags, providing greater psychological wellness for patients, and thus allowing patients to participate in society to a greater extent. 

Negative: Since the robot may be using its camera continuously, it will bring up privacy concerns.

Economic 

Positive: An economic benefit of a colostomy care robot is the creation of different jobs, whether in robot production, robot maintenance, or competing colostomy care robot companies.

Negative: An economic detriment of a colostomy care robot would be the loss of roles that caretakers can have in caring for patients with colostomy bags. 

Post 14:   

To be implemented [Yellow]:


Minimum Viable Product:

Stretch goals:

Post 13:  


1) We will need to map the environment/area where the patient's bed/chair is located so that the robot will be able to understand the layout of the environment. With the help of mapping libraries, we can achieve this. Once the environment has been mapped, the robot can localize itself within the map. The localization technique allows the robot to use sensor data to estimate its position within the map. This information will be useful for planning the robot's movements.


2) For planning, the robot will need to generate a path from its current position to the location of the patient. This can be achieved using global path planning. We can also try out different planning algorithms like A*, Breadth-First Search (BFS), Depth-First Search (DFS), and Dijkstra's algorithm.


3) We are planning to develop a web interface that will have a button to call the robot. With the help of mapping, we will know the coordinates of the patient's bed/chair (assuming they are constant), so we can send those coordinates to the robot, and the robot can go there in the proper orientation.
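As a rough sketch of how the "call the robot" button in point 3 could map to a navigation goal, assuming ROS 1 with a running move_base action server (the saved coordinates are placeholders, not real map values):

```python
#!/usr/bin/env python
# Sketch: send a saved location to move_base as a goal (ROS 1, actionlib).
# Coordinates are placeholders; a real system would load them from the map.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

LOCATIONS = {  # hypothetical saved poses: x, y, quaternion z, w
    'patient': (1.5, 2.0, 0.0, 1.0),
    'bin':     (3.0, 0.5, 0.7, 0.7),
}

def go_to(name):
    client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
    client.wait_for_server()
    x, y, qz, qw = LOCATIONS[name]
    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = 'map'
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = x
    goal.target_pose.pose.position.y = y
    goal.target_pose.pose.orientation.z = qz
    goal.target_pose.pose.orientation.w = qw
    client.send_goal(goal)
    client.wait_for_result()
    # client.cancel_all_goals() is what a "Stop Navigation" button would call.

if __name__ == '__main__':
    rospy.init_node('call_robot')
    go_to('patient')
```

The same action client's cancel_all_goals() call is how a Stop Navigation button could interrupt the robot mid-route.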

Post 12:

481C Research Papers - Summary

Post 11:

Programming by Demonstration:

In this video, we recorded different poses of the robotic arm and the gripper to ensure the gripper grasped the bag properly and removed it gently. The robot started at the initial position, which was the home position.


1) We slid the arm down so the gripper could grasp the bag.

2) We moved the robot's wrist to properly align it with the bag.

3) We closed the gripper.

4) We gently moved the wrist to remove the bag from the body.

5) We returned the arm to the home pose to prepare for throwing the bag away.
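Below is a rough sketch of how such a recorded sequence can be replayed with Hello Robot's stretch_body Python API. The joint values are illustrative, not the poses we actually recorded, and the fixed sleep is a simplification; real code should poll joint status instead.

```python
# Sketch: replay a recorded pose sequence on Stretch via stretch_body.
# All joint targets below are illustrative placeholders.
import time
import stretch_body.robot

robot = stretch_body.robot.Robot()
robot.startup()

# Each pose: lift height (m), arm extension (m), wrist yaw (rad),
# gripper command (Hello Robot's gripper range, negative = closed).
poses = [
    {'lift': 0.80, 'arm': 0.30, 'yaw': 0.0, 'grip': 50.0},   # 1) reach bag
    {'lift': 0.80, 'arm': 0.30, 'yaw': 0.4, 'grip': 50.0},   # 2) align wrist
    {'lift': 0.80, 'arm': 0.30, 'yaw': 0.4, 'grip': -50.0},  # 3) close grip
    {'lift': 0.85, 'arm': 0.25, 'yaw': 0.4, 'grip': -50.0},  # 4) peel away
    {'lift': 0.90, 'arm': 0.10, 'yaw': 0.0, 'grip': -50.0},  # 5) home pose
]

for pose in poses:
    robot.lift.move_to(pose['lift'])
    robot.arm.move_to(pose['arm'])
    robot.end_of_arm.move_to('wrist_yaw', pose['yaw'])
    robot.end_of_arm.move_to('stretch_gripper', pose['grip'])
    robot.push_command()          # execute the queued joint commands
    time.sleep(3.0)               # crude wait between poses

robot.stop()
```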




My Movie 2.mov

Programming by Demonstration (CODE)


My Movie 3.mov

Post 10:

The perception capabilities we implemented for the Stretch robot to perceive the colostomy bag for removal necessitated attaching an ArUco marker to the paper extension at the top end of the colostomy bag. This enabled the Stretch robot to identify the position of what it needed to grasp.
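A minimal sketch of the detection step, assuming opencv-contrib-python with the pre-4.7 aruco API (newer versions use cv2.aruco.ArucoDetector instead); the camera intrinsics here are placeholders that would come from calibration:

```python
# Sketch: detect an ArUco marker in a camera frame and estimate its pose
# with OpenCV's aruco module. camera_matrix / dist_coeffs are placeholder
# values standing in for real calibration data.
import cv2
import numpy as np

aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_6X6_250)
params = cv2.aruco.DetectorParameters_create()
camera_matrix = np.array([[600.0, 0, 320], [0, 600.0, 240], [0, 0, 1]])
dist_coeffs = np.zeros(5)

def find_marker(frame, marker_length_m=0.04):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, aruco_dict,
                                              parameters=params)
    if ids is None:
        return None  # no marker visible in this frame
    # Pose of each detected marker relative to the camera.
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, marker_length_m, camera_matrix, dist_coeffs)
    return tvecs[0]  # 3D position of the first marker (the grasp target)
```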



Post 9:

Target Solution:

The simplest feasible perception system for our needs is one that can identify the person and the bag without any additional markers. For object detection, we will look into using You Only Look Once (YOLO), a real-time object detection system that creates a bounding box around an object in an image. The data we need to collect are images of the colostomy bag in different orientations and from different perspectives, both attached to a person and not.

Some Libraries:

The training procedure includes:

1. Collecting images, labeling them, and drawing boxes around the colostomy bag.

2. Having the dataset split into 80-10-10 for training, validation, and testing.

3. Training the YOLO model and evaluating performance.
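As an illustrative sketch of steps 2 and 3 with an off-the-shelf implementation, assuming the ultralytics package and a hypothetical colostomy.yaml dataset config pointing at the 80-10-10 splits:

```python
# Sketch: train and evaluate a YOLO detector on the colostomy-bag dataset
# using the ultralytics package. "colostomy.yaml" is a hypothetical dataset
# config describing the train/val/test image splits.
from ultralytics import YOLO

model = YOLO('yolov8n.pt')                        # start from pretrained weights
model.train(data='colostomy.yaml', epochs=100, imgsz=640)
metrics = model.val()                             # evaluate on the validation split
print(metrics.box.map50)                          # mAP@0.5 for the bag class

# Inference on a single frame: bounding boxes around detected bags.
results = model('bag_test_image.jpg')
for box in results[0].boxes:
    print(box.xyxy, box.conf)
```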


Minimum Viable Fallback Solution:

Modify the environment for perception: 

An alternative approach we can take is to use AR markers to mark the colostomy bag as well as the person. This option is quite feasible; the AR marker will mark the location where the arm will grab the colostomy bag.

Human-in-the-loop/interactive perception:

Another alternative approach is to have a human create a bounding box around the patient as well as the colostomy bag. We can use Segment Anything from Meta to identify features from the live feed of the camera and let the user select the colostomy bag. This will require a user interface that shows the live feed and allows selection in real time.
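A rough sketch of that click-to-segment step, assuming Meta's segment_anything package and a downloaded checkpoint; the click coordinates would come from the web interface described above:

```python
# Sketch: segment the colostomy bag from a user's click using Meta's
# segment_anything package. The checkpoint path and click coordinates
# are placeholders supplied by the (hypothetical) web interface.
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry['vit_b'](checkpoint='sam_vit_b_01ec64.pth')
predictor = SamPredictor(sam)

def segment_from_click(image_rgb, click_xy):
    predictor.set_image(image_rgb)            # embed the current camera frame
    masks, scores, _ = predictor.predict(
        point_coords=np.array([click_xy]),    # the user's click on the bag
        point_labels=np.array([1]))           # 1 = foreground point
    return masks[np.argmax(scores)]           # best-scoring mask for the bag
```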

Post 8:

Perception: The robot will need to be able to detect the patient, the colostomy bag, and the trashcan. The robot must detect the patient and bag very precisely so as not to injure the patient when removing the bag. Moderate sensitivity is necessary when detecting the trash can so as not to make a mess during disposal.


Manipulation: The robot will need to physically interact with the colostomy bag. 


Navigation: The robot will need to precisely position itself relative to the patient when removing the bag, then navigate towards and position itself relative to the trashcan for disposal. 


Interaction: The patient communicates their location to the robot for colostomy bag removal. The robot will indicate to the user that it is en route and when the task is completed.

Environment: We may need to modify the robot's arm so that it can successfully grab the colostomy bag. 


Post 7:


IMG_0431.mp4

This low-fidelity sketch shows the input the user has to control the movement of the Stretch robot's base. It also shows that our interface displays various angular and linear velocity information for the user.

Post 6:      


3873e6ad-11c3-464f-a2d9-7483015e97f8.MP4

Post 5:


481C Research Papers

Post 4:

Our team is composed of three members, each assigned roles for the quarter. Matthew Chung has been assigned the roles of ROS guru and User Interface guru. In these roles, Matthew will focus on a deeper understanding of ROS and the use of ROS tools. Additionally, Matthew will focus on the accessibility of the product, paying attention to how a user can feel comfortable using it.

Steven Lok has been assigned the roles of Hardware guru and Design and Fabrication guru. In these roles, Steven will be responsible for understanding the hardware of the robot, as well as related systems like 3D printing. Additionally, Steven will work to visualize ideas that the group may have and will be proficient in any 3D printing software tasks that arise.

Varad Dhat will be assigned the roles of Perception guru and User Research lead. These roles mean that Varad will be responsible for understanding the sensor data that the robot collects and how to use it. Additionally, Varad will focus on maintaining contact with our primary user throughout development and analyzing any data that comes from those conversations.

Weekly webpage updates will be completed through collaborative meetings as a team. We will also hold weekly team meetings where the decisions for that week can be discussed, conflicts between two members can be moderated by the third so that cooperation is maintained, and updates can be announced.


Some goals that we all hope to get out of the class are teamwork skills, ROS proficiency, deploying our code and getting it to run on hardware efficiently, and solving a good problem for people who may be overlooked by wider efforts. To ensure that there is no imbalance in the growth of team members throughout the quarter, we have decided that every team member will both have ROS installed on their personal computer and use the class desktop when working through the project. This ensures that no one is left behind, as team members will always be present for work sessions and will have the opportunity to ask questions of fellow members whenever there is a roadblock. We have created a Discord server that enables us to communicate remotely, so we will have avenues of communication whenever the project requires them.

Post 3: 

After completing the labs up to Tuesday with the express purpose of learning ROS, we found ourselves surprised by the depth that exists within ROS. There were many new concepts we had to learn from scratch, like ROS topics, and more familiar ideas we had encountered before, like ROS parameters. With all the different windows of information we had open, it felt like being in a mission control center, which was fun and exciting. What we found most challenging was setting up the environment. We struggled a lot with this, having to look up install commands from dubious sources on the internet due to unsatisfactory documentation. Getting from step to step when learning ROS was difficult when one step depended on a step five tutorials prior, and figuring out the issue could be unclear at times. We can see that we must be vigilant in our practice from now on, and we are hopeful about becoming more proficient with time.

Below are some pictures of the learning experience we had through various ROS tutorials.


Post 2: 

Sketch Solution

Storyboard

After creating the sketch and storyboard for the project, we have shown it to our target user, and gained feedback from them.

Feedback: 


 Post 1: 

Our group came up with various ideas, but we narrowed them down to three:

Idea 1: A robot should be able to pick up items off the ground, classify them as valuable or garbage, and then put them away accordingly.

Idea 2: Our robot will aid with the application of kinesiology tape for older persons or athletes who are experiencing back pain.

Idea 3: A robot will assist a patient who has undergone surgery for colon cancer in changing their colostomy bag.

So we have decided to work on the third idea.

Target Users: The primary user would be patients who have recently undergone colostomy surgery and are having difficulty changing their colostomy bags on their own. The target user group could be described as individuals who have had a colostomy due to a medical condition such as cancer, inflammatory bowel disease, or other gastrointestinal conditions. They may be recovering from surgery or undergoing ongoing treatment for their condition. The target user group may also have varying levels of mobility and may require different levels of support and assistance when it comes to managing their colostomy bags.

Problems faced by users: These challenges are faced mostly by elderly people throughout their treatment, but also by individuals who are managing a colostomy for the first time during the initial days. Some of the challenges are:

1) Physical discomfort: Changing a colostomy bag may cause physical discomfort due to the sensitive nature of the area. The stoma (the opening on the abdomen where the bag is attached) can be sensitive, and some people may experience pain or discomfort during the process.

2) Odor: Colostomy bags can produce unpleasant odors, which can make changing the bag an unpleasant experience for some people. They may need to take extra precautions to prevent the odor from becoming overwhelming.

3) Skin irritation: The adhesive on the colostomy bag can irritate the skin around the stoma, causing redness and irritation. This can make it uncomfortable to change the bag and may require additional care to prevent skin irritation.

4) Anxiety: Some individuals may feel anxious or embarrassed about changing their colostomy bag in public or around others, which can make the process more challenging.

Stretch-based Solution: A robot-based solution using the Stretch mobile manipulator may be able to assist persons with colostomies in overcoming some of their challenges, particularly those relating to restricted movement and the discomfort they feel when changing their colostomy bags. For the time being, we are working to develop a teleoperated (semi-autonomous) solution. Using both cameras and combining their data will enable the robot to arrive at the desired location accurately to remove the bag. With the aid of the app, a user only needs to tell the robot that it's time to change the bag. The patient will be asked to gently peel the bag off of their skin, because it would be difficult for the robot to remove it from the skin directly. As it is peeled off, the bag is held by the robot's (customized) gripper, preventing the robot from breaking the bag while pulling. Then the robotic arm will gently retract so that the bag comes off the patient's body.

Step 1: The patient will transmit instructions via an app to the robot, or possibly to a remote operator of the Stretch.

Step 2: Before the robot enters the room, we will map it out (using a SLAM algorithm) so that controlling the robot won't be a difficult chore for the operator.

Step 3: Using a depth camera, the robot can identify colostomy bags (via an object detection algorithm), and a camera mounted on the robotic manipulator will help us determine the exact location from which the gripper should grasp the bag. Although we intend to complete these tasks autonomously, for the purposes of prototyping we will maintain semi-autonomous control.

Step 4: The robot will carefully grip the bag and retract its robotic arm before safely tossing the bag in the trash can.

This is our rough plan and strategy; things may change as we run into more issues during trials.

We want to execute these tasks on a human mannequin for the first few trials, and we will provide videos of these tasks to patients and doctors for feedback, so that users are engaged in our project and our solution can meet the needs of all stakeholders and target users.