CSE 481C - Team Big Stretch 🥱
Spring 2023 Robotics Capstone Project
Donovan Kong - Ashley Mead - Leah Robison - Sol Zamora
Blog Post #21
*(embedded file)*
Blog Post #20
While we feel that teamwork generally went well, there were some fairly dramatic shifts from the roles we laid out in post #4. Donovan & Ashley kept their roles as managers of the team, but we found that a hardware/fabrication guru was unnecessary, so we shifted the other roles around significantly. The new roles were the following:
Donovan: UI guru, manager
Ashley: Navigation guru, Research lead, ROS guru, manager
Sol: Perception guru
Leah: Documentation & Communication
Other than that shift in plans, we generally followed the guidelines we set for ourselves in post #4: we all learned ROS to some degree, we all pitched ideas about our approach to implementing the robot, we did some peer review/pair programming during implementation, and we all worked on other parts of the robot as needed to meet deadlines. Alongside our weekly meetings, we actively communicated remotely (through Discord) so that we could arrange times to meet outside of the assigned capstone time and keep everybody on the same page about the robot's progress.
What worked well as a team was that our combined technical expertise let us implement parts of the robot efficiently and work through the tutorials at a good pace, and that we communicated well and kept everyone "in the loop" on progress. What didn't go as well as we had hoped was meeting in person, working on the robot codebase remotely on our own time, and, more generally, meeting deadlines. Towards the end we were not well organized and had to rush to finish the implementation of our robot.
Blog Post #19
Group
a) Making our own programs to move the robot, like the programming-by-demonstration tool and the MVP, was the most fun.
b) The code examples from the tutorials and the TAs' help were useful.
c) The extended periods of debugging at the beginning were not the best; we would have liked the system to be set up a little better so that there would be fewer system-related bugs (e.g., wrong version, package not installed).
d) More code examples/snippets to work from and better tutorials would have helped us get on track faster.
Donovan
a) The part that was most fun for me was being able to teleoperate the robot, as well as making some of the tools like our navigation stack to send our robot to a series of points.
b) The TAs were extremely helpful throughout the entire course.
c) Some of the tutorials were not very useful, as they felt incomplete and vague at times.
d) More code examples and/or snippets would have been extremely useful, as well as more detailed documentation particularly for rospy.
Ashley
a) I especially enjoyed teleoperating the robot (with the Xbox controller) and writing our own programs that made the robot move autonomously!
b) I found the code examples in the tutorials helpful for learning how to apply them to our use case. The TAs were very helpful in getting us unstuck.
c) The errors in the tutorials themselves, like the errors in the code examples, weren't helpful.
d) I think more detailed instructions as to what tools are intended for us to use for certain assignments would be helpful.
Leah
a) I thought it was fun to interact with the robot and create video presentations for our project. I also enjoyed coming up with ideas for different tasks for Stretch and designing how to implement them.
b) I felt that the various examples and code tutorials were helpful. The TAs were also very helpful.
c) We ran into some setup issues that we did not cause, which took up some time.
d) It would have been helpful to have more information specific to our project, like more directly useful code examples.
Blog Post #18
Blog Post #17
a) Ashley and Donovan started implementing our navigation solution, which involves mapping out the room and writing ROS code to send messages that move the base between points on our map. We plan on manually setting pre-determined points on our map that represent the locations of the target user and the medicine that the target user may want. Evidence for this is in our github repo: https://github.com/kongdonovan/cse481c-teleop-interface
b) Sol and Donovan worked on the programming-by-demonstration tool. They worked on storing and reading position values from a JSON file and then moving the robot to those positions. Evidence for this is in post #11.
c) Ashley worked on the ROS Diagram. Evidence for this is in post #14.
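As a sketch of the pre-determined-points idea from (a), the waypoint table and goal ordering might look like the following. The names and coordinates here are made up for illustration; the real goals are poses on our ROS map.

```python
# Hypothetical pre-determined map points for the user and each medicine.
# Coordinates are invented for illustration.
WAYPOINTS = {
    "user": (1.0, 2.0),
    "ibuprofen": (4.5, 0.5),
    "acetaminophen": (4.5, 1.5),
}

def delivery_route(medicine):
    """Return the ordered (name, (x, y)) goals for one delivery:
    first drive to the requested medicine, then back to the user."""
    if medicine not in WAYPOINTS:
        raise KeyError(f"no stored waypoint for {medicine!r}")
    return [(medicine, WAYPOINTS[medicine]), ("user", WAYPOINTS["user"])]
```

Each `(x, y)` pair would in practice become a goal sent to the base; the helper just pins down the ordering.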
Blog Post #16
Project Proposal:
*(embedded file: project proposal)*
Blog Post #15
Positives
Economy:
Can increase productivity, particularly in assembly lines
Can take away menial tasks from humans, allowing them to focus on more challenging (and potentially profitable) work
Society:
Can allow for humans to do less work, allowing for emphasis on the creative arts and other cultural endeavors
Environment:
As a result of being able to perform menial tasks, robots can work with higher efficiency in environmentally conscious settings (like recycling centers), by extension benefiting the environment
Negatives
Economy:
Robots have the potential to take away jobs from humans, resulting in higher unemployment especially for unskilled labor
Robots are rigid, which means they can't necessarily do complex tasks (yet)
Society:
For assistive robots, some people may in certain instances have more contact with them than with real humans, so special considerations must be kept in mind to ensure that there is no undue hardship due to lack of human contact
Environment:
In the event that robots break, they may not be easy to recycle, which (potentially) leads to more waste
Extending the idea that a robot is rigid: if a robot does not know what to do, it could potentially do things that harm the environment
Blog Post #14
Communication:
The User Interface will communicate with the rosbridge_websocket using roslibjs
The rosbridge_websocket will communicate with other nodes by publishing or subscribing to topics they use
The rest of the nodes will communicate with each other by publishing or subscribing to topics they use
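For concreteness, here is a hedged sketch of the JSON frames that travel over the rosbridge_websocket when the UI publishes a message. The topic name and payload are made up for illustration; in the browser, roslibjs builds equivalent frames for us.

```python
import json

def advertise_and_publish(topic, msg_type, msg):
    """Build the rosbridge v2 protocol JSON frames a web client sends
    to publish one message: an 'advertise' frame, then a 'publish' frame."""
    return [
        json.dumps({"op": "advertise", "topic": topic, "type": msg_type}),
        json.dumps({"op": "publish", "topic": topic, "msg": msg}),
    ]

# Hypothetical topic for the UI to request a medicine by name.
frames = advertise_and_publish(
    "/requested_medicine", "std_msgs/String", {"data": "ibuprofen"}
)
```

The rosbridge node decodes these frames and republishes the message on the ROS topic, where our other nodes subscribe to it.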
Stretch Goals:
Saving a user-specified user_pose and bringing the medicine bottle back to the user's pose (rather than using a pre-programmed user pose)
Use an ArUco marker to align to the medicine bottle
Placing the medicine bottle in the user's hand instead of on a table
Delivering a water bottle after it delivers the medicine
MVP functionality:
Everything except user_pose in the diagram is MVP functionality
Stretch should be able to receive the client's requested medicine, pick up the requested medicine, then deliver the medicine bottle to the user.
Already Developed:
Stretch_2 camera
Todo:
User Interface: Donovan
rosbridge_websocket: Donovan
navigation_node: Ashley
pick_up_medicine: Sol and Leah
drop_off_medicine: Sol and Leah
Blog Post #13
*(embedded image)*
When it comes to the navigation capabilities of our robot, we broadly have three things we want the robot to do: Navigate to the medicine, pick up the medicine, and navigate to the user with the medicine (and drop the medicine). We likely will be using global mapping and planning for the bulk of our robot's navigation, since we need to know broadly where the medicine is and how to get from the medicine to the end user (which will be pointed out on the global map). Once we get to where the medicine is, we may need to consider using servoing in order to handle the fine movements associated with needing to grasp a medicine bottle.
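As a toy illustration of the split between global planning and servoing, a distance threshold could decide which mode is active. The threshold value below is an assumption, not something we have tuned on the robot.

```python
import math

SERVO_RADIUS = 0.5  # metres; made-up threshold for switching modes

def navigation_mode(robot_xy, goal_xy):
    """Pick coarse global planning while far from the goal, and fine
    visual servoing once the robot is close enough to line up a grasp."""
    dist = math.dist(robot_xy, goal_xy)
    return "servo" if dist <= SERVO_RADIUS else "global_plan"
```

In the real system, "global_plan" would correspond to sending the base a map goal, and "servo" to small corrective motions driven by the camera.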
Blog Post #12
Technology to Support Aging in Place: Older Adults' Perspective, by Shengzhi Wang, Khalisa Bolling, Wenlin Mao, Jennifer Reichstadt, Dilip Jeste, Ho-Cheol Kim, and Camille Nebeker. Published in MDPI Open Access Journals in 2019, https://www.mdpi.com/2227-9032/7/2/60.
The study focuses on improving the design process for producing assisted living products that can help older adults. The researchers studied adults living in senior housing communities and recorded their responses to various technological terms and concepts. By the end, they found that most seniors had low technological literacy, valued their privacy, and would be willing to co-design technologies moving forward.
Development of a hospital service robot for transporting task, by W.K. Fung, Y.Y. Leung, M.K. Chow, Y.H. Liu, Y. Xu, W. Chan, T.W. Law, S.K. Tso, C.Y. Wang. Published in IEEE in 2004, https://ieeexplore.ieee.org/abstract/document/1285647.
The study looks to improve the way in which hospital transportation robots traverse through clinics. The researchers argue that in many cases it becomes a health hazard to place landmarks throughout hospital hallways, and that as such they can't be used for developing hospital robots that require landmarks to localize themselves. Hence, the researchers developed a new hospital robot that is able to localize and orient itself within a hospital by using the fluorescent lights in the ceiling as natural landmarks. This study proved to be very successful, with the robot being actively used in a hospital starting in 2001, and continuing to operate there until the study was published in 2004.
An overview of technology-assisted nursing care in Elderly Care, by Ekta Ghimire. Published in Arcada in 2022, https://www.theseus.fi/bitstream/handle/10024/753939/Ghimire_Ekta.pdf?sequence=2&isAllowed=y
This thesis project studies the advantages and limitations of using robots to assist nursing in elderly health care. The method used was a literature review of ten articles. The author found that elderly people like having a humanoid robot to entertain them, to assist them with daily activities including self-care, or to fetch data from the internet. However, robots are expensive and slow-processing, and can only do simple tasks. Additionally, elderly people tend to find the robots interesting initially, but then they lose interest over time.
Preventing Medication Errors in Hospitals through a Systems Approach and Technological Innovation: A Prescription for 2010, by Jacquelyn and Frederick Crane. Published in Hospital Topics in 2010, https://drive.google.com/file/d/1j8KqYnxqIiRbgh0ynwJes6Ww6djXMJ05/view?usp=sharing
This paper discusses medical errors made in hospital settings and potential solutions. The method used is a review of other research papers. The authors say that medical errors are not related to incompetent healthcare providers; rather, they are related to issues in the healthcare system. Specifically, they said that patients' medical information is scattered across different providers in different locations, medical orders are handwritten and often misunderstood or not followed, and, most relevantly to our project, most medication errors occur during drug ordering. One solution they suggest is failure mode effects analysis (FMEA), a method of identifying errors and proposing solutions. They also suggest integrating electronic medical records as well as decision support systems. Decision support systems allow for improved communication with clinicians, access to medical knowledge, technique monitoring, automated calculations, patient information sharing, and error tracking and reporting. Another solution they suggest is bar coding technology: a nurse can scan the barcode on the medication and then scan the barcode on the patient's wrist band to ensure that the patient is getting the correct medicine in the correct dosage. Lastly, they talk about automated dispensing machines (ADMs) and robotic dispensing technology (the Robot-Rx system). A robot would scan bar-coded medications and package them in ADMs for delivery to patients.
Designing a Social Robot to Assist in Medication Sorting, by Jason R. Wilson, Linda Tickle-Degnen, and Matthias Scheutz. Published in Tufts University in 2016, https://hrilab.tufts.edu/publications/wilson2016icsr.pdf
This paper discusses an experiment involving a robot for assisting people who have Parkinson’s disease with sorting medication. In the experiment, people were asked to perform the sorting task with the robot and make two specific errors, then rate their experience with the robot under different categories. The robot was able to detect incorrectly sorted pills and provide appropriate feedback. The participants had a positive emotional experience with the robot but felt that the robot was weak in providing physical support.
A list of household objects for robotic retrieval prioritized by people with ALS, by Young Sang Choi, Travis Deyle, Tiffany Chen, Jonathan D. Glass, and Charles C. Kemp. Published by IEEE, 2009. https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=5209484
This paper summarizes the findings of a study that the researchers performed regarding the types of items that ALS patients tended to need to get most. The study involved having caregivers and patients photographing objects that they could not reach, and conducting interviews along the way to gauge the types of objects that they were dealing with. The study concluded that prescription bottles were the 4th most pertinent item for possible robotic retrieval for people with ALS.
Developing a mobile robot for transport applications in the hospital domain, by Masaki Takahashi, Takafumi Suzuki, Hideo Shitamoto, Toshiki Moriguchi, and Kazuo Yoshida. Published by Robotics & Autonomous Systems, 2010. https://www.sciencedirect.com/science/article/pii/S0921889010000680
This paper discusses the researchers' efforts in designing a robot that could transport goods in a hospital. They went into the challenges they had to work with, including object detection, collision/person avoidance, and handling the uncertainty that comes with moving things in a hospital. Using several methods of object detection and collision avoidance (detailed in the paper), they found that the robot could avoid static and dynamic obstacles while still accomplishing the goal of getting from point A to point B.
Combining deep learning for visuomotor coordination with object identification to realize a high-level interface for robot object-picking, by Manfred Eppe, Matthias Kerzel, Sascha Griffiths, Hwei Geok Ng, and Stefan Wermter. Published by IEEE, 2017. https://ieeexplore.ieee.org/abstract/document/8246935
This paper goes over a study that looks into how deep learning with convolutional neural networks can be used for object detection, and how this system was deployed on a robot for certain grasping tasks. This process involved isolating an object, removing “distractions” around the object in the eyes of the CNN, and generating a grasp trajectory in order to accurately be able to grasp the object. The researchers conclude by showing that this system works with varying degrees of success depending on the type of object, with an overall success rate of over 70%.
Beer, J. M., Smarr, C. A., Chen, T. L., Prakash, A., Mitzner, T. L., Kemp, C. C., & Rogers, W. A. (2012, March). The domesticated robot: design guidelines for assisting older adults to age in place. In Proceedings of the seventh annual ACM/IEEE international conference on Human-Robot Interaction (pp. 335-342). https://dl.acm.org/doi/pdf/10.1145/2157689.2157806
The researchers conducted a questionnaire on how open older adults are to having an assistive robot in their home and their opinions surrounding the idea, so that robots could be designed to meet their needs. The researchers looked into what tasks older adults would like to be assisted with and why they have those preferences. The older adults said they benefited from robot functionality that "[compensates] for limitations, [saves] them time and effort, [completes] undesirable tasks, and [performs] tasks at a high level of performance." They expressed concern for the robot damaging the environment, being unreliable or incapable of doing the task, doing tasks that the older adult would rather do themselves, and the robot taking up too much storage space. One design consideration the paper suggested was the ability for the user to customize the robot to perform the task in a certain way.
The Role of Healthcare Robots for Older People at Home: A Review, by Hayley Robinson, Bruce Macdonald, and Elizabeth Broadbent. Published by the International Journal of Social Robotics, 2014. https://www.researchgate.net/publication/271661264_The_Role_of_Healthcare_Robots_for_Older_People_at_Home_A_Review
This study was conducted to evaluate the current home care system for older people, specifically how robots are contributing, in hopes of delaying nursing home admission. Various robots currently being used for home care were studied, comparing their physical appearance, like shape and height, and their functionality. It was concluded that most robots are being used as solutions to problems rather than as a way to prevent the problems from occurring. While this is helpful, it would be beneficial to develop more robots that can educate older people on possible issues and how to prevent them.
Home care professionals' experiences of successful implementation, use and competence needs of robot for medication management in Finland, by Ritta Turjamaa, Mojtaba Vaismoradi, Satu Kajander-Unkuri, and Mari Kangasniemi. Published in Nursing Open in 2022. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10006617/
This journal investigated what home care professionals think about using robots to manage medication for older people. 62 professionals working in older people's homes were interviewed on their experiences and views. The participants stated that medication robots are helpful but should be steadily introduced to the home before they begin giving medicine. Until the technology becomes more developed, professionals would also need to oversee the robots to make sure that they are working correctly.
Blog Post #11
Below is our video for the programming-by-demonstration tool (the video shows both programming the robot and sending the robot to a specific pose). We chose to utilize a command-line interface since we believed it would be easier to interface with for the purposes of our project. In the video, we show the robot in its "retracted" state and its "extended" state, which is meant to simulate the robot with its arm retracted while transporting the medication and its arm extended in order to pick up and drop off medicine. This fits into our project because we need the robot to be able to safely transport the medicine that the user requests and to pick up the medicine that the user requests.
*(embedded videos: programming-by-demonstration demo)*
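As a rough sketch of how the tool stores and reads poses, the following uses plain JSON on disk. The joint names and values are hypothetical; the real tool stores whichever joints the Stretch body API exposes.

```python
import json

def save_pose(path, name, joints):
    """Record a named pose (mapping of joint name -> value) into a JSON
    file, creating the file on first use."""
    try:
        with open(path) as f:
            poses = json.load(f)
    except FileNotFoundError:
        poses = {}
    poses[name] = joints
    with open(path, "w") as f:
        json.dump(poses, f, indent=2)

def load_pose(path, name):
    """Read back one named pose so the robot can be driven to it."""
    with open(path) as f:
        return json.load(f)[name]
```

For example, a "retracted" pose would be saved while transporting medicine and an "extended" pose for pick-up and drop-off, matching the two states shown in the video.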
Blog Post #10
For our project, we needed a way to differentiate between medicine bottles to ensure that we do not deliver the wrong medicine to our end user. To that end, we used ArUco markers with specific IDs corresponding to different medicines and implemented functionality on the Stretch to differentiate between markers. The screenshot to the left shows how the robot is able to see a custom marker we created.
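A minimal sketch of the ID-to-medicine lookup; the marker IDs and medicine names below are made up for illustration.

```python
# Hypothetical assignment of ArUco marker IDs to medicines.
MARKER_TO_MEDICINE = {130: "ibuprofen", 131: "acetaminophen"}

def identify_medicines(detected_ids):
    """Return the known medicines among the marker IDs seen in a frame,
    ignoring IDs we never assigned (e.g. markers on the robot's body)."""
    return [MARKER_TO_MEDICINE[i] for i in detected_ids
            if i in MARKER_TO_MEDICINE]
```

The `detected_ids` list would come from the marker-detection node; this lookup is what turns "I see marker 131" into "I see the acetaminophen bottle."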
Blog Post #9
We will address the required technical capabilities in the following manners:
Detecting objects to fetch
The target solution is for the robot to process the camera feed and be able to differentiate the correct medicine container from other objects—we could use the OpenCV python library to achieve this. We'll also have to train the robot to distinguish text on medicine bottles, so that it grabs the correct type of medicine requested.
Alternatively, we can attach QR codes to the medicine containers to distinguish them both from other objects and from each other. This could also be achieved with the OpenCV library, as it comes with functionality for detecting QR codes.
Should all else fail, we can allow the users to take manual control of the robot to pick up the medicine via teleoperation.
Detecting the user to bring the item to
The target solution is for the robot to detect the user's face. To achieve this, we can also use the OpenCV library: it includes functionality for this.
Alternatively, the user could wear a patch on their shirt containing an ArUco Marker for the robot to detect.
Lastly, the user could assist the robot by creating a bounding box around the ArUco Marker they are wearing. In order to do this, the user would need a device which receives the live feed from the robot's camera and which transmits feedback to the robot.
Detecting objects to avoid/navigate around
The target solution is for the robot to detect depth and corners such that the robot can navigate around an environment without colliding into other objects. The OpenCV library includes functionality for Harris Corner detection, which we can use to accomplish this. We may need to train the robot with images containing standard household walls/furniture to account for edge cases where the corner detection fails.
Alternatively, we can place ArUco markers on corners to help the robot see which areas to avoid. In practice, this would work as follows: ArUco markers are placed on the corners of a box; the robot sees these markers and draws lines connecting them; these lines define the boundaries of the area accessible to the robot, so the robot knows not to cross them; thus, the robot avoids colliding with the box.
Lastly, we can allow the user to manually teleoperate the robot and drive it to and from the medicine cabinet. This would require the user to have access to a teleoperating interface, as well as live feed from the robot's camera, in order to manipulate the robot from any location.
Blog Post #8
The following are technical capabilities and environmental modifications our system will need.
Perception:
Objects that Stretch can fetch - medium precision when approaching, high precision when grabbing
QR codes that will be on fetchable objects - high precision when reading the QR code
The user to bring the fetched object to - high precision when approaching and giving the object
Walls/furniture to navigate around to get to the object and back to the user - medium precision while moving to avoid collisions
Manipulation:
Stretch will pick up the correct object to fetch
Stretch will let go of the object when the user takes it
Navigation:
Stretch will navigate to the fetchable object area designated by the user
Stretch will position itself in front of the user after fetching an object
Environment:
The user will specify an area, within Stretch's reach and always navigable, where fetchable objects will be kept
Objects will need to be able to be picked up by Stretch to be fetchable
QR code stickers will be placed on fetchable objects for Stretch to identify
Fetchable objects will be placed far enough apart for Stretch to grab without knocking over others
Blog Post #7
For our web interface, we wanted to incorporate full movement functionality on the robot. This involves moving the robot forward/backwards, turning it, raising/lowering/extending/retracting the arm, and having control of the gripper. On our web interface, we have a small text box indicating whether the robot is connected or not, and numerous buttons that correspond to movements on the robot. A picture of our envisioned interface is linked below.
Videos of our web interface controlling our robot are linked below (they may be out of sync because I had to record them separately on my phone and on the computer, but trust me it works :) also don't worry about the errors -- things still work somehow). Our approach to making the web interface involved writing a REST API that we'd load and run onto the robot in order to invoke functions from the stretch body API. We'd then call each API endpoint on every button click in order to make the robot move.
*(embedded videos: web interface controlling the robot)*
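For reference, here is a toy sketch of the endpoint-to-command dispatch idea behind the web interface. The endpoint names and return strings are made up for illustration; the real server runs on the robot and invokes the stretch_body API.

```python
# Registry mapping URL paths (one per UI button) to robot commands.
COMMANDS = {}

def endpoint(path):
    """Decorator that registers a function as the handler for a path."""
    def register(fn):
        COMMANDS[path] = fn
        return fn
    return register

@endpoint("/move_forward")
def move_forward():
    return "base translated forward"  # stand-in for a stretch_body call

@endpoint("/lift_arm")
def lift_arm():
    return "lift raised"  # stand-in for a stretch_body call

def handle(path):
    """Dispatch one button click's request; return (status, body)."""
    if path not in COMMANDS:
        return 404, "unknown endpoint"
    return 200, COMMANDS[path]()
```

Each button click in the browser hits one endpoint, and the handler translates it into a single robot motion.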
Blog Post #6
Video Prototype:
*(embedded video)*
Ashley's grandmother had an overall positive reaction and said, "Great job, robot!" for fetching the correct medicine.
She asked if the robot could lend her its arm so she can hold on to it to steady herself when she gets up from a chair.
She also asked if it could pick up her walking cane and suggested that the cane could be wrapped in a non-slippery sleeve so that the robot can grip it better.
She also asked if it could help her find her keys and suggested tagging the keys with something that sends a signal to the robot while it searches.
Blog Post #5
We started a literature review on topics that we found relevant to either the implementation, feasibility or demand for our product. Below is a set of research papers we found. To recap, the goal of our project is to create a mobile manipulator that is able to pick up specific medication(s) for a target user on demand depending on what the user wants (e.g. they can choose to have the robot pick up a specific medication), and deliver that specified medication to the end-user.
Blog Post #4
For our project, we chose to distribute roles accordingly:
Donovan: Fabrication guru, hardware guru, manager
Ashley: User Research lead, perception guru, manager
Leah: UI guru, ROS guru
Sol: Documentation & Communications
Concretely, we agreed that everyone should be responsible for learning ROS generally, with everyone spending extra time diving into their specific areas of expertise depending on the roles they have. Throughout the quarter, everyone is going to be responsible for pitching implementation ideas based on their areas of expertise, as well as contributing to other areas of the project based on need. To ensure everyone takes away something from the capstone, we're requiring that everyone learn ROS basics, and we'll also do some peer review on code for major portions in order to ensure that everyone understands what individual components of our project are doing. We plan on having weekly meetings to go over our work as well as specific goals for that week in order to make sure we're all on the same page.
Blog Post #3
Here are some screenshots of our work getting through the ROS tutorials. We struggled a bit getting everything installed properly, but what gave us the most headache was keeping track of which terminal was doing what.
Blog Post #2
The following are our preliminary designs for our Stretch medicine delivery task.
Sketch
Storyboard
We showed the sketch and storyboard to Ashley's grandmother, whose feedback was mostly positive. She added that it would be nice if Stretch could also bring snacks, for example crackers, while bringing medication to the user.
Blog Post #1
User: Elderly people with limited mobility who want to age in place.
Challenges user faces: Difficulty moving to retrieve and take medicine. This can be due to back problems or balance issues, for example.
Stretch Solution: The Stretch manipulator can identify the appropriate medication from a shelf-like storage location (e.g. medicine cabinet) and bring it to a table near where the client is sitting.
A client we can get feedback from is Ashley’s grandmother, who has limited mobility and is taking medication for her memory and back issues, along with other medical complications.