TRAN HA THU

BACKGROUND INFORMATION ON THE PROJECT

Our main task at A*STAR was to brainstorm ways robotic arms could aid us in our daily lives, in activities such as making a cup of coffee. When we came up with the idea of trash disposal with our robotic arm, which we ultimately based our project on, we were thinking about how abundant trash is in the world, and how most people are either unwilling to dispose of it properly or do so improperly without realising it, causing problems such as a lack of cleanliness and landfills filling up with items that could have been recycled.

AI tool WADE being used in the Let's Do It Foundation's project (Image Credit: https://www.letsdoitworld.org/wp-content/uploads/2019/06/detection4.jpg)

We were inspired by a 2018 project by the Let’s Do It Foundation, which incorporated a robot arm that used the AI tool WADE to identify and map waste much faster and more accurately than people can. The project won the 2018 UNESCO-Japan Prize on Education for Sustainable Development. We were therefore inspired to code a robot arm with a similar concept: to detect trash, pick it up and dispose of it.

ACTIVITIES DONE AND THE PROCESS

Our Workplace

The A*STAR SCEI office is located in the North Connexis Tower on Fusionopolis Way, near one-north MRT station.

When we first arrived, our mentors at A*STAR briefed us on the type of robot arm we would be using, our task, the coding languages required and how to get started. We first started by relearning Python 3, as our coding experience in Year 2 had been with a different version of Python and we had forgotten most of it. We also started researching the OpenCV module for Python, as we intended to incorporate computer vision into the robotic arm's task. Our mentors provided us with a few links to help us get started, but we were trusted to learn on our own without much supervision.


We also started brainstorming in the first week about what we wanted the robot arm to do. Initially, we thought of four different ideas. Firstly, we thought of getting the robot to draw. Then, we thought of using the robotic arm to mix ingredients and make cake batter. We also thought about using it to massage people, but our mentors felt that was unfeasible, as it was an industrial robot and thus might not be safe, so we eliminated that idea. Our last idea was to use it to pick up trash and throw it away. In the end, we settled on the trash idea, as we found it to be the perfect way to incorporate computer vision to detect the trash's position.


In the following week, we started working on our code. We first practiced with the basic code that enables the robot arm to operate. Once we were familiar with how the movement commands worked, we wrote a program that made the robot move along a path we specified. After this, we worked on a program that used computer vision to detect whether our object was trash and to determine its position so that the arm could pick it up. We did this using the OpenCV module for Python. We watched tutorials and searched GitHub for an object detection system that could work, and we settled on YOLO object detection. By the end of the second week, we were able to get the robot to detect the position of our trash (a bottle), pick it up and throw it away.
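The detection step looked roughly like the sketch below. This is only a minimal illustration of the approach, assuming the standard pretrained YOLOv3 files (yolov3.cfg, yolov3.weights, coco.names) are downloaded separately; the file names, camera index and confidence threshold here are assumptions for the sketch rather than our exact code.

```python
import cv2
import numpy as np

# Assumed file names: standard pretrained YOLOv3 files, downloaded separately.
net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
classes = open("coco.names").read().strip().split("\n")

cap = cv2.VideoCapture(0)               # the webcam mounted on the arm
ok, frame = cap.read()
h, w = frame.shape[:2]

# YOLO expects a fixed-size, normalised blob as input.
blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())

# Keep the most confident "bottle" detection and report its centre in pixels.
best = None
for output in outputs:
    for det in output:
        scores = det[5:]
        class_id = int(np.argmax(scores))
        confidence = float(scores[class_id])
        if confidence > 0.5 and classes[class_id] == "bottle":
            cx, cy = int(det[0] * w), int(det[1] * h)
            if best is None or confidence > best[0]:
                best = (confidence, cx, cy)

if best is not None:
    print("Bottle centre (pixels):", best[1], best[2])
```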


After that, we decided to collaborate with another group which was working on creating a neural network for object detection. We found a dataset online that had trash sorted into different categories, namely paper, plastic, glass, metal and other trash. We worked together to label all the images and then sent the labelled images to the other group so they could work on creating the object detection system for trash. This would allow us to detect more types of trash, not only bottles, as the YOLO object detection system we had been using covered only a limited set of classes. However, the problem we faced was that the program they created only classified the trash; it did not detect it, i.e. locate it in the image. Since we did not have enough time to create a trash detection algorithm, we decided to just use the YOLO object detection system.

This is an image of the arm we got to work with at the organisation. It can be controlled both via a physical control panel and via our own URScript code.
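Our own code drove the arm by sending it URScript commands. The fragment below is a hedged sketch of that idea rather than our actual program: it assumes the arm exposes its URScript interface over TCP (Universal Robots controllers commonly listen on port 30002), and the IP address and joint angles shown are placeholders.

```python
import socket
import time

ROBOT_IP = "192.168.1.10"   # placeholder address for the arm's controller
PORT = 30002                # secondary URScript interface on UR controllers

# A URScript command is just a line of text; movej takes joint angles in radians.
home_pose = "movej([0.0, -1.57, 1.57, -1.57, -1.57, 0.0], a=1.0, v=0.5)\n"

with socket.create_connection((ROBOT_IP, PORT)) as s:
    s.sendall(home_pose.encode("utf-8"))
    time.sleep(3)           # crude wait for the motion to finish
```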

Our final product

In the end, we were able to code a robotic arm with a mounted camera that detects a bottle's position using a grid system. The arm then moves to the object's position and picks it up, before moving to a fixed position over the bin to drop the object and dispose of it.
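In outline, the final program behaves like the sketch below: the detected grid cell is looked up in a table of pre-taught arm poses, and the arm then runs a fixed pick-and-drop sequence. The pose values, the send_urscript() callable and the gripper object are illustrative assumptions, not our exact code.

```python
# Hypothetical outline of the pick-and-drop cycle; poses and helpers are placeholders.

# One pre-taught joint pose (radians) per cell of the 4x4 grid, recorded by hand.
GRID_POSES = {
    (0, 0): [0.10, -1.40, 1.60, -1.80, -1.57, 0.0],
    # ... the remaining 15 cells are taught the same way ...
}
BIN_POSE = [1.20, -1.30, 1.50, -1.70, -1.57, 0.0]

def pick_and_drop(cell, send_urscript, gripper):
    """Move to the detected cell, grip the bottle, then release it over the bin."""
    send_urscript(f"movej({GRID_POSES[cell]}, a=1.0, v=0.5)\n")
    gripper.close()
    send_urscript(f"movej({BIN_POSE}, a=1.0, v=0.5)\n")
    gripper.open()
```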

Demonstration of the trash-clearing robotic arm

3 skills/content knowledge learnt:

1. I learnt about using OpenCV for computer vision. OpenCV is a computer vision library available for Python, and it enabled my group to detect the position of an object as well as classify it. We also researched the YOLO object detection system, which detects and classifies objects using a neural network trained on a limited set of classes. In addition to this, we learnt about URScript, the scripting language used to control the robot's movement.

2. I learnt more about Python 3. Although I had experience with Python before, both in school in 2017 and on my own, it was with a different version, Python 2.7. Both versions are part of the same language, but they have minor differences; for example, the function to request user input in Python 2.7 is raw_input(), while in Python 3 it is input() (see the small example after this list). It had also been a while since I last used the language, so I revised it.

3. I learnt how to work independently. As our mentors had their own projects at their jobs, we were trusted to learn and research independently, while presenting short reports of our work every week. Our mentors guided us, for example by suggesting ways we could do certain things with the robot and providing links for us to learn on our own. However, all the learning had to be done independently. I feel that this will help me in the future, as it enables me to become more productive.
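As a small illustration of the difference mentioned in point 2, the same prompt would be written differently in the two versions:

```python
# Python 2.7 (shown as comments, since this file runs under Python 3):
# name = raw_input("Enter your name: ")
# print "Hello, " + name

# Python 3:
name = input("Enter your name: ")
print("Hello, " + name)
```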

interesting aspects of my learning

1. An interesting, realistic aspect of the job is how long it takes to perfect a piece of code. Even though we planned to code the robot arm to do a very simple task, it took us quite a while to make it work properly. This can sometimes be very frustrating, as debugging takes a lot of time and effort, both in researching the problem and in looking through every single line.

2. Another interesting aspect is how the job requires us to be resourceful and improvise. When I initially came here, I thought that this was not very important, as I assumed the only problem we had to worry about was the programming of the robot. However, we faced a few other problems along the way. One example is the camera for the object detection. The camera screwed onto the robot arm could not be used, as it required special drivers and a complicated setup that we were unable to perform. Thus, we had to improvise by taping a Logitech webcam to the arm. Another example is the position detection of the object. Initially, we wanted to use ArUco markers to detect the position of the object. However, most of the tutorials online were for a different language (e.g. C++), and the open-source code was either unreliable or unsuitable for our project. As such, we decided to use a 4x4 grid system instead: the camera detects which box of the grid the object is in and uses that as its position, as sketched below.
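The grid idea from point 2 can be sketched as a simple mapping from the detected pixel centre to one of the 16 cells. The frame size in the example and the (row, col) indexing convention are assumptions for illustration, not the exact values we used.

```python
def pixel_to_grid_cell(cx, cy, frame_w, frame_h, n=4):
    """Map a detected object's pixel centre to a (row, col) cell of an n x n grid."""
    col = min(int(cx / (frame_w / n)), n - 1)
    row = min(int(cy / (frame_h / n)), n - 1)
    return row, col

# Example: a bottle centred at pixel (500, 120) in a 640x480 frame falls in cell (1, 3).
print(pixel_to_grid_cell(500, 120, 640, 480))
```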

1 takeaway for life

One takeaway from this experience is learning about a healthy working environment and healthy work habits. At A*STAR, the employees were extremely keen to learn from each other. They would also invite speakers to give presentations to the entire office and would ask the speakers questions in order to learn. For example, during my WOW! experience, the organization invited Professor Jianxi Luo from SUTD to present on innovation to the employees. The employees were encouraged to bring their lunch along to enjoy while listening, and they were extremely attentive throughout the presentation. A few of them also asked the professor further questions on his presentation and on innovation. From this, I learnt that good working habits involve a willingness to learn, and that a good working environment is one that allows everyone to learn from each other. As not all workplaces host this kind of activity for their staff, this is a learning experience that is unique to this organization.