Our WOW! attachment has finally come to an end! We were assigned 2 projects, both aimed at developing and training AI systems to detect garbage on street pavements. This allows a vacuum robot (either the Panthera or the Htetro) to clean the pavements and help ensure a clean and green Singapore. We were given about 4 weeks to work on both of these projects. During these 4 weeks, we went to SUTD to discuss ideas, conduct our research, obtain datasets and train our AI systems. We worked with mentors from SUTD, including Dr Mohan and Mr Balakrishnan.
Project 1, PTAWaste, involved using a machine learning approach to improve the Panthera, a self-reconfiguring autonomous pavement sweeper. We had to train AI models to classify common waste items found along park connectors, and also write a journal discussing our research methods and the software we used. Project 2, HTETrash, similarly involved using machine learning to improve the Htetro, a self-reconfiguring autonomous floor cleaner. Here, we had to train neural networks to classify common trash items found on hawker centre floors, and again write a journal discussing our research methods and software.
The actual robot had already been designed and was about 90% built by the engineers at SUTD prior to our WOW attachment; we just had to train the robot's AI system/neural network. This is the inner skeleton of the Panthera robot.
These are some of the pictures/data we collected to label. The "SLOW" sign was labelled as "No Pick Up", while the chips on the floor in the other picture were labelled as "Pick Up".
We had 4 main tasks: collecting data (by taking pictures of garbage on pavements), labelling the images, training the AI, and writing a conference paper. We started off by brainstorming what objects could be picked up by the Panthera robot. We needed to know the size of the vacuum opening so that we could determine the maximum size of garbage that could be picked up. Once we had determined that, we classified garbage commonly found on pavements into 2 categories: Pick Up and No Pick Up.
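The size rule above can be sketched as a small function. The 10 cm opening used here is an assumed value for illustration only; the real dimension came from the Panthera's specifications.

```python
# Sketch of the "Pick Up" vs "No Pick Up" size rule.
# VACUUM_OPENING_CM is an assumed value, not the robot's real dimension.
VACUUM_OPENING_CM = 10

def classify_by_size(longest_side_cm: float) -> str:
    """Return the category an item of this size would fall into."""
    if longest_side_cm <= VACUUM_OPENING_CM:
        return "Pick Up"
    return "No Pick Up"

print(classify_by_size(5))   # a plastic straw -> Pick Up
print(classify_by_size(40))  # a large branch  -> No Pick Up
```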
The next step was to take pictures to gather data. We brought commonly found garbage such as plastic straws, wrappers and drink cans from our own homes and placed them on a pavement to photograph. We also had to take these pictures under different conditions, such as during the day and at night, to make the results more accurate. For garbage that could not be picked up by the robot, we chose larger items such as big branches. We also had to account for items that are not garbage but that the robot might detect as garbage, such as pavement signs and (parts of) bicycles, so we photographed those and labelled them as "No Pick Up" as well. For the training of the dataset to be effective, we needed around 1000 images in each category.
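A quick check like the following could tell us whether each category had hit the roughly 1000-image target; the folder names and layout here are assumptions, not our actual directory structure.

```python
# Sketch of checking a dataset folder against the ~1000 images per
# category target. Folder names ("dataset/pick_up" etc.) are assumed.
from pathlib import Path

TARGET_PER_CATEGORY = 1000
IMAGE_EXTENSIONS = {".jpg", ".jpeg", ".png"}

def count_images(category_dir: Path) -> int:
    """Count image files directly inside one category folder."""
    if not category_dir.is_dir():
        return 0
    return sum(1 for p in category_dir.iterdir()
               if p.suffix.lower() in IMAGE_EXTENSIONS)

for category in ("pick_up", "no_pick_up"):
    n = count_images(Path("dataset") / category)
    status = "ok" if n >= TARGET_PER_CATEGORY else "need more images"
    print(f"{category}: {n} images ({status})")
```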
After collecting our images, we had to label them into the 2 categories mentioned above. To do this, we used a Python tool called LabelImg, which let us label the objects in each image by drawing a bounding box around them. We had to repeat this process for all 2000 images we had collected.
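By default, LabelImg saves each image's boxes as a Pascal VOC XML file. A minimal sketch of reading one such annotation back (the sample XML string below stands in for a real saved file, with a made-up filename and coordinates):

```python
# Sketch of parsing one LabelImg annotation (Pascal VOC XML format).
# SAMPLE_XML is a made-up stand-in for a file LabelImg would save.
import xml.etree.ElementTree as ET

SAMPLE_XML = """<annotation>
  <filename>chips_day_001.jpg</filename>
  <object>
    <name>Pick Up</name>
    <bndbox><xmin>120</xmin><ymin>200</ymin><xmax>260</xmax><ymax>310</ymax></bndbox>
  </object>
</annotation>"""

def read_boxes(xml_text: str):
    """Return (label, (xmin, ymin, xmax, ymax)) for each labelled object."""
    root = ET.fromstring(xml_text)
    boxes = []
    for obj in root.iter("object"):
        label = obj.findtext("name")
        bndbox = obj.find("bndbox")
        coords = tuple(int(bndbox.findtext(tag))
                       for tag in ("xmin", "ymin", "xmax", "ymax"))
        boxes.append((label, coords))
    return boxes

print(read_boxes(SAMPLE_XML))  # [('Pick Up', (120, 200, 260, 310))]
```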
Once the labelling of the images was done, we fed the dataset into the training algorithm to train the neural network, a process that could take up to a day. After the training was completed, we tested the network with new images to check whether it recognised the types of garbage accurately. In our case, we did not see the results we expected on the first try: the accuracy was very low, so we had to make some changes. We attributed the low accuracy to our images and dataset, as we thought the training did not work properly because we had not used images of authentic garbage found on pavements. We also felt we needed more images to increase the accuracy of our robot. Thus, we repeated the whole process of data collection, labelling and training. On our second try, we received much better results, with accuracy rates going as high as 98%.
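The accuracy figure we kept checking is just the fraction of test images whose predicted category matched the label, which can be sketched as follows (the predicted and actual labels here are made-up stand-ins, not our real test results):

```python
# Sketch of the accuracy check run after each training round.
# The example labels below are illustrative, not real results.
def accuracy(predicted, actual):
    """Fraction of test images whose predicted category matches the label."""
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)

predicted = ["Pick Up", "No Pick Up", "Pick Up", "Pick Up"]
actual    = ["Pick Up", "No Pick Up", "No Pick Up", "Pick Up"]
print(f"accuracy: {accuracy(predicted, actual):.0%}")  # accuracy: 75%
```

A low value on held-out images sent us back to collecting and relabelling data; a value near 98% was what led us to accept the second model.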
Since our results were good, our mentors approved this final AI model for use in their robots. We then started on our conference papers, detailing our methodology. This entire process had to be done for each of the 2 robots (Panthera and Htetro).
Summary of process:
1. Collect data (photograph garbage on pavements)
2. Label the images
3. Train and test the AI model
4. Write the conference paper
*Repeat steps 1-3 until accuracy is high enough before proceeding to step 4
This was our planning process and brainstorming of the ways to collect our data and what types of data to collect.
We have all heard this quote before: "Choose a job you love and you will never have to work a day in your life." Through this attachment, I got to experience it first-hand. I have always been intrigued by programming and Artificial Intelligence, so when I had this opportunity to work with SUTD on just that for a whole month, it felt like a breeze.