OLIVER KONG TIM LOK

OUR Attachment

We were assigned to work on the software of two different floor cleaning robots, Panthera and hTetro, specifically on the object classification performed on their camera feeds.

Robot 1 - Panthera

Panthera is a pavement sweeping robot that is able to change its width in response to incoming pedestrians and changes in pavement width. Currently, it is able to detect pedestrians via a separate machine learning algorithm and hence contract to allow them to pass. It has a minimum width of 60 cm and a maximum width of 170 cm.

Specific to Panthera, we needed to train an algorithm to recognise the different types of objects on a pavement and group the detected objects into items the robot can pick up and items it cannot. The first category consists of objects like leaves and small pieces of garbage, while the second category consists of large pieces of trash like cardboard boxes, and non-trash items like trees, cracks and road signs. The rationale for this is to ensure the robot can tell when an area is clean and move on to the next section of the pavement, while not attempting to pick up any large non-trash items.

Robot 2 - hTetro

Second is hTetro, a floor cleaning robot made up of four separate square modules that can reconfigure itself into shapes resembling the tetrominoes in the game of Tetris.

It is currently able to reconfigure, manoeuvre and calculate the size of the area it needs to clean and the appropriate shape to shift into. Our task was to train the algorithm to detect objects that hTetro cannot clean, so that the robot can avoid them.

To complete both tasks, we took photos of trash and non-trash items and used the LabelImg software to label them for the two separate machine learning algorithms to train on. In addition to that, we assisted with the scientific reports on https://www.overleaf.com. In total, we completed two journal reports with assistance from our mentors, Mr Bala and Mr Mohan.

We worked in the pantry area of Building 3 of SUTD, as it was close to Mr Bala's office, giving us a convenient way to get help whenever we needed it.

The pantry area we worked at

OUR Process

Firstly, we collected images to train the machine learning algorithms on. For both algorithms, we took trash from home and gathered items from around the SUTD campus, laid them on the ground in various positions, and then took photos from multiple angles while circling the trash. For non-trash items, we photographed them around our homes during the weekends.

Training image for Panthera (Cracks - Non-trash)

Labelled images for hTetro (Seaweed - Trash)

Next, using a labelling tool called LabelImg, we labelled the objects for the machine learning algorithm to attempt to recognise. Altogether, we labelled approximately 1700 photos across both robots for their final training runs.
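As a rough illustration of what the labels contain (this assumes LabelImg's default Pascal VOC XML output, and the file name is made up), a short Python script can read an annotation back out:

    import xml.etree.ElementTree as ET

    def read_annotation(xml_path):
        # Return a list of (class name, bounding box) pairs from one label file
        objects = []
        for obj in ET.parse(xml_path).getroot().findall("object"):
            name = obj.find("name").text
            box = obj.find("bndbox")
            coords = tuple(int(float(box.find(t).text)) for t in ("xmin", "ymin", "xmax", "ymax"))
            objects.append((name, coords))
        return objects

    print(read_annotation("crack_0001.xml"))  # made-up file name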

While waiting for the machine to train (which takes approximately 8 to 10 hours), we worked on the introductions to the two journals we were assigned to write. These two journals detailed the usage of machine learning in Panthera and hTetro to recognise trash. We used a platform called Overleaf, which requires LaTeX formatting and hence a little background knowledge in coding.

When the machine learning algorithm had finished training, we would check the results to see whether it could detect trash and non-trash items with reasonable reliability.

When the algorithm could not recognise items, we had to identify the issue with our labelling and redo it. For the Panthera robot, this meant labelling the photos three times over.

We then picked the photos with the best detection results to use in the journals.

The machine learning algorithm's own output (Panthera)

My TakeAway

This attachment program taught me never to be afraid to ask for help - we are all constantly learning from one another better and more efficient ways to complete a task. When our mentor, Mr Bala, told us that the naming of the images was wrong and that we had to redo everything, I felt exhausted and hopeless - until we asked him for help, upon which he gave us a script to run that could iterate through and rename all the files automatically.
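To give an idea of what such a script does, here is a sketch of the same idea (not Mr Bala's actual code; the folder and naming scheme are made up):

    import os

    folder = "panthera_images"  # made-up folder of training photos
    for i, filename in enumerate(sorted(os.listdir(folder)), start=1):
        ext = os.path.splitext(filename)[1]
        new_name = f"panthera_{i:04d}{ext}"  # e.g. panthera_0001.jpg
        os.rename(os.path.join(folder, filename), os.path.join(folder, new_name))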

On top of that, I also learnt how to be resourceful. When examining the code Mr Bala gave us, we found that we could reuse it to count the total number of trash and non-trash labels we had - something we needed to do to ensure a balance between the two classes of labels. Hence, being upfront about asking for help and admitting that I need help is something I learnt in this attachment.
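A sketch of how the same idea can be reused to count labels per class (again assuming Pascal VOC XML annotations; the folder and class names are made up):

    import os
    import xml.etree.ElementTree as ET
    from collections import Counter

    counts = Counter()
    folder = "panthera_labels"  # made-up folder of annotation files
    for filename in os.listdir(folder):
        if filename.endswith(".xml"):
            for obj in ET.parse(os.path.join(folder, filename)).getroot().findall("object"):
                counts[obj.find("name").text] += 1  # tally each labelled object

    print(counts)  # e.g. Counter({'trash': 900, 'non-trash': 800})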

At times, labelling the photos got repetitive and mundane. This taught me to persevere, as the final product - seeing the machine learning algorithm we trained recognise which objects it can pick up and which it cannot, almost like a human - was immensely rewarding. This gave me more drive to persevere in my future pursuits: the reward can be greater than what is expected.

I learnt some of the technical aspects of AI and how training an AI works. Loss indicates how wrong the output of the AI is compared to the labels we gave it; the algorithm then adjusts itself to get closer to the answer.
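As a toy illustration of the idea (not the detector's actual loss function), cross-entropy loss is small when the model is confident and correct, and large when it is confident and wrong:

    import math

    def cross_entropy(prob_of_correct_label):
        # Loss shrinks as the predicted probability of the correct label grows
        return -math.log(prob_of_correct_label)

    print(cross_entropy(0.9))  # confident and correct -> low loss (about 0.11)
    print(cross_entropy(0.1))  # mostly wrong -> high loss (about 2.30)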

It was a bit of a "culture shock" when Mr Bala told us that he occasionally sleeps in his office overnight. When there is a lot of writing and coding to do, he takes out his mattress and sleeps on the ground in the office. It showed us how hectic this line of work can be and that entering this field would require making some sacrifices.

Lastly, I learnt that language barriers are easier to overcome than I thought. Mr Bala's accent is very thick, and talking to him often required both of us to repeat what we had said over and over again. But over time we got used to each other's accents and could communicate more easily, forming a close bond.

Photo on final day of the attachment