We were assigned two tasks: to work on the software of two different floor cleaning robots, Panthera and hTetro, specifically the object classification in both of their systems.
Panthera is a self-reconfiguring pavement sweeping robot that is able to reconfigure its width in response to pedestrian density and pavement width ("self-reconfiguring" meaning the robot is able to autonomously adjust its own shape or size).
Currently, it is able to respond to pedestrians: a separate machine learning algorithm lets it contract so that they can pass. Specific to Panthera, we needed to clearly define a cleanliness threshold for the pavement, essentially a measurement of how clean the pavement is.
(View the video to see the Panthera in action)
To do this, our task was to train the robot to differentiate between objects on the pavement that it can pick up and objects that it cannot, so that the robot can detect whether a pavement is clean.
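As an illustration of what such a cleanliness threshold could look like in code, here is a minimal Python sketch. The function name, the detection format, and the threshold value are our own assumptions for this example, not Panthera's actual implementation.

```python
# Illustrative sketch only: one way a cleanliness threshold could be
# computed from the detector's output. The function name, detection
# format, and threshold value are assumptions, not Panthera's code.

def is_pavement_clean(detections, max_pickup_items=2):
    """Return True if the pavement has few enough 'Pick Up' objects.

    `detections` is assumed to be a list of (class_name, confidence)
    pairs produced by the object classifier.
    """
    pickup_count = sum(
        1 for class_name, confidence in detections
        if class_name == "Pick Up" and confidence > 0.5
    )
    return pickup_count <= max_pickup_items

# Example: two confident pieces of litter -> still within the threshold.
print(is_pavement_clean([("Pick Up", 0.9), ("No Pick Up", 0.8), ("Pick Up", 0.6)]))
```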
hTetro is a floor cleaning robot made up of 4 separate square modules that can reconfigure itself into shapes resembling the tetrominoes in the game of Tetris. It is currently able to reconfigure, manoeuvre, calculate the size of the area it needs to clean, and choose the appropriate shape to shift into.
(View the video to see the hTetro in action)
Our task was to train the algorithm to detect the objects that cannot be cleaned by hTetro so that the robot can avoid them.
To complete both tasks, we took photos of trash and non-trash items and used the labelImg.py software to label them. For the Panthera, objects were split into the classes "Pick Up" and "No Pick Up". For the hTetro, objects were split into the classes "Can Clean" and "Cannot Clean". Afterwards, we passed the labelled images through the network to train it. In addition, we co-authored the scientific reports on the experiments and research we did. In total, we worked on 2 reports for journals under the mentorship of Mr Bala and Dr Mohan.
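For reference, the two class splits can be written down as simple label maps, in the style object detection frameworks typically use. This is only a sketch: the integer ids shown here are our assumption (frameworks commonly number classes from 1), not the exact values from our training setup.

```python
# Sketch of the class maps described above; the integer ids are assumed.
PANTHERA_LABEL_MAP = {1: "Pick Up", 2: "No Pick Up"}
HTETRO_LABEL_MAP = {1: "Can Clean", 2: "Cannot Clean"}

def class_name(label_map, class_id):
    """Map a numeric detection id back to its human-readable class name."""
    return label_map.get(class_id, "unknown")

print(class_name(HTETRO_LABEL_MAP, 2))  # Cannot Clean
```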
Our process for training our Artificial Intelligence system went as follows:
To obtain trash to photograph, we gathered trash found in the SUTD office and from our homes. We were deliberate in taking our photos, getting as many different angles of the trash as possible, on as many different kinds of pavements as we could find. Each time we snapped a few shots, we also rearranged the items in the photos or distorted the shapes of the objects (by crushing or bending them) to achieve more variation.

The labelling of photos tells the AI what each object should be classified under, so it can learn the right things. We tried to standardise our labelling methods, such as which objects we should classify under which class, and how big the "bounding boxes" are for each object ("bounding box" referring to the box that encases the object in the photo; see below for examples).

We used the SSD MobileNet framework for our AI. To train it, we simply placed all our labelled images in a folder and ran the program that our mentor shared with us. The program would train the AI, then attempt to detect and classify objects in photos given to it. From this, we could find out if there were any issues with the detection and classification, and deduce whether there was anything we could do to improve the training results, whether by taking better photos, labelling the photos better, or using a different AI framework.
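To make the labelling step concrete: labelImg saves each photo's labels as a Pascal VOC XML file, with one entry per bounding box. Below is a minimal sketch of reading one such file back in Python; the file name is hypothetical.

```python
# Minimal sketch of reading one labelImg annotation file. labelImg saves
# labels in Pascal VOC XML, with one <object> entry per bounding box.
import xml.etree.ElementTree as ET

def read_annotations(xml_path):
    """Return a list of (class_name, (xmin, ymin, xmax, ymax)) boxes."""
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.findall("object"):
        name = obj.find("name").text
        bndbox = obj.find("bndbox")
        box = tuple(int(bndbox.find(tag).text)
                    for tag in ("xmin", "ymin", "xmax", "ymax"))
        boxes.append((name, box))
    return boxes

# Example: inspect the labels in one of our training photos
# (the file name here is hypothetical).
for name, box in read_annotations("trash_photo_001.xml"):
    print(name, box)
```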
Here are some photos of us at work.
Here are some of the photos that we used for the training of the artificial intelligence systems.
3 Content Knowledge/Skills learnt
2 Interesting Aspects of the Experience
1 Takeaway for Life
Group photo of our mentor and us