The purpose of this competition was to propose an idea for a robotic system and accomplish a self-chosen task. In our case, we decided to create a set of hide-and-seek behaviours that would allow our robots to win a round of Hide and Go Seek Home-base.
The GMapping ROS package provides laser-based SLAM (Simultaneous Localization and Mapping). With this package it is possible to create a 2-D occupancy grid map for use with the navigation stack. It does this by transforming each incoming laser scan into the odometry frame and using these scans, together with the odometry, to estimate the robot's pose within the map frame. There are many parameters that can be tuned to suit a given environment.
AMCL implements a KLD-sampling Monte Carlo localization approach to estimate the robot's current pose within a laser-based map. Upon initialization, AMCL distributes a particle filter throughout the map according to the input parameters. An interesting note is that AMCL cannot handle laser data that moves with respect to the base, because upon startup it latches the transform between the laser's frame and the base frame and uses it for localization from then on.
Question: Using a map created with GMapping during the competition, can the "Hide" behaviour localize within this map, find a suitable hiding position and make it to the home-base without being caught, while the "Seek" behaviour localizes within the same map, navigates it, finds the hiding robot and catches it by coming within 0.5 meters of it?
Hypothesis: With tuned localization parameters, robust traversal of the map, reliable robot detection, a capable hiding behaviour, and proper communication and logic between the behaviours and the robots, each robot should be able to localize, detect the opposing robot, and chase or evade it as appropriate.
Figure 1: Difference in localization methods between Odometry and AMCL. Image source: http://wiki.ros.org/amcl
We had the two robots communicating with each other continuously by designating one robot as the master and having the other robot connect to it, registering all of its nodes on the master robot's roscore.
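Concretely, this only requires pointing the second robot's ROS environment at the first robot's roscore. The sketch below illustrates the idea in Python; in practice ROS_MASTER_URI is normally exported in the shell before launching, and the hostname and topic name here are placeholders rather than our exact configuration.

```python
# Illustrative sketch only: point this robot at the master robot's roscore
# before registering any nodes. Hostname and topic name are placeholders.
import os
os.environ["ROS_MASTER_URI"] = "http://master-robot:11311"

import rospy
from geometry_msgs.msg import PoseWithCovarianceStamped

rospy.init_node("hider_comms")  # registers with the shared roscore
pose_pub = rospy.Publisher("/hider/pose", PoseWithCovarianceStamped, queue_size=1)
```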
Localization: We implemented the localization behaviour by using AMCL's global localization service to disperse the particle cloud throughout the map. We then had our robot slowly rotate 180 degrees, move forward about half a meter, rotate 180 degrees again, and move forward about half a meter once more (while avoiding running into any walls). This allowed our robot to localize within the map very reliably.
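A minimal sketch of this routine, assuming AMCL's standard /global_localization service and a base driven over /cmd_vel; the speeds and durations are illustrative rather than our exact values:

```python
import math
import rospy
from std_srvs.srv import Empty
from geometry_msgs.msg import Twist

def drive(pub, linear, angular, duration):
    """Publish a constant velocity command for `duration` seconds, then stop."""
    cmd = Twist()
    cmd.linear.x = linear
    cmd.angular.z = angular
    rate = rospy.Rate(10)
    end = rospy.Time.now() + rospy.Duration(duration)
    while rospy.Time.now() < end and not rospy.is_shutdown():
        pub.publish(cmd)
        rate.sleep()
    pub.publish(Twist())

rospy.init_node("global_localize")
cmd_pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)

# Scatter AMCL's particle cloud uniformly over the free space of the map.
rospy.wait_for_service("/global_localization")
rospy.ServiceProxy("/global_localization", Empty)()

# Rotate 180 degrees, creep forward ~0.5 m, and repeat so the laser sees
# enough of the map for the particle cloud to converge.
for _ in range(2):
    drive(cmd_pub, 0.0, 0.3, math.pi / 0.3)  # ~180 degrees at 0.3 rad/s
    drive(cmd_pub, 0.1, 0.0, 5.0)            # ~0.5 m at 0.1 m/s
```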
Map Traversal: We randomly generated a series of way-points within the interior of the map for our robot to traverse. Once the robot was localized, it would pick a way-point and begin making its way towards it. This behaviour would pause whenever the chase behaviour was activated.
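A sketch of the way-point loop, assuming the navigation stack's move_base action server is running; the coordinates are placeholders:

```python
import random
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

WAYPOINTS = [(1.0, 2.0), (3.5, 0.5), (2.0, -1.5)]  # (x, y) in the map frame

rospy.init_node("traverse")
client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
client.wait_for_server()

while not rospy.is_shutdown():
    x, y = random.choice(WAYPOINTS)
    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = "map"
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = x
    goal.target_pose.pose.position.y = y
    goal.target_pose.pose.orientation.w = 1.0
    client.send_goal(goal)
    # In the full behaviour this wait is pre-empted when the chase starts.
    client.wait_for_result()
```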
Detection: We initially tried colour detection, but we could not tune it well enough to reliably detect a paper skirt surrounding the robot. As a result, our final implementation used AR tag detection, with multiple AR tags mounted around the hiding robot. While individual AR tags were detected reliably, it was difficult to keep tracking a tag as the tags were constantly rotating out of view, so tracking was not consistent.
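A minimal sketch of tag-based detection, assuming the ar_track_alvar package is publishing detections on /ar_pose_marker; the tag IDs and the use of the camera-frame z coordinate as range are assumptions:

```python
import rospy
from ar_track_alvar_msgs.msg import AlvarMarkers

HIDER_TAG_IDS = {0, 1, 2, 3}  # assumed IDs of the tags mounted on the hider

def markers_cb(msg):
    hits = [m for m in msg.markers if m.id in HIDER_TAG_IDS]
    if hits:
        # Use the closest visible tag as the current estimate of the hider.
        target = min(hits, key=lambda m: m.pose.pose.position.z)
        rospy.loginfo("Hider seen: offset x=%.2f, range z=%.2f",
                      target.pose.pose.position.x,
                      target.pose.pose.position.z)

rospy.init_node("ar_detect")
rospy.Subscriber("/ar_pose_marker", AlvarMarkers, markers_cb)
rospy.spin()
```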
Chasing: The robot would estimate where the center of the opposing robot was and use that to rotate itself to face the opposing robot while charging forward at it. It maintained this behaviour until it either got within 0.5 meters, in which case it won by "catching" the opposing robot, or until it lost track of the other robot or the opposing robot reached the home-base.
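One control step of this chase behaviour could look like the sketch below, assuming the detection provides the target's lateral offset and range in the camera frame; the gain and speeds are illustrative:

```python
from geometry_msgs.msg import Twist

CATCH_RADIUS = 0.5  # metres

def chase_step(offset_x, range_z, cmd_pub):
    """One control step: steer toward the target and drive forward until caught."""
    cmd = Twist()
    if range_z <= CATCH_RADIUS:
        cmd_pub.publish(cmd)         # stop: the hider is caught
        return True
    cmd.angular.z = -1.5 * offset_x  # turn so the target stays centred
    cmd.linear.x = 0.3               # charge forward
    cmd_pub.publish(cmd)
    return False
```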
Localization: The hider localized in the same way as the seeker, using AMCL's global localization service to disperse the particle cloud and then performing the same slow rotate-and-advance routine described above, which again let the robot localize within the map very reliably.
Map Traversal: We preset a series of way-points within the interior of the map for our robot to use as hiding spots. Once the robot was localized, it would randomly choose a way-point and begin making its way towards it.
Return to home-base: Once the robot had stopped in a hiding position and the "Seeker" robot had started searching the map, it would try to make it to the home-base. It would take the fastest path to the home-base unless the "Seeker" robot got in the way, in which case it would attempt to go around it.
We were able to get our programs working and communicating well enough to complete multiple rounds of both hiding and seeking on two separate robots running at the same time.
We had rounds where the seeker was able to find and catch the hider, and rounds where the hider was able to get to the home-base. We found the hiding behaviour was favoured to win, as it had a chance of avoiding the seeker robot altogether, and the seeker's detection was not reliable enough to consistently spot the hiding robot even when the two robots were close.
Our initial plan was to update the costmap for both robots, producing a sort of hot-and-cold behaviour in which the robots would attempt to find and avoid each other. We ran into issues getting the final costmap to recognize these changes, so we decided not to pursue costmap updates further. Instead, we had each robot send its pose to the other; this was used to determine whether the robots were within 0.5 meters of each other, and it also affected the "Hiding" robot's behaviour if the seeker got too close. The systems worked well enough to interact with each other in the expected manner. While individual behaviours (such as robot detection) did not work as well, that was less a fault of the methodology and idea behind the behaviour and more an issue of fine-tuning its parameters to perform consistently.
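A sketch of what this pose exchange could look like, assuming each robot republishes its amcl_pose estimate on a shared topic visible to the other robot; the topic names and update rate are placeholders:

```python
import math
import rospy
from geometry_msgs.msg import PoseWithCovarianceStamped

CATCH_RADIUS = 0.5  # metres
my_pose = None
other_pose = None

def my_cb(msg):
    global my_pose
    my_pose = msg.pose.pose.position
    relay_pub.publish(msg)  # republish own estimate on the shared topic

def other_cb(msg):
    global other_pose
    other_pose = msg.pose.pose.position

def separation():
    if my_pose is None or other_pose is None:
        return float("inf")
    return math.hypot(my_pose.x - other_pose.x, my_pose.y - other_pose.y)

rospy.init_node("pose_exchange")
relay_pub = rospy.Publisher("/seeker/pose", PoseWithCovarianceStamped, queue_size=1)
rospy.Subscriber("amcl_pose", PoseWithCovarianceStamped, my_cb)
rospy.Subscriber("/hider/pose", PoseWithCovarianceStamped, other_cb)

rate = rospy.Rate(5)
while not rospy.is_shutdown():
    if separation() <= CATCH_RADIUS:
        rospy.loginfo("Within catch distance of the other robot")
    rate.sleep()
```

The same separation check could also drive the hider's evasion, e.g. by triggering a new goal away from the seeker when the distance drops below some larger threshold.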
The purpose of this competition was to accomplish a task that incorporated previous knowledge while also venturing into new territory. We successfully implemented a number of behaviours and combined them cohesively on our two robots, and as a result we were able to have the "Hider" and the "Seeker" face off against each other multiple times to test how well the behaviours performed. Unfortunately, we were not able to fine-tune each of the behaviours enough for them to perform optimally on a consistent basis, so when running the robots against each other we saw some sub-optimal behaviour. This result was in line with what we expected in our hypothesis, even though the implementation was not as robust as we would have liked, nor the original implementation we attempted. Both behaviours could be improved with further fine-tuning and testing.