The purpose of this competition was to localize within and search through a previously mapped area using AMCL. Within this area were 5 randomly placed AR codes or U of A logos that we needed to detect, stop within a meter of, and then dock at, visiting each image only once.
The GMapping ROS package provides laser-based SLAM (Simultaneous Localization and Mapping). With this package it is possible to create a 2-D occupancy grid map to use with the navigation stack. It does this by combining each laser scan with the odometry transform frames, and uses these to estimate the robot's pose within the map frame. There are many parameters that can be tuned to suit unique situations.
AMCL implements a KLD-sampling Monte Carlo localization approach to estimate the robot's current pose within a laser-based map. Upon initialization, AMCL distributes a particle filter through the map according to the input parameters. An interesting note is that AMCL can't handle laser data that moves with respect to the base, because upon startup AMCL latches the transform between the laser's frame and the base frame and uses it for localization from then on.
Question: Using a map created with GMapping, are we able to localize within this map during the competition, detect multiple AR codes and U of A logos, and proceed to dock at each one?
Hypothesis: With tuned localization parameters, robust traversal of the map, proper image detection, a capable docking behaviour, and proper communication and logic between all the behaviours, we should be able to successfully localize, then detect and dock with multiple AR tags and U of A logos.
Figure 1: Difference in localization methods between Odometry and AMCL. Image source: http://wiki.ros.org/amcl
We utilized 4 separate behaviours to accomplish this competition, with one central node to facilitate communication and behaviour switching.
Localization: We implemented the localization behaviour by using AMCL's global localization function to disperse the particle cloud throughout the map. We then had our robot slowly rotate 180 degrees, move forward about half a meter, rotate 180 degrees again, and move forward about half a meter once more (while avoiding running into any walls). This allowed our robot to localize within the map very reliably.
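A minimal sketch of this routine is shown below; the velocity topic, speeds, and timings are assumptions rather than our exact values.

```python
#!/usr/bin/env python
# Sketch of the localization routine: scatter AMCL's particles, then wiggle.
import rospy
from std_srvs.srv import Empty
from geometry_msgs.msg import Twist

rospy.init_node('localizer')
cmd_pub = rospy.Publisher('cmd_vel', Twist, queue_size=1)  # topic name is an assumption

# Ask AMCL to distribute the particle cloud uniformly over the map's free space.
rospy.wait_for_service('global_localization')
rospy.ServiceProxy('global_localization', Empty)()

def move(linear, angular, duration):
    """Publish a constant velocity command for `duration` seconds at 10 Hz."""
    twist = Twist()
    twist.linear.x, twist.angular.z = linear, angular
    rate = rospy.Rate(10)
    end = rospy.Time.now() + rospy.Duration(duration)
    while rospy.Time.now() < end and not rospy.is_shutdown():
        cmd_pub.publish(twist)
        rate.sleep()

# Rotate ~180 degrees, drive ~0.5 m, and repeat so the particle filter converges.
for _ in range(2):
    move(0.0, 0.5, 6.3)   # ~180 degrees at 0.5 rad/s
    move(0.2, 0.0, 2.5)   # ~0.5 m at 0.2 m/s
```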
Map Traversal: We set a series of way-points along the sides of the map for our robot to traverse. Once our robot was localized it would find the nearest way-point and begin traversing the way-points in a set order, as sketched below. This behaviour paused whenever the docking behaviour was initiated and resumed after the docking behaviour had finished.
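The following is a sketch of such a way-point loop using the move_base action interface; the coordinates are placeholders, not our actual map positions.

```python
#!/usr/bin/env python
# Sketch of the way-point traversal behaviour (way-point values are placeholders).
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

# Way-points along the sides of the map, as (x, y) in the map frame.
WAYPOINTS = [(1.0, 0.5), (3.0, 0.5), (3.0, 2.5), (1.0, 2.5)]

def goto(client, x, y):
    """Send one way-point to move_base and wait for the result."""
    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = 'map'
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = x
    goal.target_pose.pose.position.y = y
    goal.target_pose.pose.orientation.w = 1.0
    client.send_goal(goal)
    client.wait_for_result()

rospy.init_node('traversal')
client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
client.wait_for_server()

# Cycle through the way-points in a fixed order; in our system the control
# centre pauses this loop whenever the docking behaviour takes over.
while not rospy.is_shutdown():
    for x, y in WAYPOINTS:
        goto(client, x, y)
```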
Detection: Tag detection was done using ORB feature detection. The algorithm took input pictures of the U of A logo and the AR tag and, given a threshold on the number of corresponding feature points, decided whether a logo was detected or not. A significant challenge was that the U of A logo needed significantly more corresponding feature points (approx. >60), while the AR tag was detected with approximately 20 points. This sometimes resulted in the U of A logo being incorrectly detected as an AR tag. Another large challenge in the detection algorithm was some unexpected behaviour that arose when OpenCV and ROS were used together.
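The snippet below is a rough sketch of this style of ORB matching with OpenCV; the template file names are illustrative, and the thresholds simply mirror the approximate match counts mentioned above.

```python
# Sketch of ORB-based tag detection (template file names are assumptions).
import cv2

orb = cv2.ORB_create()
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

# Pre-compute descriptors for the two reference images.
templates = {'ua_logo': cv2.imread('ua_logo.png', 0),
             'ar_tag':  cv2.imread('ar_tag.png', 0)}
template_desc = {name: orb.detectAndCompute(img, None)[1]
                 for name, img in templates.items()}

# Approximate thresholds: the UA logo needed far more matches than the AR tag.
THRESHOLDS = {'ua_logo': 60, 'ar_tag': 20}

def detect(frame_gray):
    """Return the name of the detected tag in a grayscale camera frame, or None."""
    _, desc = orb.detectAndCompute(frame_gray, None)
    if desc is None:
        return None
    for name, t_desc in template_desc.items():
        matches = bf.match(t_desc, desc)
        if len(matches) >= THRESHOLDS[name]:
            return name
    return None
```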
Docking: Our docking behaviour came straight from our demo 5 implementation, with only slight modifications to adapt it to the rest of our code. The changes included recording our current pose and position once we had detected an AR tag or U of A logo. We would rotate 90 degrees to face the logo and then proceed with the normal docking behaviour. Once docked, the robot would return to its recorded pose and position from before the docking began and resume traversing the map looking for more AR tags or U of A logos. We also saved the position we stopped at and marked it as a place not to dock at again, by checking whether new detections fell within a certain range of previously docked positions.
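A small sketch of this "don't dock twice" check; the separation distance is an assumed value rather than the one we tuned.

```python
# Sketch of avoiding repeat docks by remembering where we already docked.
import math

docked_positions = []   # (x, y) in the map frame of every completed dock
MIN_SEPARATION = 1.0    # metres; assumed value for "already docked here"

def already_docked(x, y):
    """True if this detection is within range of a previously docked position."""
    return any(math.hypot(x - px, y - py) < MIN_SEPARATION
               for px, py in docked_positions)

def record_dock(x, y):
    """Remember a completed dock so we never dock at this tag again."""
    docked_positions.append((x, y))
```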
Control Center: We used a central control node to subscribe and publish information to and from all of the nodes we created. The control center determined when to switch between the different behaviours our robot had implemented, based on the information being sent to it.
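A simplified sketch of how such a control node could switch behaviours is given below; the topic names, message types, and state names are assumptions, not our exact interface.

```python
#!/usr/bin/env python
# Sketch of a central behaviour-switching node (topics/states are assumptions).
import rospy
from std_msgs.msg import String

class ControlCenter(object):
    def __init__(self):
        self.state = 'localize'
        self.state_pub = rospy.Publisher('active_behaviour', String, queue_size=1)
        rospy.Subscriber('localized', String, self.on_localized)
        rospy.Subscriber('tag_detected', String, self.on_detection)
        rospy.Subscriber('dock_complete', String, self.on_dock_complete)

    def switch(self, new_state):
        """Record the new state and broadcast it to the behaviour nodes."""
        self.state = new_state
        self.state_pub.publish(new_state)

    def on_localized(self, msg):
        if self.state == 'localize':
            self.switch('traverse')

    def on_detection(self, msg):
        # Pause traversal and hand control to the docking behaviour.
        if self.state == 'traverse':
            self.switch('dock')

    def on_dock_complete(self, msg):
        if self.state == 'dock':
            self.switch('traverse')

if __name__ == '__main__':
    rospy.init_node('control_center')
    ControlCenter()
    rospy.spin()
```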
The competition consisted of two rounds per team. Each round, points were scored for successfully detecting and stopping within a meter of an AR tag or U of A logo, with additional points awarded for docking depending on which lines the robot stopped between. We successfully localized and detected AR tags or U of A logos on both runs, and successfully docked on the first run. This earned us 1st place, as we accumulated the most points.
On our first run we ran into 2 issues. The first was that the robot thought it detected an AR tag after it had finished localizing and moved forward to try to find it. Once it realized it was too close to a wall, it returned to its initial position and continued traversing the room from there. The second issue was the robot resetting its return position (the position it needed to return to after docking) to None after completing or failing to complete a docking procedure. This caused the robot to keep trying to move forward, as the path planning did not properly handle a None value. This was due to a simple bug that we promptly fixed.
On our second attempt the robot did not turn the proper amount when attempting to dock at an AR tag (it turned about 70 degrees instead of 90). This led to the robot driving too close to the wall and stopping close to and parallel to it. Since the robot was too close to the wall, any movement goal we sent would fail: every nearby position was too expensive to move through in the costmap, so the navigation stack couldn't generate a path and the robot didn't move at all.
While each of the 4 behaviours (localizing, traversing a map, detecting a certain image, and docking) isn't too difficult on its own, having them work together in a concise and efficient manner is very difficult. We were able to properly implement all the behaviours together into a functioning system and successfully compete in the competition. What we were not able to accomplish was finding and fixing all the bugs that arose from edge cases when the behaviours began to interact within one system. These came from the actions of one behaviour creating unforeseen circumstances in another, which would lead to an edge case arising and cause our program to malfunction. If we had had additional time, or had been able to begin working on the competition sooner, we believe we could have found and fixed some of the bugs that arose throughout the competition and had an even more successful run as a result.
The purpose of this competition was to demonstrate the implementation and combination of 4 different behaviours. We were able to successfully implement each behaviour and combine them in a cohesive manner on our robot, and as a result it completed the competition with the most points. Unfortunately we were not able to weed out and fix all of the bugs introduced by combining the 4 behaviours, so some of them were encountered during the competition and caused unintended consequences. This result matched what we thought we could accomplish in our hypothesis, even though the implementation was not as robust as we would have liked. We could improve our overall score by ironing out the remaining bugs so that our robot follows the intended combination of behaviours more reliably and successfully.