PURSUIT
EVASION
The purpose of this competition was to implement two separate behaviours on the TurtleBot, pursuit and evasion. The objectives included improving the wander behaviour of our robot through motion control, implementing following behaviour while avoiding collisions, and practicing competition strategy and preparation. The goal of the pursuing robot is to follow the evader for 60 seconds without losing track of it or colliding with any external object. The goal of the evading robot is to lose the pursuer within the same time frame.
Robotics is the intelligent connection of perception to action (Brady, 1985)
One way of implementing robotic behaviour systems is splitting the behaviour into specific states, such as a sense, plan, act (SPA) type architecture. The robot can then switch between each state as new information is taken in and adapt in real-time according to the planned behaviour.
The sensing state takes in all sensor data, vision, range, touch, depth, etc., and publishes the desired topics to be used in the planning algorithms. The planning state takes in the published sensor data and decides what to do with the information. This is where all of the decisions on how to react to the environment are made and different behaviours are implemented. This stage is essentially the "brain" of the robot and is where competitive strategy and feedback control are taken into consideration. Once a decision is made, the control signal is published and the robot moves into the acting state, where the desired behaviour is carried out. The actuators receive the control signal published in the planning state and control the physical movements of the robot.
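The sense-plan-act cycle described above can be sketched as a simple loop. This is an illustrative toy, not our ROS node: the function names, gains, and the one-dimensional "world" of evader range and bearing are all assumptions made for the example.

```python
# Minimal sense-plan-act (SPA) loop sketch. All names and gains here
# are illustrative placeholders, not taken from the competition code.

def sense(world):
    """Sensing state: read the 'sensor' data (evader range and bearing)."""
    return {"range": world["evader_range"], "bearing": world["evader_bearing"]}

def plan(reading, desired_range=0.65):
    """Planning state: decide on a control signal from the latest reading."""
    linear_error = reading["range"] - desired_range
    angular_error = reading["bearing"]
    return {"linear": 0.5 * linear_error, "angular": 1.0 * angular_error}

def act(world, command, dt=0.1):
    """Acting state: apply the command to the (toy) world for one tick."""
    world["evader_range"] -= command["linear"] * dt
    world["evader_bearing"] -= command["angular"] * dt
    return world

world = {"evader_range": 1.0, "evader_bearing": 0.2}
for _ in range(100):
    world = act(world, plan(sense(world)))
```

Each pass through the loop is one SPA cycle; in the real system the `act` step would publish velocity commands instead of mutating a toy world.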
The TurtleBot is a differential drive robot that moves in two dimensions, which means that both the linear and angular velocities can be commanded. Proportional Derivative (PD) feedback control corrects the current linear and angular velocities according to the current error, which is the difference between the desired set-point and the actual value. PD control accounts both for correction proportional to the error and for the rate of change in error. Velocity ramps ensure that robot motion is smooth during both acceleration and deceleration. This is important to prevent jerky behaviour when the PD feedback control system adjusts the robot's current trajectory. An additional integral component can be added to the control system, which helps remove the steady-state error that accumulates in a pure PD controller.
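A minimal sketch of the PD update and a velocity ramp is shown below. The gains, control period, and ramp limit are hypothetical values for illustration, not the ones we tuned for the competition.

```python
# PD control step plus a velocity ramp. kp, kd, dt, and max_delta are
# assumed example values, not the tuned competition parameters.

def pd_step(error, prev_error, dt, kp=0.8, kd=0.2):
    """PD control: term proportional to error plus rate of change of error."""
    derivative = (error - prev_error) / dt
    return kp * error + kd * derivative

def ramp(target_vel, current_vel, max_delta=0.05):
    """Limit per-cycle velocity change to smooth acceleration/deceleration."""
    delta = target_vel - current_vel
    delta = max(-max_delta, min(max_delta, delta))
    return current_vel + delta

# One control cycle: evader is 0.35 m beyond the desired set-point.
target = pd_step(0.35, 0.0, dt=0.1)   # raw PD output
vel = ramp(target, 0.0)               # ramped command actually sent
```

The ramp is what prevents the jerky behaviour mentioned above: even if the PD output jumps, the commanded velocity only changes by `max_delta` per cycle.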
Question: We expect our experiment to answer how well a simple use of sensors combined with a properly tuned PID controller can follow a robot that is actively trying to evade us in an open environment. On the flip side, how successful an evading strategy can we find and use to escape a robot trying to keep track of us?
Hypothesis: With enough tuning of our PID controller, the range of sensor data we use, and how closely we follow the other robot, we can create a successful FollowBot that accurately follows an evading robot. For evading, we hypothesize that a combination of simple but fast maneuvers and more complicated ones will test multiple aspects of the pursuing robot's tracking system and reveal a point where it cannot accurately track our robot.
We decided to implement the pursuing behaviour as a state machine that follows a simplified SPA architecture. Our algorithm continuously takes in LaserScan data and applies PD control to maintain the evader at a constant distance (0.65 meters) in front of our robot. Three states are implemented: 'Follower', in which the pursuit robot is initialized when the user presses 'x' on the Logitech controller; 'Follow', where the actual pursuing behaviour happens; and 'Winner', which is activated once the robot has successfully pursued the evader for 60 seconds.
The bulk of our algorithm lies within the 'Follow' state, in which a simple PD controller is implemented. If the evader's linear error (too close or too far away) or angular error (evader is turning left or right) is outside the set threshold, then the robot's linear velocity and/or angular velocity is adjusted proportionally using the gains determined in development.
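The 'Follow' state's error computation can be sketched as below. This is a hedged reconstruction, not our exact node: it assumes the nearest scan return is the evader (the very assumption that failed us against a wall), and the thresholds and gains are placeholders.

```python
# Sketch of the 'Follow' state: compute linear and angular errors from a
# laser scan and turn them into velocity commands. Thresholds and gains
# are illustrative; the nearest-return-is-evader assumption is fragile.
import math

DESIRED_DIST = 0.65   # metres, the set-point from our design
LIN_THRESH = 0.05     # deadband on linear error (assumed value)
ANG_THRESH = 0.05     # deadband on angular error in radians (assumed)

def follow_command(ranges, angle_min, angle_increment,
                   kp_lin=0.7, kp_ang=1.2):
    """Return (linear, angular) velocities from the nearest scan return."""
    valid = [(r, i) for i, r in enumerate(ranges) if not math.isinf(r)]
    if not valid:
        return 0.0, 0.0  # target lost: stop
    dist, idx = min(valid)                       # nearest return
    bearing = angle_min + idx * angle_increment  # angle of that return
    lin_err = dist - DESIRED_DIST
    ang_err = bearing
    linear = kp_lin * lin_err if abs(lin_err) > LIN_THRESH else 0.0
    angular = kp_ang * ang_err if abs(ang_err) > ANG_THRESH else 0.0
    return linear, angular
```

The deadbands keep the robot from twitching around the set-point; only errors beyond the threshold produce a correction.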
The evading behaviour is also implemented as a state machine. We used a sequence of maneuvers to test all capabilities of the bot following us. First, a fast forward motion tests whether the pursuer can keep up at high speed in a straight line. The forward motion transitions into a circle: the robot first turns away from the wall that stopped it, then attempts to circle around the pursuer to test its pure rotational tracking. Halfway through the circle it loops back into a quick S maneuver to test both rotational and forward tracking. After the S it returns to driving forward, and this loop continues until we evade the other robot or time runs out.
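The maneuver sequence above can be sketched as a timed state machine that cycles through the moves. The durations and velocity values here are placeholders, not our competition numbers.

```python
# Toy sketch of the evader's maneuver loop: forward -> circle -> S-curve
# -> forward. Durations and velocities are assumed example values.

MANEUVERS = [
    # (name, velocity command, duration in seconds)
    ("forward", {"linear": 0.6, "angular": 0.0},  3.0),
    ("circle",  {"linear": 0.3, "angular": 1.0},  4.0),
    ("s_curve", {"linear": 0.3, "angular": -1.0}, 2.0),
]

class Evader:
    def __init__(self):
        self.index = 0      # which maneuver is active
        self.elapsed = 0.0  # time spent in the current maneuver

    def step(self, dt):
        """Run one control tick; advance to the next maneuver when done."""
        name, command, duration = MANEUVERS[self.index]
        self.elapsed += dt
        if self.elapsed >= duration:
            self.elapsed = 0.0
            self.index = (self.index + 1) % len(MANEUVERS)  # loop forever
        return name, command

evader = Evader()
names = [evader.step(1.0)[0] for _ in range(10)]
```

The modulo on the state index is what makes the sequence loop back to the fast forward motion after each S-curve.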
The competition consisted of four rounds per team (two as pursuer and two as evader), in which the competitors were randomly selected. Overall our robot placed second in the competition. Our robot was able to successfully evade both pursuers, but failed to follow either evader for a full minute. In the first run as the pursuer, we followed the evader for a few seconds before our robot incorrectly identified the wall directly ahead as the object to follow, and stopped moving. The second round was more successful: the robot tracked the evader for 23 seconds (video below), the longest time in the competition.
Second run in role of pursuer
It is substantially more difficult to program a robot that can successfully track and pursue an object than one that can simply evade. It is interesting to note that in the competition, all evaders (except one, where the program failed to run) were able to lose their pursuer within the allotted time frame, usually quite effortlessly. If the evader is moving faster than the pursuer, it can easily maneuver around and out of range of the pursuer's sensors, effectively winning the round.
Within our evader program we implemented strategic moves to lose the pursuer, under the assumption that the pursuer would not lose track of our robot. The main move we implemented has the evader lead the pursuer to a wall, quickly circle around it, go straight, and then turn around in an 'S' type fashion. We anticipated this would be a complicated enough maneuver to lose the pursuer along one of the curves. In practice, following was a complicated enough task that the evader did not need to perform strategic evading maneuvers; simply driving close enough to a wall was enough to confuse the pursuer. If we were to redo this competition we would cap the evader's top velocity to make it interesting and give the pursuer a chance.
In our first run as pursuer, the robot quickly mistook the wall straight ahead for the object to track. This may have happened because the evader made a quick turn at the beginning, leaving the wall as the closest object, which was then wrongly initialized as the object to track. In algorithm development, we did not test for the case where a wall would be in the LaserScan range at the start position, which was the case in the competition. In the second run, we were able to initialize the correct object (the evader) to pursue, and tracked it for a total of 23 seconds. We had noticed in development that the TurtleBot has a harder time moving backwards quickly than moving forward, which led to different PD gains depending on whether the evader was moving away from or towards our robot. We wrote the pursuing behaviour under the assumption that the evader would also implement object avoidance and would not drive directly towards our robot; unexpectedly, this was not the case in the competition. It is because of this assumption that our pursuer failed in the second run: the evader moved quickly towards our robot, we were unable to back away quickly enough, and the result was a collision.
The environment plays a large role in implementations of robotic systems. The environment in which our programs were tested was different from the one where the competition was held, which may have played a role in the success of the algorithms. Lighting, the shape of the space, floor friction, starting placement, and many other environmental factors all contribute to how well an algorithm performs. The success of an algorithm can vary wildly in different settings, and this likely played a role in our failure as pursuer.
The purpose of this competition was to implement two separate behaviours on the TurtleBot, pursuit and evasion, incorporating motion control, object avoidance and competitive strategy. From the results we conclude that it is more challenging to implement effective following behaviour than evasion, with the tracking component of pursuit being the main challenge. The first part of our hypothesis, that through tuning PID control and effective use of sensor data we would be able to successfully follow the evader, is partially supported by our results, specifically in the runs where tracking did not fail. Results could be improved by incorporating better tracking methods, perhaps visual servoing or edge detection for object recognition, so that the pursuer can distinguish the evader from its surroundings. The second part of our hypothesis, that pursuers would be unable to follow certain maneuvers and speeds, was supported by the results. Overall, the results of the competition show that evading another robot is a rather simple task in comparison with following. In future experiments it would be interesting to cap the maximum velocity of the evader to allow for analysis of which strategic maneuvers are more successful than others. It would also be imperative for future experiments to test the programs in the competition environment, or to collect more information (i.e. a bag file), to accommodate for environmental differences and improve results.