BootBot is a project aimed at planning and controlling a kick for a five-link underactuated bipedal walker. The ability of a robot to manipulate its surroundings plays a large role in determining how efficiently it can execute an autonomous search-and-rescue operation. We present a framework that enables a bipedal robot to target and kick a dynamic object while maintaining balance.
Our simulation is implemented in MATLAB and uses ode45 to integrate the dynamics. Videos showing successful kicks of an elastic ball with different initial conditions can be viewed in "Results."
The guiding concept of this project is a capable, robust first-responder robot that can supplant and support human operators in dangerous scenarios. To perform as effectively as a human would, our robot needs to be able to manipulate a dynamic environment using its body and limbs. Our first step toward this end goal is BootBot. The purpose of BootBot is to simulate a robot that can sense, track, and kick a simulated ballistic object flying toward it. This is a particularly interesting problem because our robot, as an underactuated system, must actively retain balance while kicking. It also needs to predict where the ballistic object will travel and to kick that object when it arrives.
For our project to be successful, it must meet the following design criteria:
Maintain the robot's balance at all times
Sense the motion of a ballistic object
Impart force upon a ballistic object with the robot's leg
As a first step toward this challenge, we constrained ourselves to a simulation of a five-link robot in a 2-dimensional plane. This simplification allows us to establish a control framework that can eventually be extended to 3D. Using point feet lets us avoid locating the center of pressure of a foot on compliant ground, but it also makes the robot less stable because the system is underactuated. The transition to 3D would introduce vertical-axis torques upon impact with the ball, as well as ballistic motion and tracking in three dimensions.
We performed this simulation in MATLAB, since its ODE toolbox allows for robust integration of dynamic mechanical systems. We used a series of controllers, each matched to a separate phase of the kick. Dividing control this way increases the robot's overall robustness, since each controller is designed to reject the disturbances expected during its own phase. Because all links are modeled as rigid and all motors as ideal in our simulation, additional precautionary measures would be needed in the transition from digital to physical in order to protect the hardware.
In this project, we used Lagrangian dynamics in MATLAB to model the robot, the ball, and the surrounding environment. The robot is composed of a point mass at the hip and five rigid links with mass, for a total mass of 35 kg. The five-link robot is underactuated; it is driven by four motors, one at each hip joint and one at each knee joint.
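For reference, the resulting equations of motion take the standard manipulator form (a generic sketch; the exact coordinates and matrix names in our MATLAB code may differ):

    M(q)\,\ddot{q} + C(q,\dot{q})\,\dot{q} + G(q) = B\,\tau + J_c(q)^{\top} F_{\mathrm{ext}}

where q collects the generalized coordinates, B selects the four actuated joints, \tau is the vector of motor torques, and F_{\mathrm{ext}} stacks the external contact forces from the floor and the ball, applied through the contact Jacobian J_c.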
External influences, such as the 3-kg ball and the floor, were modeled as compliant objects that exert forces on specific points of the robot structure. In particular, the floor acts as a viscoelastic surface with a coefficient of friction of 0.8.
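As an illustration, a viscoelastic contact force of this kind can be computed per contact point roughly as follows; the function name, the gains kGround and bGround, and the simple saturated-viscous friction model are placeholders rather than our exact implementation:

    function F = contactForce(p, v, kGround, bGround, mu)
    % p, v: position and velocity of a contact point (a foot or the ball)
    pen = max(0, -p(2));                               % penetration depth below the floor (y = 0)
    Fn  = kGround*pen - bGround*v(2)*(pen > 0);        % spring-damper normal force
    Fn  = max(Fn, 0);                                  % the ground can only push, never pull
    Ft  = -sign(v(1)) * min(mu*Fn, bGround*abs(v(1))); % friction, capped by the friction cone
    F   = [Ft; Fn];
    end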
In order to hit the ball, the robot must be able to estimate the ball's trajectory. This is accomplished by a simulated LiDAR sensor, which outputs range and angle measurements corrupted by additive sensor noise. To simulate these measurements, a simple circle-line intersection algorithm determines the ranges with a fast vectorized operation. Because the simulator runs at a much faster rate than a reasonable sensor sampling rate, a fixed delay of 0.01 s between measurements was enforced to produce a 100 Hz sampling rate.
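A vectorized ray-circle intersection of this kind can be sketched as follows; the function name, variable names, and Gaussian range-noise model are illustrative assumptions rather than our exact code:

    function ranges = lidarScan(origin, angles, ballCenter, r, maxRange, sigma)
    % Intersect a fan of LiDAR rays with a circle of radius r (the ball)
    d    = [cos(angles); sin(angles)];          % 2xN unit ray directions
    oc   = ballCenter(:) - origin(:);           % sensor-to-ball-center vector
    b    = oc.' * d;                            % projection of oc onto each ray
    c    = dot(oc, oc) - r^2;                   % constant term of the ray-circle quadratic
    disc = b.^2 - c;                            % discriminant per ray
    t    = b - sqrt(max(disc, 0));              % distance to the nearer intersection
    hit  = (disc >= 0) & (t > 0) & (t <= maxRange);
    ranges      = maxRange*ones(1, numel(angles));     % misses report the maximum range
    ranges(hit) = t(hit) + sigma*randn(1, nnz(hit));   % additive range noise on the hits
    end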
Since our robot controller cannot realistically have access to the true state of the ball, a realism barrier is enforced where the controller can only access the noisy LiDAR measurements.
The LiDAR measurements are then used to estimate the position of the center of mass of the ball. This is accomplished with another fast vectorized algorithm as described in [5].
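For illustration only (the actual method is the one described in [5]): one common vectorized approach is a linear least-squares circle fit of the Cartesian hit points, from which the center falls out directly:

    % pts: 2xN Cartesian points reconstructed from the range/angle measurements
    % Kasa-style fit: solve [2x 2y 1]*[cx; cy; k] = x.^2 + y.^2 in least squares
    x = pts(1,:).';  y = pts(2,:).';
    sol = [2*x, 2*y, ones(size(x))] \ (x.^2 + y.^2);
    center = sol(1:2);               % estimated ball center [cx; cy]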
The position estimates are used in a rudimentary predictor-corrector state estimation algorithm to estimate the ball's velocity and position. First, the velocity of the ball is measured by numerically differentiating the position at each time step. Then a velocity is predicted from the previous time step's velocity by modeling the ball as a simple ballistic object. A weighted average of the measured and predicted velocities gives the final estimated velocity. The predictor-corrector estimator allowed us to obtain decent state estimates despite noise, and the ball state estimator is robust to several edge cases, such as the ball entering or exiting the range of the sensor.
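A minimal sketch of this predictor-corrector update, with the blend weight alpha and the variable names as placeholders:

    function [pHat, vHat] = ballEstimate(pMeas, pPrev, vPrev, dt, alpha)
    % pMeas: latest ball-center estimate from the LiDAR measurements
    g     = [0; -9.81];
    vMeas = (pMeas - pPrev) / dt;               % measured velocity (finite difference)
    vPred = vPrev + g*dt;                       % predicted velocity (ballistic model)
    vHat  = alpha*vMeas + (1 - alpha)*vPred;    % weighted average of measurement and prediction
    pHat  = pMeas;
    end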
We chose to divide the kicking task into discrete steps and subtasks so that we could apply different controllers specialized for each subtask. In total, we used three separate controllers: (1) Joint PD, which allowed us to reach a desired state by directly actuating our joints, (2) Contact Force Optimization, which allowed us to perform center of mass (COM) tracking, and (3) Task Space Control, which allowed us to perform tracking of the foot end effector.
Note that between step 3 (pose) and step 4 (kick), our LiDAR sensor is used to estimate the position and velocity of the ball and to plan a kicking trajectory for the foot so that it meets the ball along its predicted path at the desired time.
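A hedged sketch of this planning step, using the estimated ball state to pick an intercept point and time (the candidate-time search, the reachability test, and the variable names are illustrative assumptions, not our exact logic):

    % pHat, vHat: estimated ball position/velocity; hipPos, reach: kick workspace
    g       = [0; -9.81];
    tCand   = 0:0.01:2;                                 % candidate intercept times
    pBall   = pHat + vHat*tCand + 0.5*g*tCand.^2;       % 2xN predicted ball positions
    inRange = vecnorm(pBall - hipPos) <= reach;         % which positions the foot can reach
    tStar   = tCand(find(inRange, 1));                  % earliest reachable intercept time
    target  = pHat + vHat*tStar + 0.5*g*tStar^2;        % swing-foot target at time tStar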
During our initial Balance and Lean phases, Contact Force Optimization serves as a force control method that drives the COM to a desired location by applying constrained ground contact forces [1]. Not only is this important for keeping our robot from falling over, but it also allows us to shift the weight of the whole system onto one foot and switch from a two-legged stance to a one-legged stance. In addition, the controller is able to robustly reject external disturbances as long as the commanded contact forces satisfy the constraints.
The controller first uses PID feedback to command a wrench about the COM to bring it to a desired position and the torso to a desired angle. A feedforward term is also used to offset the total weight of the robot. Once the wrench is defined, we generate a grasp map that maps the foot contact forces to the wrench. Because this matrix is wide (four contact force components map to a three-component planar wrench), multiple contact force solutions can exist, so we run an optimizer that minimizes joint torques while constraining the foot contact forces to lie within the friction cone and to only exert positive normal forces on the ground. If no solution is found, this indicates that the robot would either lift a foot from the ground or begin sliding; in these cases, we simply apply our feedforward term. Finally, we use a Jacobian transformation generated from our foot positions to map the pair of desired contact forces to motor torques.
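Assuming the torque-minimizing problem is posed as a quadratic program (our write-up does not pin down the solver; quadprog and all variable names here are purely illustrative), its structure is roughly:

    % Gmap (3x4): grasp map from stacked foot forces f = [fx1; fy1; fx2; fy2] to the COM wrench
    % Jc (4xn): stacked contact Jacobians, so that tau = Jc.'*f
    mu    = 0.8;
    Acone = [ 1 -mu  0   0;     % |fx| <= mu*fy and fy >= 0 for the first foot...
             -1 -mu  0   0;
              0  -1  0   0;
              0   0  1 -mu;     % ...and for the second foot
              0   0 -1 -mu;
              0   0  0  -1];
    H = Jc*Jc.' + 1e-6*eye(4);  % minimize ||tau||^2 = f.'*(Jc*Jc.')*f
    opts = optimoptions('quadprog', 'Display', 'off');
    [f, ~, flag] = quadprog(H, zeros(4,1), Acone, zeros(6,1), Gmap, wDes, [], [], [], opts);
    if flag <= 0
        tau = tauFeedforward;   % infeasible: fall back to the gravity feedforward
    else
        tau = Jc.'*f;           % map the desired contact forces to motor torques
    end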
During the kick, the most important goal is to control the trajectory of the swing foot (end effector). As the end effector is not one of the states of the robot, it makes sense to instead transform the joint space dynamics of the robot into the operational space dynamics [2] [4]. This transformation utilizes the Jacobian of the end effector with respect to the original robot states. Lambda, mu, and p are analogous to the inertia matrix, the Coriolis vector, and the gravity vector from the joint space dynamics, respectively.
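With J the Jacobian of the swing foot, and M, C, and G the joint-space inertia matrix, Coriolis terms, and gravity vector, these quantities take the standard operational-space form (following the usual formulation in [2] and [4]):

    \Lambda = \left(J M^{-1} J^{\top}\right)^{-1}, \qquad
    \mu = \Lambda\left(J M^{-1} C\,\dot{q} - \dot{J}\,\dot{q}\right), \qquad
    p = \Lambda\, J M^{-1} G,

so that the operational-space dynamics read \Lambda \ddot{x} + \mu + p = F, where x is the swing-foot position and F is the generalized force applied at the foot.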
The first task that this controller prioritizes is the position and velocity of the swing foot as a function of time. The target position is fed into the controller as a parameter, resulting in a different trajectory for each target position.
We implemented PD control in the Operational Space Controller to ensure that the swing foot reaches a desired position and velocity along the trajectory chosen. Using a Jacobian transformation on the resulting generalized force leads to the input torques to the knee and hip joints.
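A minimal sketch of this control law, with the gain matrices Kp and Kd and the trajectory variables as placeholders:

    % x, xd: current swing-foot position/velocity; xDes, xdDes, xddDes: desired trajectory
    F   = Lambda*(xddDes + Kp*(xDes - x) + Kd*(xdDes - xd)) + mu + p;   % task-space force
    tau = J.'*F;                       % map to torques at the swing hip and knee joints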
Note that the controller up to this point only controls the trajectory of the swing foot, since we have implemented only one task. To control the rest of the robot with additional tasks, we can work in the null space of the first task; by doing so, subsequent tasks do not interfere with the operational space of the previous task. This second task controls the rest of the robot's joints to ensure that the stance leg and torso do not collapse toward the ground.
Together, these two tasks will allow the robot to kick with the swing leg while maintaining balance on its stance leg.
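Combining the two tasks, with the secondary (joint-damping) torque projected through the dynamically consistent null-space projector of the first task, can be sketched as follows; Minv, Kd2, and the damping objective are placeholders consistent with the description above:

    Jbar = Minv*J.'*Lambda;            % dynamically consistent generalized inverse of J
    N    = eye(n) - J.'*Jbar.';        % torque-level null-space projector of the first task
    tau2 = -Kd2*qdot;                  % secondary task: damp the stance-leg and torso joints
    tau  = J.'*F + N*tau2;             % primary kick task plus projected posture task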
Our robot was successfully able to complete the task of kicking a moving ball while retaining balance. Below are three configurations of our robot in action:
In conclusion, our finished solution successfully met our design criteria. Our robot senses the ball with the LiDAR, estimates its ballistic trajectory, and kicks the ball in-flight. It does so robustly, tracking multiple initial configurations of ballistic motion and retaining balance for each one.
We faced various challenges when working in MATLAB. The main difficulty was that the ODE solver, which serves as our simulator, takes uneven time steps and sometimes re-evaluates the dynamics at earlier times. This is undesirable for the components of our controller that rely on numerical integration and differentiation. We resolved the issue by adding safeguards to our controller and state estimator and by interpolating from previously estimated values.
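One safeguard of this kind (a sketch, not our exact code) is to keep persistent copies of the last accepted time and estimate, and to re-use them whenever the solver hands the controller a time that does not move forward:

    function vHat = safeDifferentiate(t, p)
    % Guard finite differencing against ode45 evaluating at non-increasing times
    persistent tPrev pPrev vPrev
    if isempty(tPrev), tPrev = t; pPrev = p; vPrev = zeros(size(p)); end
    dt = t - tPrev;
    if dt > 1e-6                       % only update on a genuine forward step
        vPrev = (p - pPrev) / dt;
        tPrev = t;  pPrev = p;
    end
    vHat = vPrev;                      % otherwise re-use the last good estimate
    end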
Another major difficulty was determining how to define the null-space feedback for our task space controller. Since the null-space term acts independently of the main task (end-effector tracking), its objective is only achieved if it does not affect the primary task. The challenge was maintaining balance during the kicking motion, because the first task does not issue commands to the joints that do not directly affect the swing leg. This would cause the robot to collapse downward even while the swing foot followed the desired trajectory toward the target. We therefore defined our null-space controller to dampen the motion of the stance leg and torso joints so that the robot maintains pseudo-balance during the kick. However, if the kick is particularly forceful, these joints tend to drift, which could become an issue depending on the mass of the ball being kicked.
Additionally, our robot remains relatively stationary during the kick: it currently needs the ball to come to it and to be within kicking range when the kicking procedure executes. Further improvements to the project would see the robot adjust its horizontal positioning by walking to the site of planned contact. This would allow the robot to kick stationary targets not initially in its kicking range and to optimize the placement of the ball relative to its body so it can manipulate the ball in a particular way. We also currently do not simulate ball rotation or tangential slip, and we do not have multiple foot paths for a single target; adding these features would allow us to put spin on the ball and kick it in different directions.
Hi, I'm Jared. I'm a 4th year ME undergrad. I'm interested in human-centered robotics.
Project contributions:
Implementation of robot dynamics
Implementation of reaction force and collision dynamics
My name is Eric and I am a 4th year ME undergraduate interested in legged and bio-inspired robotic applications.
Project contributions:
Implementation of Force Contact Optimization
Implementation of Path Planning
I am a 4th year ME undergraduate student interested in mechatronics design.
Project contributions:
Implementation of LiDAR sensor and ball state estimation
Implementation of simulator visualizer
I am currently a 4th-year ME/EECS undergraduate interested in robotics and design.
Project contributions:
Implementation of Operational Space Control
Implementation of overall control scheme