Implemented an Angular-Momentum-based Linear Inverted Pendulum (ALIP) footstep planner on Apptronik's Apollo robot. The planner predicts the one-step-ahead evolution of the angular momentum about the contact point, which is easier to predict than linear velocity and therefore provides a superior quantity for feedback control. The planner reduces drift and is highly responsive to changes in the desired center of mass velocity.
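As a rough illustration of the idea, the sketch below propagates the ALIP dynamics (CoM offset x from the stance foot and angular momentum L about the contact point, with x_dot = L/(mH) and L_dot = m g x) in closed form and derives a foot placement from a desired end-of-step momentum. It is a minimal sketch of the model, not the Apollo implementation; variable names and the planner interface are illustrative assumptions.

```python
import numpy as np

def alip_one_step_prediction(x0, L0, m, H, T, g=9.81):
    """Predict CoM offset x and angular momentum L about the contact point
    after the remaining stance duration T, using the closed-form solution of
    the ALIP dynamics  x_dot = L/(m*H),  L_dot = m*g*x.
    (Minimal sketch of the model; names are illustrative.)"""
    l = np.sqrt(g / H)                      # ALIP natural frequency
    ch, sh = np.cosh(l * T), np.sinh(l * T)
    x_T = ch * x0 + sh * L0 / (m * H * l)
    L_T = m * H * l * sh * x0 + ch * L0
    return x_T, L_T

def desired_foot_placement(L_T, L_des, m, H, T_next, g=9.81):
    """Choose the next foot position relative to the CoM at touchdown so that
    the predicted angular momentum at the end of the *next* step matches the
    value L_des implied by the desired CoM velocity."""
    l = np.sqrt(g / H)
    # From L_next(T_next) = cosh(l*T_next)*L_T + m*H*l*sinh(l*T_next)*(-p),
    # solve for the foot offset p relative to the CoM:
    return (np.cosh(l * T_next) * L_T - L_des) / (m * H * l * np.sinh(l * T_next))
```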
Developed a trajectory optimization framework that generates motions for agile legged robots, such as running, jumping, and parkour, using a centroidal dynamics and full kinematics model. By considering only the centroidal dynamics of the robot, this approach captures the core dynamics of the system without having to contend with the robot's numerous degrees of freedom.
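The centroidal-dynamics constraint at the heart of such an optimization is simple to state: the linear momentum rate is the sum of contact forces plus gravity, and the angular momentum rate about the CoM is the sum of contact moments. A minimal sketch, assuming point contacts given as (position, force) pairs in the world frame; the full-kinematics side of the formulation is not shown.

```python
import numpy as np

def centroidal_dynamics(com, h_lin, contacts, mass, g=np.array([0.0, 0.0, -9.81])):
    """Rate of change of the centroidal momenta given point-contact forces.
    `contacts` is a list of (position, force) pairs in the world frame.
    Minimal sketch of the dynamics constraint used inside a trajectory
    optimization; full kinematics would be handled by separate constraints."""
    f_total = mass * g
    tau_total = np.zeros(3)
    for p_i, f_i in contacts:
        f_total += f_i
        tau_total += np.cross(p_i - com, f_i)
    h_lin_dot = f_total            # linear momentum rate
    h_ang_dot = tau_total          # angular momentum rate about the CoM
    com_dot = h_lin / mass         # CoM velocity from linear momentum
    return com_dot, h_lin_dot, h_ang_dot
```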
A new control paradigm that uses angular momentum as a key state variable in the linear inverted pendulum model has opened up a plethora of possibilities for the control of bipedal systems. This paradigm, known as the ALIP model, has been validated in cases where the robot's center of mass evolves in a plane; walking up or down stairs, or stepping onto or off an object, may violate this assumption. Simulations on a 20-degree-of-freedom model of the Cassie biped show that the controller achieves a periodic gait.
Designed a controller for a 2-link biped robot inspired by the efficiency of human walking. The controller's core principles come from control law partitioning (feedback linearization), a classical technique used to control fully actuated robots, with a slight variation incorporated here because the robot is fundamentally underactuated. An intelligent control technique, a Fuzzy Inference System, is also implemented on the robot in fusion with the classical control-law-partitioning approach.
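For reference, the control-law-partitioning (computed-torque) law referred to above has the standard form tau = M(q)(q̈_des + Kd ė + Kp e) + C(q, q̇)q̇ + G(q). The sketch below shows the fully actuated version; the underactuated 2-link case only linearizes the actuated coordinate, and the fuzzy layer is not shown. M, C, G and the gain matrices are placeholders for the robot-specific model.

```python
import numpy as np

def computed_torque(q, qd, q_des, qd_des, qdd_des, M, C, G, Kp, Kd):
    """Control-law-partitioning / computed-torque law:
        tau = M(q) (qdd_des + Kd (qd_des - qd) + Kp (q_des - q)) + C(q, qd) qd + G(q)
    M, C, G are callables returning the mass matrix, Coriolis matrix, and
    gravity vector of the robot model. Minimal sketch of the fully actuated
    case; in the underactuated biped only the actuated joint is linearized
    and the remaining coordinate evolves according to the zero dynamics."""
    e, ed = q_des - q, qd_des - qd
    v = qdd_des + Kd @ ed + Kp @ e        # outer-loop PD on the tracking error
    return M(q) @ v + C(q, qd) @ qd + G(q)
```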
Discovering governing equations from scientific data becomes easier using data-driven approaches. Sparse regression enables the tractable identification of both the structure and the parameters of a nonlinear dynamical system from data. The resulting models have the fewest terms necessary to describe the dynamics, balancing model complexity with descriptive ability and thus promoting interpretability and generalizability. In this work, we design a custom autoencoder to discover a coordinate transformation into a reduced space where the dynamics may be sparsely represented. We combine the strength of the autoencoder for coordinate representation in a reduced state space with sparse identification of nonlinear dynamics (SINDy) for parsimonious models. We implemented this method on a planar pushing task and compared it against a globally linear Embed to Control (E2C) latent-space model. (Course Project)
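The sparse-regression step in SINDy is typically solved with sequentially thresholded least squares. A minimal sketch is shown below, assuming the candidate-function library Theta has already been evaluated on the (here, autoencoder-reduced) coordinates; it is illustrative, not the project's exact training loop.

```python
import numpy as np

def sindy_stlsq(Theta, Xdot, threshold=0.1, n_iter=10):
    """Sequentially thresholded least squares used by SINDy.
    Theta: (n_samples, n_features) library of candidate functions,
    Xdot:  (n_samples, n_states)  measured or estimated state derivatives.
    Returns a sparse coefficient matrix Xi with Theta @ Xi ~= Xdot."""
    Xi = np.linalg.lstsq(Theta, Xdot, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(Xi) < threshold          # prune small coefficients
        Xi[small] = 0.0
        for k in range(Xdot.shape[1]):          # refit the surviving terms
            big = ~small[:, k]
            if big.any():
                Xi[big, k] = np.linalg.lstsq(Theta[:, big], Xdot[:, k], rcond=None)[0]
    return Xi
```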
This project presents a computationally lightweight deployment of ViTPose, which applies state-of-the-art Vision Transformers to the popular vision problem of pose estimation. The model, consisting of a Vision Transformer encoder followed by a decoder, is implemented, and its parameters are adjusted for a reduced image resolution of 128 x 128 and a smaller dataset. Particular design choices made during the process are discussed, and an easily reproducible implementation is described. A final test accuracy of 32% is obtained on a dataset of around 110k images with an 80-10-10 train-validation-test split. (Course Project)
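To make the encoder-decoder layout concrete, here is a small PyTorch sketch of a ViT backbone followed by a deconvolutional heatmap decoder at 128 x 128 input. The specific sizes (patch 16, embedding dimension 256, 6 layers, 17 keypoints) are illustrative assumptions, not the project's exact configuration.

```python
import torch
import torch.nn as nn

class TinyViTPose(nn.Module):
    """Minimal sketch of a ViT-encoder + heatmap-decoder pose model."""
    def __init__(self, img_size=128, patch=16, dim=256, depth=6, heads=8, keypoints=17):
        super().__init__()
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.grid = img_size // patch
        self.pos_embed = nn.Parameter(torch.zeros(1, self.grid ** 2, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=4 * dim, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        # simple deconvolutional decoder producing one heatmap per keypoint
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(dim, dim // 2, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(dim // 2, keypoints, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, x):
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2)   # (B, N, dim)
        tokens = self.encoder(tokens + self.pos_embed)
        feat = tokens.transpose(1, 2).reshape(x.size(0), -1, self.grid, self.grid)
        return self.decoder(feat)                                 # (B, K, 32, 32)
```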
This project aimed to demonstrate control system design, mapping, localization, and path-finding algorithms commonly used in mobile robotics. Our platform was the MBot Mini. First, a motor control system was created that, given the desired path and the robot's current position data, could generate instantaneous velocity commands for the wheels. Then, a Simultaneous Localization and Mapping (SLAM) program was developed, allowing the robot to use its LIDAR sensor to map the environment and obtain more reliable position data; SLAM yields better pose estimates than odometry from encoders and the IMU alone. Finally, an A* path planning algorithm was implemented, and programs were written allowing the robot to navigate around obstacles, explore its environment, and even localize on a map without knowing its starting position. (Course Project)
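A compact version of the A* planner on a 2D occupancy grid is sketched below. It uses a Manhattan-distance heuristic and 4-connected motion; the on-robot planner ran on the SLAM map and is not reproduced exactly here.

```python
import heapq

def a_star(grid, start, goal):
    """A* over a 2D occupancy grid (0 = free, 1 = obstacle)."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:
            continue                          # already expanded with a lower cost
        came_from[cur] = parent
        if cur == goal:                       # reconstruct the path back to start
            path = [cur]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0
                    and g + 1 < g_cost.get(nxt, float("inf"))):
                g_cost[nxt] = g + 1
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, cur))
    return None                               # no path found
```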
Designed a model predictive controller for autonomous car-like vehicles in which the prediction model is a feedforward neural network rather than an analytical model. The car's kinematic model is used in the controller for low-speed applications. A two-layer feed-forward network with sigmoid hidden neurons and linear output neurons is used for model generation and trained with the Levenberg-Marquardt backpropagation algorithm. The model predicts the vehicle's current x and y position from its previous position coordinates together with the velocity and steering angle of the car. Using this model, the controller outputs the control signals, i.e., the velocity and steering angle of the car. Simulations were done in MATLAB to validate the modeling and controller results.
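For intuition, the sketch below shows the kind of one-step prediction model and MPC loop involved, using the analytic kinematic bicycle model as a stand-in for the trained network and a toy sampling-based optimizer. The project itself used a neural network model and MATLAB's optimization-based MPC; everything here is illustrative.

```python
import numpy as np

def kinematic_bicycle_step(state, control, dt=0.1, wheelbase=2.5):
    """One step of the kinematic bicycle model for low-speed driving.
    state = (x, y, heading), control = (velocity, steering angle).
    In the project this mapping is learned by a two-layer feed-forward network."""
    x, y, th = state
    v, delta = control
    return np.array([x + v * np.cos(th) * dt,
                     y + v * np.sin(th) * dt,
                     th + v * np.tan(delta) / wheelbase * dt])

def shooting_mpc(state, reference, horizon=10, samples=200):
    """Toy sampling-based MPC: roll candidate control sequences through the
    one-step model and keep the first control of the cheapest rollout."""
    best_cost, best_u = np.inf, np.zeros(2)
    for _ in range(samples):
        u_seq = np.column_stack([np.random.uniform(0.0, 5.0, horizon),     # velocity
                                 np.random.uniform(-0.5, 0.5, horizon)])   # steering
        s, cost = state, 0.0
        for u in u_seq:
            s = kinematic_bicycle_step(s, u)
            cost += np.sum((s[:2] - reference) ** 2)   # distance to reference point
        if cost < best_cost:
            best_cost, best_u = cost, u_seq[0]
    return best_u
```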
This project reviews various path-following and trajectory-tracking methods that track a reference point on the path whose motion is governed by the rate of change of the trajectory parameter. The aim is to minimize the Euclidean distance between the vehicle and this reference point; additional concerns are reducing the cross-track error and the heading error. Methods such as pure pursuit and its variations are described, and the Stanley path-following method as well as trajectory-tracking methods such as control-Lyapunov-based design and its modifications are analyzed. The project also examines Model Predictive Control (MPC) and its variations for trajectory tracking. Simulations were done on the kinematic model of the car in MATLAB to find the best method for achieving the objective.
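As an example of the simplest method reviewed, the classic pure-pursuit steering law steers toward a point on the path a lookahead distance ahead of the vehicle. A minimal sketch, with an assumed wheelbase parameter:

```python
import numpy as np

def pure_pursuit_steering(pose, lookahead_point, wheelbase=2.5):
    """Pure-pursuit steering law: delta = atan2(2 L sin(alpha), ld), where
    alpha is the angle to the lookahead point and ld the lookahead distance.
    pose = (x, y, heading); returns the steering angle in radians."""
    x, y, th = pose
    dx, dy = lookahead_point[0] - x, lookahead_point[1] - y
    alpha = np.arctan2(dy, dx) - th            # angle to the lookahead point
    ld = np.hypot(dx, dy)                      # lookahead distance
    return np.arctan2(2.0 * wheelbase * np.sin(alpha), ld)
```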
The Robocon 2018 problem statement required the autonomous robot to perform a variety of tasks independently and efficiently. To achieve this, we used various sensors and control algorithms for the accurate implementation and optimization of these tasks. The various tasks and their implementations were:
Loading of normal shuttlecock from manual robot.
Throwing the shuttlecock.
Loading of golden shuttlecock from manual robot.