Research Interests:
Dynamics, Control, Deep Reinforcement Learning (DRL), Robotics, Machine Learning (ML), Image Processing, Smart Manufacturing, Optimization, Vibration, System Design, and Mathematical Modeling, Analysis, and Control of Thermal and Energy Systems
In this project, a novel model of a Universal Omni-Wheeled Mobile Robot (UOWMR) was introduced. The UOWMR is capable of executing rapid turns, swiftly changing its driving direction, and following designated paths—whether sharply curved or smoothly contoured—while rotating about its vertical axis. These capabilities enhance the robot's suitability for energy-efficient navigation in dynamic environments and support the integration of various tools and devices (e.g., cameras, machine tools) mounted on its platform.
Comprehensive kinematic and nonlinear dynamic models of the robot were developed. The dynamics were formulated using Kane’s equations in conjunction with a novel generalized momenta method. Several control strategies were implemented, including linear controllers—Proportional-Integral-Derivative (PID), Pole Placement, and Linear Quadratic Regulator with Integral action (LQRI)—as well as a nonlinear controller based on Sliding Mode Control (SMC). These controllers were tested on a range of paths featuring both sharp and smooth curves. Performance comparisons between different controller and model combinations were conducted based on solver speed and tracking accuracy.
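As a minimal illustration of the linear control side, a discrete PID law can be sketched as follows. The gains, time step, and first-order plant below are illustrative assumptions for a scalar tracking error, not the UOWMR dynamics or the tuned values from the study:

```python
# Minimal discrete PID controller sketch. Gains and the first-order plant
# are illustrative assumptions, not values from the UOWMR study.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        # Accumulate the error and differentiate it numerically.
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def simulate(setpoint=1.0, steps=1000, dt=0.01):
    """Drive a first-order plant x' = -x + u toward the setpoint."""
    pid = PID(kp=4.0, ki=4.0, kd=0.05, dt=dt)
    x = 0.0
    for _ in range(steps):
        u = pid.update(setpoint - x)
        x += (-x + u) * dt  # forward-Euler plant update
    return x

print(simulate())  # expected to settle near the setpoint 1.0
```

The integral term is what removes the steady-state tracking error; the same structure extends to one PID loop per controlled degree of freedom on the robot.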
Universal Omni-wheeled Mobile Robots (UOWMRs) offer superior maneuverability in complex, unstructured environments and can navigate efficiently through tight spaces. However, leveraging these advantages with traditional navigation methods proves challenging due to modeling complexities and environmental uncertainties. Reinforcement Learning (RL) presents a promising solution for path planning in such scenarios, particularly where obstacle avoidance is critical. Despite its potential, the application of RL to omni-directional robots remains limited, primarily due to their high-dimensional action space and complex wheel kinematics. In this study, we present an omni-directional mobile robot system capable of navigating from any initial position in a workspace to a fixed target point while learning to avoid collisions. We adopt a curriculum learning strategy, gradually increasing environmental complexity—from obstacle-free spaces to environments with static and dynamic obstacles—to train a robust and adaptive navigation policy.
Initially, an RL agent was trained in a simple, obstacle-free environment to reach a designated target. The agent’s starting position was randomized at the beginning of each training episode to promote exploration and generalization of the navigation policy. A forward kinematic model of the UOWMR was developed and evaluated using various on-policy and off-policy RL algorithms to identify the most effective learning strategies. Hyperparameters such as learning rate and neural network architecture were systematically tuned through a series of experiments to determine the optimal configuration for obstacle navigation. Subsequently, static and dynamic obstacles were introduced to refine the agent's behavior policies. Performance comparisons between the proposed UOWMR system and a conventional Differential Wheel Drive Robot (DWDR) demonstrated that UOWMRs achieved superior maneuverability in complex environments.
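The forward kinematic model at the heart of the training environment can be sketched for a generic three-wheel omni platform. The geometry below (wheels 120 degrees apart at chassis radius `L`) is an illustrative assumption; the actual UOWMR wheel layout may differ:

```python
import numpy as np

# Forward-kinematics sketch for a three-wheel omni platform.
# Wheel placement (120 deg apart at chassis radius L) is an illustrative
# assumption; the actual UOWMR geometry may differ.

L = 0.15  # chassis radius in metres (assumed)
ANGLES = np.deg2rad([0.0, 120.0, 240.0])

# Inverse kinematics: each row maps the body twist (vx, vy, omega)
# to one wheel's surface speed.
M = np.array([[-np.sin(a), np.cos(a), L] for a in ANGLES])

def body_twist(wheel_speeds):
    """Forward kinematics: recover (vx, vy, omega) from wheel speeds."""
    return np.linalg.solve(M, np.asarray(wheel_speeds, dtype=float))

# Round trip: command a twist, map it to wheels, and recover it.
twist = np.array([0.3, -0.1, 0.5])
wheels = M @ twist
print(body_twist(wheels))  # -> approximately [0.3, -0.1, 0.5]
```

Because the 3x3 matrix is invertible for this layout, any planar twist is reachable, which is what gives omni platforms their holonomic maneuverability.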
Type 1 Diabetes Mellitus is a chronic autoimmune disorder characterized by the destruction of pancreatic β-cells responsible for insulin production, resulting in an absolute insulin deficiency. This deficiency impairs glucose regulation, leading to hyperglycemia and increasing the risk of acute and long-term complications. Consequently, individuals with Type 1 Diabetes require lifelong exogenous insulin therapy.
Continuous glucose monitoring (CGM) has become a critical component in the management of diabetes, offering near real-time insights into blood glucose dynamics. These data facilitate more precise and timely insulin dosing decisions. To advance the predictive capabilities of diabetes management systems, this study investigates the application of various machine learning methodologies—including deep neural networks, deep reinforcement learning, and ensemble regression models—for short-term blood glucose forecasting at a 30-minute prediction horizon.
The objective is to enable anticipatory adjustments to insulin delivery in response to predicted glycemic fluctuations, thereby improving glycemic control and reducing the incidence of hypo- and hyperglycemic events. Model performance was rigorously evaluated using a suite of quantitative metrics and demonstrated robust predictive accuracy across diverse glycemic scenarios.
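The forecasting task can be framed as windowed regression: predict the glucose value 30 minutes ahead (six samples at a typical 5-minute CGM interval) from a window of recent samples. The sketch below uses a synthetic noise-free trace and plain least squares as a stand-in for the study's neural-network, reinforcement-learning, and ensemble models:

```python
import numpy as np

# Windowed-autoregression sketch of 30-min-ahead CGM forecasting.
# The sinusoidal "glucose" trace and the least-squares model are
# illustrative stand-ins for the study's data and learned models.

HIST, HORIZON = 12, 6  # 60-min history window, 30-min-ahead target

def make_dataset(series):
    n = len(series) - HIST - HORIZON
    X = np.array([series[i:i + HIST] for i in range(n)])
    y = np.array([series[i + HIST + HORIZON - 1] for i in range(n)])
    return X, y

t = np.arange(600)
glucose = 120 + 30 * np.sin(2 * np.pi * t / 144)  # synthetic trace (mg/dL)

X, y = make_dataset(glucose)
w, *_ = np.linalg.lstsq(X, y, rcond=None)  # fit linear AR weights

pred = X @ w
rmse = np.sqrt(np.mean((pred - y) ** 2))
print(rmse)  # near zero on this noise-free synthetic trace
```

Real CGM data is noisy and patient-specific, which is why the study compares deep and ensemble learners rather than a single linear model.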
Amputees often face significant challenges in performing everyday activities, yet affordable prosthetic limbs with responsive, sensor-driven functionality are not widely available. This research aims to develop a low-cost, EMG-controlled prosthetic limb with one degree of freedom. The prototype prosthetic leg was designed in SolidWorks, and its components were fabricated on a 3D printer. The assembled lower-limb prosthesis was controlled by an Arduino board interfaced with electromyography (EMG) sensors and stepper motors. Electromyography assesses and records the electrical activity generated by skeletal muscles: signals from the brain trigger muscle cell activity, which produces measurable electrical signals. These EMG signals were processed by the Arduino microcontroller, which was programmed to infer the intended limb angle and movement from the input.
The processed signal was transmitted to a stepper motor via a motor driver, and the prosthetic limb was actuated using linear stepper drives to replicate human limb movement based on sensor readings. To reduce signal noise, a Kalman filter was applied to the EMG data. The developed prosthetic limb successfully mimicked natural movement with one degree of freedom. Since EMG signal thresholds vary from person to person, determining an average threshold value for each individual is essential for accurate performance.
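The Kalman filtering step can be sketched in one dimension with a random-walk signal model. The process and measurement noise variances `q` and `r`, and the constant activation level, are illustrative assumptions rather than the tuned values from the prosthesis project:

```python
import random

# One-dimensional Kalman filter sketch for smoothing a noisy EMG envelope.
# The random-walk signal model and the noise variances q and r are
# illustrative assumptions, not tuned values from the prosthesis project.

def kalman_smooth(measurements, q=1e-4, r=0.05):
    x, p = measurements[0], 1.0  # state estimate and its variance
    out = []
    for z in measurements:
        p += q                   # predict: random-walk process noise
        k = p / (p + r)          # Kalman gain
        x += k * (z - x)         # correct with the new measurement
        p *= (1 - k)
        out.append(x)
    return out

random.seed(0)
true_level = 0.6  # assumed constant muscle-activation level
noisy = [true_level + random.gauss(0, 0.2) for _ in range(300)]
smoothed = kalman_smooth(noisy)
print(smoothed[-1])  # hugs the true level far more closely than raw samples
```

A small `q`/`r` ratio yields heavier smoothing, which suits slowly varying activation levels but adds lag for fast contractions; that trade-off is part of the per-user calibration the text describes.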
An obstacle avoidance system for a quadrotor UAV using an overhead-mounted camera is implemented through image processing techniques. While most current robotic systems rely on various sensors to detect obstacles and navigate safely, this approach eliminates the need for additional onboard sensors by utilizing a single overhead camera. In this method, images captured by the overhead webcam are processed using MATLAB. The images are first converted to grayscale, filtered to remove noise, and then transformed into binary black-and-white images. Objects within the images are identified by detecting their centroids, and the robot is visually marked with a square to distinguish it from other elements.
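The threshold-and-centroid stage can be sketched in Python with numpy (the study used MATLAB); the synthetic frame and threshold value below are illustrative assumptions:

```python
import numpy as np

# Sketch of the overhead-camera pipeline: threshold a grayscale frame to a
# binary image, then locate an object by the centroid of its pixels.
# The synthetic frame and threshold value are illustrative assumptions.

def binarize(gray, threshold=128):
    """Binary image: True where the pixel is brighter than the threshold."""
    return gray > threshold

def centroid(binary):
    """(row, col) centre of mass of the foreground pixels."""
    rows, cols = np.nonzero(binary)
    return rows.mean(), cols.mean()

# Synthetic 100x100 frame: dark background with one bright square "object".
frame = np.zeros((100, 100), dtype=np.uint8)
frame[40:50, 60:70] = 255

r, c = centroid(binarize(frame))
print(r, c)  # -> 44.5 64.5
```

In the real pipeline a denoising filter precedes the threshold, and each connected object gets its own centroid; this sketch shows the single-object case.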
The pixel coordinates of each detected object, along with their respective areas, feed the navigation algorithm. Navigation is guided by a depth-first search (DFS) over the binary image matrix, which traverses from the robot's location toward a designated target. The robot navigated safely using only the overhead camera, without relying on any additional onboard sensors. For quadrotor UAVs, the same algorithm was simulated using the Robot Operating System (ROS) and the DroneKit simulation platform, both running on Ubuntu; a Python script linked MATLAB with the simulation environment. The study concludes that this method is effective for obstacle avoidance in mobile robots within a limited operating range and can be adapted to quadrotors using the same parameters as for ground-based robots.
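The DFS traversal can be sketched on a binary occupancy grid; the grid and the start and goal cells below are illustrative, while in the study the search ran over the binary image derived from the overhead camera:

```python
# DFS-based navigation sketch on a binary occupancy grid
# (0 = free, 1 = obstacle). Grid and endpoints are illustrative.

def dfs_path(grid, start, goal):
    """Depth-first search returning one obstacle-free path, or None."""
    rows, cols = len(grid), len(grid[0])
    stack, parent = [start], {start: None}
    while stack:
        cell = stack.pop()
        if cell == goal:
            path = []
            while cell is not None:  # walk parents back to the start
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in parent):
                parent[(nr, nc)] = cell
                stack.append((nr, nc))
    return None  # no obstacle-free path exists

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
path = dfs_path(grid, (0, 0), (3, 3))
print(path[0], path[-1])  # -> (0, 0) (3, 3)
```

DFS returns some free path rather than the shortest one; swapping the stack for a queue (BFS) would give shortest paths on the same grid at similar cost.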
In today’s world, automation and robotics are in high demand due to their direct impact on the rapid advancement of various industries. Additionally, product quality and flexibility have become key requirements. Robots offer an effective solution to reduce labor costs and meet increasing customer demands. Automation is particularly valuable for replacing human labor in performing complex tasks in hazardous or demanding environments.
Pick-and-place operations, commonly required in manufacturing, can be automated to improve efficiency. This study focuses on developing a system that can sort and place objects based on their shape and color using image processing techniques. The primary goal is to offer a practical solution for manufacturing processes that require sorting by shape, color, or both.
In this project, a low-cost robotic manipulator was developed from scratch. It features five degrees of freedom (5 DOF), is controlled by servo motors, and was fabricated using CNC machining. An electromagnetic end effector is used for handling objects. Shape and color detection algorithms were implemented using MATLAB 2016, with input from a high-quality USB webcam.
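The shape-and-color classification idea can be sketched in Python with numpy (the study implemented it in MATLAB). Here color comes from the dominant RGB channel of the object's pixels and shape from the fill ratio of its bounding box (a square fills roughly 1.0 of its box, a disc roughly pi/4); the 0.9 threshold is an assumption:

```python
import numpy as np

# Hedged Python analogue of the MATLAB shape/color classification.
# Color: dominant RGB channel. Shape: bounding-box fill ratio
# (square ~1.0, disc ~pi/4). The 0.9 cut-off is an assumed threshold.

def classify(mask, rgb_mean):
    color = ("red", "green", "blue")[int(np.argmax(rgb_mean))]
    rows, cols = np.nonzero(mask)
    box_area = (rows.max() - rows.min() + 1) * (cols.max() - cols.min() + 1)
    fill = mask.sum() / box_area
    shape = "square" if fill > 0.9 else "circle"
    return shape, color

# Synthetic object: a filled disc of radius 20 with a red mean colour.
yy, xx = np.mgrid[:64, :64]
disc = (yy - 32) ** 2 + (xx - 32) ** 2 <= 20 ** 2
print(classify(disc, rgb_mean=(200, 30, 30)))  # -> ('circle', 'red')
```

The classification result then selects the drop-off location, which MATLAB communicates to the Arduino over the serial link.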
The robotic arm is controlled via an Arduino Mega board, which communicates with MATLAB over a serial connection. Due to the computational requirements of the algorithms, processing was performed on a computer. A graphical user interface (GUI) was also developed to allow for customization of the system's functions. The system successfully detected and sorted objects based on shape and color using the custom-built robotic manipulator.
Road traffic accidents result in a high number of deaths and injuries in Sri Lanka, with driver drowsiness being a major contributing factor. Detecting driver fatigue through drowsiness monitoring is one of the most reliable methods to prevent such incidents. This research presents a "Real-Time Drowsiness Detection System" designed to assess the driver's alertness by monitoring eye activity. The system tracks the driver’s eye blinks and determines their level of drowsiness based on the duration the eyes remain closed. If the eyes stay closed beyond a specified threshold, the system identifies the driver as drowsy and triggers an alarm. The detection process employs the Viola-Jones algorithm along with the Hough Transform for iris detection. The system emphasizes fast and efficient data processing to ensure timely and accurate detection.
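The eye-closure timing logic can be sketched as a scan over per-frame eye states (the Viola-Jones and Hough Transform detection stages are not reproduced here). The 1.5 s closure threshold and 10 fps frame rate are illustrative assumptions, not the study's tuned values:

```python
# Sketch of the drowsiness timing logic: scan a per-frame stream of eye
# states (True = eyes closed) and flag the driver once the eyes stay
# closed longer than a threshold. FPS and CLOSED_LIMIT are assumptions.

FPS = 10
CLOSED_LIMIT = 1.5  # seconds of continuous closure that counts as drowsy

def is_drowsy(eye_closed_frames):
    """True if any run of closed-eye frames exceeds the time threshold."""
    run = 0
    for closed in eye_closed_frames:
        run = run + 1 if closed else 0
        if run / FPS > CLOSED_LIMIT:
            return True  # trigger the alarm
    return False

blink = [True] * 3          # a 0.3 s blink: normal
long_closure = [True] * 20  # a 2.0 s closure: drowsy
print(is_drowsy(blink), is_drowsy(long_closure))  # -> False True
```

Distinguishing normal blinks from sustained closure by run length is what keeps the alarm from firing on every blink.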