1. Mondal, Md Safwan, Subramanian Ramasamy, and Pranav Bhounsule. "Risk-Aware Energy-Constrained UAV-UGV Cooperative Routing using Attention-Guided Reinforcement Learning." 2025 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2025 (accepted).
Maximizing the endurance of unmanned aerial vehicles (UAVs) in large-scale monitoring missions requires addressing their limited battery capacity. Deploying unmanned ground vehicles (UGVs) as mobile recharging stations offers a practical solution that extends the UAVs' operational range, but it introduces the challenge of optimizing UAV-UGV routes for efficient mission point coverage and seamless recharging coordination. In this paper, we present a risk-aware deep reinforcement learning (Ra-DRL) framework with a multi-head attention mechanism within an encoder-decoder transformer architecture to solve this cooperative routing problem. Our model minimizes mission time while accounting for the stochastic fuel consumption of the UAV, influenced by environmental factors such as wind velocity, and enforces a risk threshold to avoid mid-mission energy depletion. Extensive evaluations on various problem sizes show that our method significantly outperforms nearest-neighbor heuristics in both solution quality and risk management. We validate the Ra-DRL policy in a Gazebo-ROS SITL environment with a PX4-based custom UAV and a Clearpath Husky UGV. The results demonstrate the robustness and adaptability of our policy, making it highly effective for mission planning in dynamic, uncertain scenarios.
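To illustrate the kind of mechanism the abstract describes, the sketch below shows one decoding step of an attention-based routing policy: a context query (current UAV state) attends over encoder embeddings of mission and recharge nodes, and nodes whose estimated probability of mid-mission energy depletion exceeds a risk threshold are masked out before the next visit is sampled. This is a minimal illustrative sketch, not the authors' implementation; the class name, shapes, and the per-node depletion-probability input are assumptions.

# Minimal sketch (assumed names/shapes), not the published Ra-DRL code.
import torch
import torch.nn as nn

class AttentionRoutingDecoder(nn.Module):
    def __init__(self, embed_dim: int = 128, num_heads: int = 8):
        super().__init__()
        self.mha = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.score = nn.Linear(embed_dim, 1)  # per-node compatibility logit

    def forward(self, context, node_embeddings, depletion_prob, risk_threshold=0.1):
        # context: (B, 1, D) query built from the current UAV state (position, fuel, time)
        # node_embeddings: (B, N, D) encoder outputs for mission/recharge nodes
        # depletion_prob: (B, N) estimated chance of energy depletion if that node is visited next
        glimpse, _ = self.mha(context, node_embeddings, node_embeddings)
        logits = self.score(glimpse + node_embeddings).squeeze(-1)  # (B, N)
        # Risk-aware mask: forbid nodes that would violate the risk threshold.
        logits = logits.masked_fill(depletion_prob > risk_threshold, float("-inf"))
        return torch.softmax(logits, dim=-1)  # probability of visiting each node next

# Toy usage: 2 instances, 5 candidate nodes, 128-dim embeddings.
decoder = AttentionRoutingDecoder()
ctx = torch.randn(2, 1, 128)
nodes = torch.randn(2, 5, 128)
risk = torch.tensor([[0.05, 0.30, 0.10, 0.40, 0.02],
                     [0.01, 0.01, 0.50, 0.20, 0.10]])  # each row keeps feasible nodes
probs = decoder(ctx, nodes, risk, risk_threshold=0.25)
print(probs.shape)  # torch.Size([2, 5])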
2. Mondal, Md Safwan, Subramanian Ramasamy, and Pranav Bhounsule. "An Attention-aware Deep Reinforcement Learning Framework for UAV-UGV Collaborative Route Planning." 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2024.
Unmanned aerial vehicles (UAVs) are effective for large-scale surveying but are constrained by limited battery capacity. To extend UAV endurance, we use unmanned ground vehicles (UGVs) as mobile recharging stations and introduce a deep reinforcement learning (DRL) framework with multi-head attention layers. This method dynamically determines optimal routes and recharging points for both UAVs and UGVs, surpassing existing heuristic and learning-based approaches in solution quality and runtime efficiency while adapting to real-time mission changes.
3. Mondal, Md Safwan, et al. "Optimizing Fuel-Constrained UAV-UGV Routes for Large Scale Coverage: Bilevel Planning in Heterogeneous Multi-Agent Systems." 2023 International Symposium on Multi-Robot and Multi-Agent Systems (MRS). IEEE, 2023.
UAVs are well-suited for large-scale surveying but are limited by battery capacity. To extend their operational range, we propose a deep reinforcement learning (DRL) framework with multi-head attention layers that coordinates UAV-UGV routing and recharging strategies. This approach optimizes both vehicle routes and recharging points, outperforming heuristic and other learning-based methods in efficiency and solution quality, while dynamically adapting to mission changes in real time.
4. Mondal, Md Safwan, et al. "A Robust UAV-UGV Collaborative Framework for Persistent Surveillance in Disaster Management Applications." 2024 International Conference on Unmanned Aircraft Systems (ICUAS). IEEE, 2024.
This paper presents a multi-agent framework that leverages asynchronous planning to optimize the collaborative routing of UAVs and UGVs, addressing challenges such as fuel constraints, speed differences, and recharging needs. The methodology is applied to a persistent surveillance task, demonstrating scalability and effectiveness across various team configurations. Simulation results highlight significant improvements in route efficiency, making the framework a valuable tool for optimizing disaster response strategies.