Current author list:
Marios Xanthidis, Michail Kalaitzakis, Nare Karapetyan, Alex Johnson, Nikolaos Vitzilaios, Jason M. O'Kane, and Ioannis Rekleitis.
Improvements in computing power, path-optimization techniques, computer vision, and drone accessibility have introduced the interesting problem of planning under a visual objective. Recent works have focused mostly on inspection or object tracking, and they are largely limited to a single objective at a time, incorporating geometric visibility constraints directly into the controls. Although these works may be more than sufficient for tracking or inspecting a relatively small object, they cannot be directly applied to mapping and exploration, due to the tight coupling between the visibility constraints and the controls.
This work introduces a navigation framework called AquaVis that produces visibility-aware motion plans for Autonomous Underwater Vehicles (AUVs). Typical approaches to autonomous navigation solve the problems of state estimation and motion planning separately. In the underwater domain, visual features that enable effective state estimation can be sparse or even absent in parts of the environment. Thus, motion plans for AUVs should account for the need to keep those features visible throughout their execution. The proposed method produces motions enabling AUVs to efficiently reach their goals while avoiding obstacles safely and maximizing the visibility of multiple objectives along the path within a specified proximity. The method is sufficiently fast to be executed in real-time and is suitable for single or multiple camera configurations. Testing of the proposed method utilizing the Aqua2 underwater robot in simulation shows the significant improvement on tracking multiple points of interest, with low computational overhead and fast replanning times.
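The kind of objective optimized here can be illustrated with a toy cost function: path length plus a penalty for every waypoint from which a nearby feature is not visible. All names, weights, and the simple 2D camera model below are illustrative assumptions, not the actual AquaVis formulation.

```python
import math

# Hypothetical visibility-aware path cost; the field of view, range, and
# penalty weight are assumed values for illustration only.

def visible(pose, feature, fov=math.radians(80), max_range=3.0):
    """Check whether a 2D feature lies inside the camera's view cone."""
    x, y, heading = pose
    dx, dy = feature[0] - x, feature[1] - y
    if math.hypot(dx, dy) > max_range:
        return False
    bearing = math.atan2(dy, dx) - heading
    bearing = math.atan2(math.sin(bearing), math.cos(bearing))  # wrap to [-pi, pi]
    return abs(bearing) <= fov / 2

def path_cost(poses, features, w_vis=1.0):
    """Path length plus a penalty for every (pose, feature) pair left unseen."""
    length = sum(math.hypot(b[0] - a[0], b[1] - a[1])
                 for a, b in zip(poses, poses[1:]))
    unseen = sum(1 for p in poses for f in features if not visible(p, f))
    return length + w_vis * unseen

# A path whose headings keep a feature ahead in view scores better than the
# same path looking the other way.
features = [(3.0, 0.0)]
facing = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]      # heading +x
averted = [(0.0, 0.0, math.pi), (1.0, 0.0, math.pi), (2.0, 0.0, math.pi)]
```

A planner could minimize this cost over candidate trajectories, trading a slightly longer path for keeping features in view.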
Reference:
Marios Xanthidis, Nare Karapetyan, Hunter Damron, Sharmin Rahman, James Johnson, Allison O’Connell, Jason M O’Kane, Ioannis Rekleitis. "Navigation in the presence of obstacles for an agile autonomous underwater vehicle." In International Conference on Robotics and Automation (ICRA), pp. 892-899. IEEE, 2020.
For the last 200,000 years our species has lived on a planet 70% covered by sea, yet only 5% of the oceans has been explored; underwater robots may be the best chance we have. Such robots should be fully autonomous, with safety guarantees and robust behavior in cluttered, or even hostile, environments.
Underwater navigation is traditionally done by keeping a safe distance from obstacles, resulting in "fly-overs" of the area of interest. Moving an autonomous underwater vehicle (AUV) through a cluttered space, such as a shipwreck or a decorated cave, is an extremely challenging problem that has not been addressed in the past.
We propose a novel navigation framework utilizing an enhanced version of Trajopt for fast 3D path-optimization planning for AUVs. A sampling-based correction procedure ensures that planning is not trapped in local minima, enabling navigation through narrow spaces. Two different modalities are proposed: planning with a known map results in efficient trajectories through cluttered spaces; operating in an unknown environment utilizes the point cloud of detected visual features to navigate efficiently while avoiding the detected obstacles. The proposed approach is rigorously tested, both in simulation and in in-pool experiments, and proven fast enough to enable safe real-time 3D autonomous navigation for an AUV.
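The combination of local path optimization with a sampling-based escape from local minima can be sketched in one dimension. The cost model, step sizes, and restart policy below are hypothetical stand-ins for the actual Trajopt-based machinery:

```python
import random

# Toy sketch: locally optimize a 1D waypoint against a goal-plus-obstacle
# cost, and when the optimizer stalls in a local minimum behind the obstacle,
# sample random restarts. All numbers here are illustrative assumptions.

def cost(x, obstacles, goal=5.0):
    penalty = sum(max(0.0, 1.0 - abs(x - o)) ** 2 for o in obstacles)
    return (x - goal) ** 2 + 10.0 * penalty

def local_opt(x, obstacles, step=0.01, iters=2000):
    """Greedy coordinate descent; can get trapped in front of an obstacle."""
    for _ in range(iters):
        best = min((x - step, x, x + step), key=lambda v: cost(v, obstacles))
        if best == x:
            break
        x = best
    return x

def plan(x0, obstacles, tol=0.5, restarts=20):
    """Sampling-based correction: restart from random seeds while stuck."""
    random.seed(1)
    x = local_opt(x0, obstacles)
    for _ in range(restarts):
        if cost(x, obstacles) < tol:
            break
        candidate = local_opt(random.uniform(-10, 10), obstacles)
        if cost(candidate, obstacles) < cost(x, obstacles):
            x = candidate
    return x

trapped = local_opt(0.0, [2.0])   # stalls before the obstacle at x = 2
solution = plan(0.0, [2.0])       # restarts reach the goal basin near x = 5
```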
Reference:
Marios Xanthidis, Joel Esposito*, Ioannis Rekleitis, and Jason M. O'Kane. "Motion Planning by Sampling in Subspaces of Progressively Increasing Dimension." Journal of Intelligent & Robotic Systems, pp. 1-13, Springer, 2020.
*Weapons and Systems Engineering Department, United States Naval Academy
In a world of progressively better computers, materials, and algorithms, but also new challenging problems, complex redundant systems with many Degrees of Freedom (DOFs), such as redundant manipulators, mobile manipulators, heterogeneous groups of robots, or humanoids, are becoming increasingly accessible and realistic. Especially given that the Holy Grail of robotics is a robot fully capable of successfully carrying out most tasks executed by a human (a 244-DOF mobile manipulator), highly actuated robots become a future necessity.
For such high-dimensional systems, motion planning takes place in a configuration space of extremely large volume (even calculated as infinite for some systems); even very fast state-of-the-art sampling-based motion planners compute a solution with a prohibitive delay for real-time applications, assuming such a solution is found at all.
We introduced RRT+, a new enhancement to the sampler of RRTs that accelerates the production of a solution for such systems. We show that the technique can easily be applied to a wide variety of sampling-based motion planners and, although simple, provides results superior by orders of magnitude to both the corresponding planners before the enhancement and the fastest state-of-the-art competing planners, such as KPIECE and STRIDE. The idea is to apply virtual linear constraints between the DOFs, restrict sampling to lower-dimensional subspaces of the configuration space in which a solution might exist, and then progressively increase the dimensionality until a solution is found.
Extensive experiments demonstrate the superiority of the method, and real-time results were produced for challenging problems on the 14-DOF Baxter humanoid and a 50-DOF kinematic chain.
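The core subspace-sampling idea can be sketched as follows. The toy "planner" is plain rejection sampling standing in for a real RRT expansion step, and all bounds and parameters are illustrative assumptions:

```python
import random

# Sketch of progressive subspace sampling: sample only the first k DOFs,
# keeping the rest fixed at their start values, and grow k until a valid
# goal configuration appears.

def sample_in_subspace(start, k, low=-1.0, high=1.0):
    """Sample the first k DOFs uniformly; clamp the rest to the start pose."""
    q = list(start)
    for i in range(k):
        q[i] = random.uniform(low, high)
    return q

def plan(start, is_goal, dofs, tries_per_dim=200):
    """Progressively increase the subspace dimension until a goal sample is found."""
    for k in range(1, dofs + 1):
        for _ in range(tries_per_dim):
            q = sample_in_subspace(start, k)
            if is_goal(q):
                return q, k
    return None, dofs

# Toy problem: a 6-DOF chain whose goal only constrains the first two DOFs,
# so the planner should succeed inside a low-dimensional subspace.
random.seed(0)
start = [0.0] * 6
goal_test = lambda q: abs(q[0] - 0.5) < 0.3 and abs(q[1] + 0.5) < 0.3
solution, dim_used = plan(start, goal_test, dofs=6)
```

When the goal happens to live in a low-dimensional slice of the configuration space, most samples in the full 6D space would be wasted; restricting sampling to the slice is what yields the speedups.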
Reference:
Marios Xanthidis, Kostantinos J. Kyriakopoulos, and Ioannis Rekleitis. "Dynamically efficient kinematics for hyper-redundant manipulators." 24th Mediterranean Conference on Control and Automation (MED). IEEE, 2016.
Manipulators are one of the dominant classes of robots, able to physically alter their environment. Hyper-redundant manipulators are a special class of manipulators with unique kinematic abilities, ranging from being just more complex than an anthropomorphic manipulator to a robotic tentacle with even infinite Degrees of Freedom (DOFs). Due to structural issues arising from the weight of such systems, they may not be applicable in settings where gravity is a dominant force, but real applications exist underwater and in space.
Increasing the DOFs of a manipulator increases its manipulability, but at the same time it increases the computational load of calculating the forward and inverse kinematics. This increased load, especially on operations called many times by motion planners, discourages efforts to extend real-life applications of such systems.
We proposed a novel formulation of the forward and inverse kinematics, so that their calculation for highly actuated manipulators can be reduced exponentially with respect to the number of DOFs, by adding virtual constraints that impose homogeneous behaviors reducible to single transformation matrices.
Our approach drastically reduces the computational load, gradually increases the manipulability, and is also fault tolerant to broken (immobile) joints.
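One way to picture the reduction (an illustrative interpretation, not the paper's exact formulation): if a virtual constraint forces a group of identical links to share one joint angle, the group's forward kinematics collapses to a single transform raised to a power, which exponentiation by squaring evaluates in O(log n) matrix multiplications:

```python
import math

# Illustrative 2D sketch. link_transform and the 8-link chain below are
# assumed examples, not the paper's manipulator model.

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def link_transform(theta, length):
    """2D homogeneous transform of one link: rotate by theta, translate along x."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, length * c],
            [s,  c, length * s],
            [0,  0, 1]]

def mat_pow(t, n):
    """Exponentiation by squaring: O(log n) multiplies instead of n - 1."""
    result = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
    while n:
        if n & 1:
            result = mat_mul(result, t)
        t = mat_mul(t, t)
        n >>= 1
    return result

# Forward kinematics of 8 identical links, each bent by the same 10 degrees.
t = link_transform(math.radians(10.0), 1.0)
fast = mat_pow(t, 8)

# Naive chain multiplication gives the same end-effector pose in 7 multiplies.
naive = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
for _ in range(8):
    naive = mat_mul(naive, t)
```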
Reference:
B. Joshi, M. Modasshir, T. Manderson, H. Damron, M. Xanthidis, A. Quattrini Li, I. Rekleitis, G. Dudek. DeepURL: Deep Pose Estimation Framework for Underwater Relative Localization. In International Conference on Intelligent Robots and Systems (IROS). IEEE, 2020.
In this work we propose a real-time deep learning approach for determining the 6D relative pose of Autonomous Underwater Vehicles (AUV) from a single image. A team of autonomous robots localizing themselves in a communication-constrained underwater environment is essential for many applications such as underwater exploration, mapping, multi-robot convoying, and other multi-robot tasks. Due to the profound difficulty of collecting ground truth images with accurate 6D poses underwater, this work utilizes rendered images from the Unreal Game Engine simulation for training. An image-to-image translation network is employed to bridge the gap between the rendered and the real images, producing synthetic images for training. The proposed method predicts the 6D pose of an AUV from a single image as 2D image keypoints representing the 8 corners of the 3D model of the AUV, and then the 6D pose in the camera coordinates is determined using RANSAC-based PnP. Experimental results in real-world underwater environments (swimming pool and ocean) with different cameras demonstrate the robustness and accuracy of the proposed technique in terms of translation error and orientation error over the state-of-the-art methods.
Reference:
Li, A.Q., Coskun, A., Doherty, S.M., Ghasemlou, S., Jagtap, A.S., Modasshir, M., Rahman, S., Singh, A., Xanthidis, M., O'Kane, J.M. and Rekleitis, I., 2016, September. Vision-based shipwreck mapping: on evaluating features quality and open source state estimation packages. In OCEANS 2016 MTS/IEEE Monterey (pp. 1-10). IEEE.
Mapping shipwrecks and other underwater man-made structures is essential for historical purposes and for inspection to prevent or analyze disasters. Mapping 3-D structures efficiently and with guarantees is still an open problem, and it becomes far more challenging underwater due to the poor quality of the features used for state estimation and reconstruction.
In this work, we made a comprehensive analysis of different state-of-the-art open-source SLAM packages, along with different feature detectors and descriptors, to provide a handbook for other researchers who work in the underwater domain and want to make the most robust implementation choices for their applications. We provide comparison results for indoor, outdoor, and underwater domains (shipwrecks and coral reefs), along with a discussion of different directions that could be investigated so that underwater SLAM can become more robust and readily applicable with less parameter tuning.
Reference:
Li, A.Q., Coskun, A., Doherty, S.M., Ghasemlou, S., Jagtap, A.S., Modasshir, M., Rahman, S., Singh, A., Xanthidis, M., O'Kane, J.M. and Rekleitis, I. Experimental comparison of open source vision-based state estimation algorithms. In International Symposium on Experimental Robotics, pp. 775-786. Springer, Cham, 2016.
Among the most popular and challenging problems in robotics, state estimation can easily be considered a dominant one, not only because it solves a fundamental problem for every robot, but also because all the other fundamental problems (such as motion planning) expect knowledge of the robot's state as input.
There has been a lot of work in the field over the past decades, and new SLAM packages are introduced almost every year, building upon older ones or taking completely novel approaches. Interestingly, there is a paradox: new packages are introduced very often, claiming to solve the state estimation problem on specific datasets (mostly confined to feature-rich labs with posters and colorful carpets), and the absence of analysis of failure conditions and cases gives the impression that SLAM is a solved problem, which is far from true.
Localizing and tracking a robot while mapping the environment with cameras alone in real scenarios is still an open problem, and our study attempts to show exactly this fact. We compared popular state-of-the-art SLAM packages on challenging indoor, outdoor, and underwater datasets we collected with our robots, to produce a comprehensive analysis of failure cases and of challenges not yet addressed in the real environments where robots are expected to operate, and to determine which packages perform best in the different domains.
Our work aims to serve as a guide for future researchers who want to quickly determine which packages to focus on in order to achieve acceptable performance under the conditions of their application.
Reference:
Xanthidis, Marios, Alberto Quattrini Li, and Ioannis Rekleitis. "Shallow coral reef surveying by inexpensive drifters." In OCEANS 2016-Shanghai, pp. 1-9. IEEE, 2016.
Coral reefs are an important part of every underwater ecosystem, essential for hosting diverse ecosystems but also for protecting coastlines from the worst effects of wave action and tropical storms. Due to climate change and sea pollution, oceanographers are increasingly interested in monitoring such systems, and robots could certainly assist these efforts.
Robotic systems, though, are expensive and complicated. Therefore, simple and inexpensive solutions should be preferred, especially given that monitoring needs to happen over the vast area of the sea floor.
We proposed the use of a very effective and inexpensive passive system for monitoring coral reefs in shallow waters, called Driftnodes: a floating tube device with a downward-facing camera, an IMU, and a GPS. The Driftnodes record data (images, position, and orientation) that we can later automatically process to reconstruct a mosaic of the ocean floor by properly aligning the collected images.
Interestingly, by passively drifting with the water currents the Driftnodes are able to cover large areas, and their wavy oscillation, which at first glance would seem to reduce the quality of the photos, proved to let them cover a larger area than a static downward-facing camera would.
Reference:
Li, Alberto Quattrini, Marios Xanthidis, Jason M. O'Kane, and Ioannis Rekleitis. "Active localization with dynamic obstacles." In 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1902-1909. IEEE, 2016.
Localizing a robot with a laser sensor in a known map is a well-studied and still notoriously hard problem that service robots should be expected to solve efficiently. The problem becomes far more challenging when dynamic obstacles (mostly humans) are present, as expected in real scenarios, since there is no straightforward way to classify each measurement as coming from a static or a dynamic obstacle.
We introduced a new method for localizing a robot with a laser sensor in a dynamic environment, utilizing a particle filter and a conservative policy for discarding particles whose expected measurements disagree with the real ones. After clustering to label the different distributions, the robot then picks positions that would discard the largest number of hypotheses, so that by the end only one distribution, representing the real position of the robot, remains.
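A minimal 1D sketch of one plausible form of such a conservative update (the map, threshold, and poses are all hypothetical): a particle survives as long as a dynamic obstacle could explain the reading, i.e. its predicted static-map range is at least the measured one, since a shorter-than-expected reading can always be blamed on a person blocking the beam.

```python
# Hypothetical 1D corridor with a single known wall at x = 10; this is an
# illustrative reading of a conservative discard rule, not the paper's code.
WALL = 10.0

def expected_range(x):
    """Range a particle at pose x predicts from the static map."""
    return WALL - x

def conservative_update(particles, measured, slack=0.3):
    """Keep particles whose expected range is >= the measured one (minus slack)."""
    return [p for p in particles if expected_range(p) >= measured - slack]

# Robot truly at x = 2 (expected range 8.0), but a person at x = 6 shortens
# the reading to 4.0. Particles near the true pose survive; particles that
# predict a range shorter than the beam actually traveled are discarded.
particles = [0.0, 2.0, 4.0, 7.0, 9.0]
survivors = conservative_update(particles, measured=4.0)
```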
Experiments, both in simulation and in real indoor environments with 12 TurtleBot2 robots, showed that with our method the robot was able to localize in a large map cluttered with dynamic obstacles.
Reference:
Joshi, B., Rahman, S., Kalaitzakis, M., Cain, B., Johnson, J., Xanthidis, M., Karapetyan, N., Hernandez, A., Li, A.Q., Vitzilaios, N. and Rekleitis, I., Experimental Comparison of Open Source Visual-Inertial-Based State Estimation Algorithms in the Underwater Domain. In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2019.
Simultaneously Localizing a robot And Mapping the environment (SLAM) is generally an open problem, although a few packages are able to solve it under specific conditions. Solving the same problem underwater, in an environment with:
natural color-loss,
a lot of featureless homogeneous surfaces,
many dynamic objects, such as fish, or suspended particles, such as plankton, and
abnormal illumination patterns (in comparison to our everyday experience)
is a much harder problem that pushes the limits of well-established state-of-the-art computer vision methods.
In our study, we investigated the above intuitive statement experimentally, in real challenging conditions, also adding an IMU to measure the improvement. In more detail, at least 15 different packages (monocular, stereo, and visual-inertial) were tested both on well-established popular datasets and on nine datasets we collected with our underwater robots and sensors.
The goal of our study is to serve as a guide for future researchers interested in applications in the underwater domain, helping them pick the package best suited for each case. As expected, the use of an IMU with stereo cameras showed the best performance.