Principal Investigator
Co-PI
Dr. Jaime Pulido Fentanes
Co-PI (Industrial)
Advisory board
working on multi-sensor perception and mapping
Ahmed Abbas
working on robot localisation
Christopher Wild
working on terrain assessment
Jiahui Liu
working on anomalous object detection
Weihang Zhang
working on visual-based navigation
Zhicheng Zhou
working on lidar-based global localisation
Paper: [PDF]
We have tested our mapping, localisation, and navigation system in a Mars-like environment (West Wittering beach).
In this video, the multi-sensor mapper and NDT-MCL localisation are tested on the Sheffield campus. Failure cases were detected and fixed, and the system is ready for the field test.
Quarterly Deliverable - Q4 [PDF]
Annual review demo videos have been delivered to the FAIR-SPACE Hub.
We have integrated the NDT map representation and NDT-MCL (Normal Distributions Transform Monte Carlo Localisation), with a submap implementation, on the Jaguar platform. The robot is able to re-localise (in 6-DOF) within a previously built map robustly and smoothly in large-scale unstructured environments.
Task 2.2 and the major part of Task 2.3 (NDT map implementation, NDT-MCL integration) are completed. Note that semantic mapping in Task 2.3 is still under development.
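To make the pipeline concrete, below is a minimal, illustrative sketch of an NDT-MCL update cycle in Python. It is not the code integrated on the Jaguar platform: the cell size, noise parameters, and the 2D (x, y, yaw) state are simplifications of the 6-DOF, submap-based system described above.

```python
# Illustrative NDT-MCL sketch: map cells are Gaussians fitted to mapped points,
# particles are weighted by how well the current scan matches those Gaussians.
import numpy as np

class NDTCell:
    """One map cell: a Gaussian fitted to the mapped points falling in it."""
    def __init__(self, points):
        self.mean = points.mean(axis=0)
        self.cov = np.cov(points.T) + 1e-3 * np.eye(2)   # regularised so it is invertible
        self.cov_inv = np.linalg.inv(self.cov)

def build_ndt_map(map_points, cell_size=1.0):
    """Group previously mapped 2D points into cells and fit one Gaussian per cell."""
    cells, keys = {}, np.floor(map_points / cell_size).astype(int)
    for key in set(map(tuple, keys)):
        pts = map_points[(keys == np.array(key)).all(axis=1)]
        if len(pts) >= 5:                                 # need enough points for a covariance
            cells[key] = NDTCell(pts)
    return cells

def scan_likelihood(pose, scan, cells, cell_size=1.0):
    """Score a scan (N x 2, robot frame) against the NDT map for a candidate pose."""
    x, y, yaw = pose
    R = np.array([[np.cos(yaw), -np.sin(yaw)], [np.sin(yaw), np.cos(yaw)]])
    world = scan @ R.T + np.array([x, y])
    score = 0.0
    for p in world:
        cell = cells.get(tuple(np.floor(p / cell_size).astype(int)))
        if cell is not None:
            d = p - cell.mean
            score += np.exp(-0.5 * d @ cell.cov_inv @ d)
    return score

def ndt_mcl_step(particles, odom_delta, scan, cells, rng):
    """One predict-update-resample cycle; odom_delta is assumed in the map frame."""
    dx, dy, dyaw = odom_delta
    for i, (x, y, yaw) in enumerate(particles):
        nx, ny, nyaw = rng.normal(0.0, [0.05, 0.05, 0.01])        # motion noise
        particles[i] = (x + dx + nx, y + dy + ny, yaw + dyaw + nyaw)
    weights = np.array([scan_likelihood(p, scan, cells) for p in particles]) + 1e-12
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return [particles[i] for i in idx]
```

A caller would build the NDT map once from the previously mapped point cloud and then run ndt_mcl_step for every new scan and odometry increment; the pose estimate is the mean of the surviving particles.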
Quarterly Deliverable - Q3 [PDF]
We have released the navigation stack for long-term visual navigation. Our approach is able to operate from day to night using a single map and can repeat complex manoeuvres.
Github: https://github.com/kevinlisun/jaguar_nav
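As a rough illustration of the appearance-only teach-and-repeat idea behind the stack, the sketch below matches ORB features between the live image and a taught keyframe and steers to cancel the horizontal pixel offset. The function names, the ORB/brute-force matcher choice, and the gain and sign conventions are assumptions for illustration only; the released jaguar_nav repository is the authoritative implementation.

```python
# Hedged sketch of appearance-only teach-and-repeat steering (illustrative only).
import cv2
import numpy as np

orb = cv2.ORB_create(1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def make_keyframe(image):
    """Teach phase: keep keypoints and descriptors for one keyframe on the taught route."""
    return orb.detectAndCompute(image, None)

def steering_correction(live_image, keyframe, gain=0.005):
    """Repeat phase: angular correction from matching the live image to the keyframe."""
    kf_keypoints, kf_descriptors = keyframe
    live_keypoints, live_descriptors = orb.detectAndCompute(live_image, None)
    if kf_descriptors is None or live_descriptors is None:
        return 0.0                                     # no features: keep the current heading
    matches = matcher.match(kf_descriptors, live_descriptors)
    if not matches:
        return 0.0
    # Median horizontal offset between where features were taught and where they appear now.
    offsets = [live_keypoints[m.trainIdx].pt[0] - kf_keypoints[m.queryIdx].pt[0]
               for m in matches]
    # Steer to drive the offset to zero; the sign depends on the camera/robot frame conventions.
    return -gain * float(np.median(offsets))
```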
Tasks 1.1, 1.2, and 2.1 (multi-sensor installation, calibration, and multi-sensor mapping) are completed.
Quarterly Deliverable - Q2 [PDF]
Visual-based navigation on various terrains. The second field test at Longshaw National Trust, Sheffield, UK.
Visual-based navigation in unstructured environments. The first field test at Tankersley, Sheffield, UK.
The low-level control and teleoperation are implemented. The visual navigation stack is integrated with the ZED camera and the Jaguar robot. Autonomous visual teach-and-repeat (monocular and appearance-only) is demonstrated at Roman Wharf, Lincoln.
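The low-level teleoperation interface can be exercised with a short velocity-command publisher such as the sketch below, assuming a standard ROS 1 setup in which the Jaguar base subscribes to a geometry_msgs/Twist topic; the /cmd_vel topic name and the speed values are assumptions, not the project's actual configuration.

```python
# Minimal teleoperation sketch for a ROS 1 Twist-controlled base (assumed setup).
import rospy
from geometry_msgs.msg import Twist

def teleop(linear_x, angular_z, rate_hz=10):
    """Publish a constant velocity command until the node is shut down."""
    rospy.init_node('jaguar_teleop_sketch')
    pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)   # topic name is an assumption
    rate = rospy.Rate(rate_hz)
    cmd = Twist()
    cmd.linear.x = linear_x        # forward speed in m/s
    cmd.angular.z = angular_z      # yaw rate in rad/s
    while not rospy.is_shutdown():
        pub.publish(cmd)
        rate.sleep()

if __name__ == '__main__':
    teleop(0.3, 0.0)               # drive forward slowly
```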
The hardware design of the multi-sensor perception system is finalised. Our perception system consists of an Ouster 64-beam lidar, a ZED2 stereo camera, and an Xsens 750 IMU.
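For reference, the sketch below shows one simple way to keep such a rig's sensor extrinsics as homogeneous transforms and compose them when moving data between frames; the numeric offsets are placeholders, not our calibrated values.

```python
# Illustrative handling of multi-sensor extrinsics as 4x4 homogeneous transforms.
import numpy as np

def make_transform(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# Placeholder extrinsics: lidar and camera poses expressed in the robot base frame.
T_base_lidar = make_transform(np.eye(3), [0.0, 0.0, 0.50])    # lidar 50 cm above the base
T_base_camera = make_transform(np.eye(3), [0.20, 0.0, 0.40])  # camera 20 cm forward, 40 cm up

# Transform a lidar point into the camera frame by composing the two extrinsics.
T_camera_lidar = np.linalg.inv(T_base_camera) @ T_base_lidar
point_lidar = np.array([1.0, 0.0, 0.0, 1.0])                  # one homogeneous lidar point
point_camera = T_camera_lidar @ point_lidar
```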
Quarterly Deliverable - Q1 [PDF]
Our robot localisation research was presented at ICRA 2020.
The proposed approach is able to localise the robot using a single lidar sensor within 2 seconds on average in a large-scale environment (0.5 km²). [Presentation]
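As background, the sketch below illustrates a generic coarse-to-fine lidar global localisation pipeline (place retrieval followed by local pose refinement). It is only a schematic of the general idea; the descriptor, retrieval, and refinement choices here are assumptions and do not reproduce the specific method of the ICRA 2020 paper.

```python
# Generic coarse-to-fine global localisation sketch (not the paper's method).
import numpy as np

def scan_descriptor(points, bins=32, max_range=50.0):
    """A simple rotation-invariant descriptor: normalised histogram of point ranges."""
    ranges = np.linalg.norm(points[:, :2], axis=1)
    hist, _ = np.histogram(ranges, bins=bins, range=(0.0, max_range), density=True)
    return hist

def retrieve_candidates(query_descriptor, map_keyframes, top_k=3):
    """Coarse step: return the map keyframes whose descriptors best match the query."""
    ranked = sorted(map_keyframes,
                    key=lambda kf: np.linalg.norm(kf['descriptor'] - query_descriptor))
    return ranked[:top_k]

def global_localise(scan_points, map_keyframes, refine):
    """Fine step: refine around each candidate (e.g. with NDT/ICP) and keep the best pose."""
    query = scan_descriptor(scan_points)
    best_pose, best_score = None, -np.inf
    for candidate in retrieve_candidates(query, map_keyframes):
        pose, score = refine(scan_points, candidate['pose'])
        if score > best_score:
            best_pose, best_score = pose, score
    return best_pose
```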