This project will advance knowledge in the development of direct methods for condition assessment (CA) of buried large water mains, using advanced data collection and analysis techniques successfully applied in other domains such as robotics and machine learning.
Advanced livestock measurement technologies
This program aims to develop technologies for measuring the lean meat yield of live animals on-farm and of carcases in abattoirs. These technologies need to be cost effective for the plants utilising them, hence a selection of technologies will be investigated, including highly precise but more expensive “direct-measurement” alternatives such as dual-energy X-ray absorptiometry (DEXA), and less expensive but less precise “predictive-measurement” alternatives such as 3D imaging.
Robotic audition, in particular sound source localisation, can improve human-robot interaction by allowing the robot to focus on events and persons based on sound. Audition complements other sensors such as vision well because audio sensors are omnidirectional, able to work in the dark, and not limited by occlusion from physical structures (such as walls). Tracking one or multiple sound sources can aid tracking scenarios, in which the locations of dynamic sound sources are estimated from the directions of the sound-emitting targets. Other popular robotics applications of sound source localisation include search and rescue and multi-robot cooperation.
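The localisation techniques themselves are not detailed above; a common baseline for two-microphone direction estimation is GCC-PHAT, which estimates the inter-microphone time delay and converts it to a bearing under a far-field assumption. A minimal sketch with synthetic signals (an illustration, not this project's actual method):

```python
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    """Time-delay estimate between two microphone channels via
    Generalized Cross-Correlation with Phase Transform (GCC-PHAT)."""
    n = len(sig) + len(ref)
    R = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
    R /= np.abs(R) + 1e-12                      # PHAT: keep phase, drop magnitude
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    if max_tau is not None:
        max_shift = min(int(fs * max_tau), max_shift)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs  # delay in seconds

# Toy two-microphone bearing estimate under a far-field assumption.
fs, c, d = 16000, 343.0, 0.2                    # sample rate, speed of sound, mic spacing (m)
rng = np.random.default_rng(0)
src = rng.standard_normal(fs)                   # broadband source, 1 s
delay = 5                                       # true inter-mic delay in samples
mic1 = src
mic2 = np.concatenate((np.zeros(delay), src[:-delay]))
tau = gcc_phat(mic2, mic1, fs, max_tau=d / c)
bearing = np.degrees(np.arcsin(np.clip(tau * c / d, -1.0, 1.0)))
```

Limiting the search to physically plausible delays (`max_tau = d / c`) suppresses spurious correlation peaks outside the microphone baseline.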
Framework for Dense Environment Representations
Robots need to be able to operate safely in low-visibility conditions, such as at night or in the presence of smoke. This work proposes camera-based localisation methods that are resilient to such adverse conditions thanks to the selective combination of visual and infrared imaging. We evaluate the quality of the data provided by each sensor modality prior to combining them. This evaluation is used to discard low-quality data, i.e. the data most likely to induce large localisation errors. In this way, perceptual failures are anticipated and mitigated. Data sets collected with a UGV in a range of environments and adverse conditions include the presence of smoke (obstructing the visual camera), extreme heat (saturating the IR camera), low-light conditions (dusk), and night time with sudden variations of artificial light.
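The quality metric used for the evaluation step is not specified above; as a stand-in assumption, the gating idea can be sketched by scoring each modality's frame with histogram entropy (a smoke-obscured or heat-saturated frame carries little intensity variation and scores low) and keeping only modalities above a threshold:

```python
import numpy as np

def image_entropy(img, bins=256):
    """Shannon entropy of the intensity histogram: a crude proxy for
    information content (an obscured or saturated frame scores low)."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist[hist > 0] / img.size
    return float(-np.sum(p * np.log2(p)))

def pick_modality(frames, threshold=3.0):
    """Discard modalities whose frames fall below the quality threshold
    and return the most informative remaining one (None if all fail)."""
    scores = {name: image_entropy(f) for name, f in frames.items()}
    good = {n: s for n, s in scores.items() if s > threshold}
    return max(good, key=good.get) if good else None

rng = np.random.default_rng(0)
visual = rng.integers(0, 256, (120, 160))       # textured frame: high entropy
ir = np.full((120, 160), 255)                   # heat-saturated frame: zero entropy
best = pick_modality({"visual": visual, "ir": ir})
```

Returning `None` when every modality fails lets the caller fall back to dead reckoning rather than fuse data likely to induce large localisation errors.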
Smoke detection through Integrated LDA models
Early fire detection is crucial to minimise damage and save lives. Video surveillance smoke detectors do not suffer from transport delays and can cover large areas. Smoke detection in images is, however, a difficult problem due to the variability of smoke density, lighting conditions, background clutter, and unstable patterns. To solve this problem, we propose a novel unsupervised object classifier. Single visual features are classified using a model that simultaneously creates a codebook and categorises the smoke via a bag-of-words paradigm based on an LDA model. Our algorithm can also estimate the amount of smoke present in the image. Multiple image sequences from different cameras are used to show the viability of the proposed approach.
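The bag-of-words/LDA pipeline can be sketched with off-the-shelf components. Note the simplification: the proposed model learns the codebook and the categories simultaneously, whereas this sketch learns the codebook separately with k-means; the descriptors are synthetic stand-ins:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import LatentDirichletAllocation

# Stand-in local descriptors: two well-separated feature populations.
rng = np.random.default_rng(1)
smoke_feats = rng.normal(0.0, 0.3, (200, 8))
clear_feats = rng.normal(2.0, 0.3, (200, 8))

# Step 1: codebook of visual words (learned separately here with k-means).
codebook = KMeans(n_clusters=16, n_init=10, random_state=0).fit(
    np.vstack([smoke_feats, clear_feats]))

def bow_histogram(patch_feats, n_words=16):
    """Quantise patch descriptors into a bag-of-words histogram."""
    return np.bincount(codebook.predict(patch_feats), minlength=n_words)

# Step 2: one "document" per image; LDA topics act as unsupervised categories.
docs = np.array([bow_histogram(smoke_feats[i:i + 20]) for i in range(0, 200, 20)]
                + [bow_histogram(clear_feats[i:i + 20]) for i in range(0, 200, 20)])
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(docs)
topics = lda.transform(docs).argmax(axis=1)     # per-image category
```

The per-topic word weights in `lda.transform` also give a soft proportion per image, which is the kind of quantity one could threshold to estimate how much smoke is present.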
The objective is to build large-scale 3D maps by building and indexing representations of places, using multiple mobile and fixed sensors with different vantage points. The key idea is to first produce accurate sensor localisation and then use the data from each sensor to generate compact representations of the environment in terms of their geometric, physical (visual appearance), and statistical characteristics.
We are developing an estimation scheme for aerial and ground vehicles to cooperate in SLAM and moving-target tracking. The main challenge is to build environment representations that integrate aerial and ground data. Using ideas from Hierarchical SLAM and Detection and Tracking of Moving Targets, we obtain a graph representation of the whole system. Closing loops imposes a topological constraint on the global level of our hierarchical representation, while target tracking is performed at the local level until a loop closure happens.
This work explores line-segment landmark parametrizations in the context of monocular, EKF-based, 6-DOF simultaneous localization and mapping. We present a method for undelayed initialization (UI, otherwise named partial initialization) of straight lines in monocular extended Kalman filter (EKF) SLAM. UI allows low-parallax landmarks, i.e., those that are remote or close to the motion axis of the camera, to contribute to SLAM from the first observation. This allows the exploitation of the full field of view of the camera up to the infinity range, which results in accurate localization with very low angular drift.
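For point landmarks, the classic undelayed scheme is the inverse-depth parametrization (anchor position, bearing angles, inverse depth ρ), where ρ = 0 cleanly represents a feature at infinity, and a line segment can be built from two such points. A minimal sketch of the point-to-Euclidean conversion; the angle conventions are an assumption here, not taken from this work:

```python
import numpy as np

def inverse_depth_to_xyz(y):
    """Convert an inverse-depth point (x0, y0, z0, azimuth, elevation, rho)
    to Euclidean coordinates: anchor + unit ray / inverse depth.
    As rho -> 0 the point recedes to infinity but its bearing stays defined,
    which is what lets low-parallax features contribute immediately."""
    x0, y0, z0, az, el, rho = y
    ray = np.array([np.cos(el) * np.sin(az),
                    -np.sin(el),
                    np.cos(el) * np.cos(az)])   # unit direction of first observation
    return np.array([x0, y0, z0]) + ray / rho

# A line segment can be parametrized as two such points sharing one anchor.
p_near = inverse_depth_to_xyz([0, 0, 0, 0.0, 0.0, 1.0])   # depth 1 on the optical axis
p_far = inverse_depth_to_xyz([0, 0, 0, 0.0, 0.0, 1e-6])   # nearly at infinity
```

The payoff is that the measurement equation stays well-conditioned for any ρ ≥ 0, so the EKF can include the landmark from its very first observation instead of waiting for parallax.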
Anyone who has looked at a floor plan probably knows that line segments are a very expressive feature for representing human environments. Especially in the case of collaborative mapping with heterogeneous vehicles, features such as line segments make data association reliable. A fast line-segment tracker that makes no assumptions about the structure of the observed scene is needed. We adapted the RAPiD tracker to handle multiple line hypotheses in order to deal with the simple model of a single line segment. Our aim is for SLAM to help the tracker by providing a good prediction, and for the line-segment tracker to help SLAM by providing a long-term tracked feature.
Navigation in mobile robotics involves two tasks: keeping track of the robot's position and moving according to a control strategy. In addition, when no prior knowledge of the environment is available, the problem is even more difficult, as the robot also has to build a map of its surroundings as it moves. These three problems ought to be solved jointly, since they depend on each other.
This thesis is about simultaneously controlling an autonomous vehicle, estimating its location, and building the map of the environment. The main objective is to analyse the problem from a control-theoretical perspective based on the EKF-SLAM implementation. The contribution of this thesis is the analysis of the system's properties, such as observability, controllability and stability, which allows us to propose an appropriate navigation scheme that produces well-behaved estimators and controllers, and consequently a well-behaved system as a whole.
Focusing specifically on the case of a system with a single monocular camera, we present an observability analysis using the nullspace basis of the stripped observability matrix. The aim is to gain a better understanding of the well-known intuitive behaviour of this type of system, such as the need to triangulate features from different positions in order to obtain accurate relative pose estimates between vehicle and camera.
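The full camera-model analysis is beyond an abstract, but the mechanics can be illustrated on a toy 1-D analogue: a vehicle state (position, velocity) plus one landmark, observed through a relative measurement. Stacking H, HF, HF² gives the observability matrix, and the nullspace recovered from its SVD exposes the unobservable direction, here the joint translation of vehicle and landmark (the familiar gauge freedom of SLAM). The model is an illustrative assumption:

```python
import numpy as np

# Toy 1-D system: state x = [p, v, l] (vehicle position, velocity, landmark).
F = np.array([[1., 1., 0.],
              [0., 1., 0.],
              [0., 0., 1.]])                  # constant-velocity motion model
H = np.array([[-1., 0., 1.]])                 # relative measurement h = l - p

# Observability matrix: stack H F^k for k = 0, 1, 2.
O = np.vstack([H @ np.linalg.matrix_power(F, k) for k in range(3)])

# Nullspace basis from the SVD: right singular vectors of ~zero singular value.
_, s, Vt = np.linalg.svd(O)
rank = int(np.sum(s > 1e-10))
null = Vt[rank:].T                            # columns span unobservable directions

# The single unobservable direction is (1, 0, 1): translating vehicle and
# landmark together leaves every measurement unchanged.
```

Reading the nullspace basis this way is what turns the algebra into intuition: each basis vector names a state combination no sequence of measurements can pin down.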
Considering a real-time application, we propose a control strategy to optimise both the localisation of the vehicle and the feature map by computing the most appropriate control actions or movements. The actions are chosen so as to maximise an information-theoretic metric.
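The metric itself is not specified above; a standard choice for Gaussian estimators is the entropy of the EKF estimate, which is (up to constants) the log-determinant of its covariance. A hedged sketch that scores hypothetical actions by the posterior covariance they would produce; the models, the action set, and the numbers are illustrative assumptions:

```python
import numpy as np

def posterior_logdet(P, F, Q, H, R):
    """Log-determinant of the EKF covariance after one predict + update
    cycle; for a Gaussian this is (up to constants) the estimate's entropy."""
    Pp = F @ P @ F.T + Q                          # prediction
    S = H @ Pp @ H.T + R                          # innovation covariance
    K = Pp @ H.T @ np.linalg.inv(S)               # Kalman gain
    Pu = (np.eye(len(P)) - K @ H) @ Pp            # updated covariance
    return np.linalg.slogdet(Pu)[1]

# Two hypothetical actions: approaching a landmark gives low measurement
# noise, retreating gives high noise. Choose the most informative action.
P = np.diag([1.0, 1.0])
F, Q, H = np.eye(2), 0.01 * np.eye(2), np.eye(2)
actions = {"approach": 0.1 * np.eye(2), "retreat": 10.0 * np.eye(2)}
best = min(actions, key=lambda a: posterior_logdet(P, F, Q, H, actions[a]))
```

Minimising posterior entropy is equivalent to maximising the expected information gain, so this one-step greedy rule is a common baseline for active SLAM planners.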
Under the supervision of Alberto Sanfeliu and Juan Andrade.