Past Research - 2009 - 2017

Multi-Robot Coordination and Cooperation - 2016 - 2017

Multi-Robot Coverage

We developed a framework which allows a team of heterogeneous robots to cooperate in order to accomplish a coverage task. The framework comprises three main components. The first component is responsible for generating, in real time, the next best position in the environment to be reached by each robot. These next best positions are generated according to a search strategy based on the Learning Real-Time A* (LRTA*) algorithm, extended to robots with different capabilities (e.g., UAVs and UGVs); a sketch of this selection rule is given below. The second component manages both the local perception of each robot and the merging of these local perceptions. Perception covers 3D point cloud acquisition, registration, segmentation and, finally, traversability analysis of the robot's surroundings. The third component handles the motions of the robots. The framework couples the dynamics of the robots, modelled by the physics engine of V-REP, with the perceptual and motion functionalities of the robots, developed under ROS. Finally, the framework includes a strategy for locally solving conflicts among robots, as well as a functionality for robot rendezvous.
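A minimal sketch of the LRTA*-based selection of the next best position, assuming a shared topological graph and a per-robot reachability predicate that encodes heterogeneous capabilities; all names and the cost model are illustrative, not the project's actual implementation:

```python
class LRTAStarCoverage:
    def __init__(self, graph, heuristic):
        # graph: node -> iterable of (neighbour, edge_cost) pairs
        self.graph = graph
        self.h = dict(heuristic)   # learned heuristic values, updated online

    def next_best_position(self, current, reachable):
        """Pick the neighbour minimising edge cost + learned heuristic.

        reachable(node) encodes robot-specific capabilities, e.g. a UGV
        rejects aerial-only nodes that a UAV could still cover.
        """
        candidates = [(cost + self.h.get(n, 0.0), n)
                      for n, cost in self.graph[current] if reachable(n)]
        if not candidates:
            return current                       # nowhere to go: stay put
        best_f, best_node = min(candidates)
        # LRTA* learning rule: raise h(current) to the best f-value so that
        # repeated visits push the robot towards less explored regions.
        self.h[current] = max(self.h.get(current, 0.0), best_f)
        return best_node

# One planning step on a toy graph where node "roof" is reachable by UAVs only.
graph = {"a": [("b", 1.0), ("roof", 1.0)], "b": [("a", 1.0)], "roof": []}
ugv = LRTAStarCoverage(graph, heuristic={"a": 0.0, "b": 2.0, "roof": 0.5})
target = ugv.next_best_position("a", reachable=lambda n: n != "roof")  # -> "b"
```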

Teams of UGVs patrolling harsh and complex 3D environments can experience interference and spatial conflicts with one another. Neglecting the occurrence of these events crucially hinders both the soundness and the reliability of a patrolling process. This work presents a distributed multi-robot patrolling technique, which uses a two-level coordination strategy to minimise and explicitly manage the occurrence of conflicts and interference. The first level guides the agents to single out exclusive target nodes on a topological map. This target selection relies on a shared idleness representation and on a coordination mechanism preventing topological conflicts; a sketch of this first level is given below. The second level hosts coordination strategies based on a metric representation of space and is supported by a 3D SLAM system. Here, each robot's path planner negotiates spatial conflicts by applying a multi-robot traversability function. Continuous interactions between these two levels ensure coordination and conflict resolution. Both simulations and real-world experiments are presented to validate the performance of the proposed patrolling strategy in 3D environments. Results show this is a promising solution for managing spatial conflicts and preventing deadlocks.
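A minimal sketch of the first coordination level, assuming a shared idleness table indexed by topological nodes and a simple claim mechanism; names and data structures are illustrative, not the actual implementation:

```python
import time

class IdlenessCoordinator:
    """First-level coordination: exclusive target selection on a topological
    map driven by shared idleness (time since a node was last visited)."""

    def __init__(self, nodes):
        self.last_visit = {n: 0.0 for n in nodes}  # shared idleness state
        self.claimed = {}                          # node -> robot id

    def idleness(self, node, now):
        return now - self.last_visit[node]

    def select_target(self, robot_id, now=None):
        now = time.time() if now is None else now
        # Topological conflict prevention: never hand two robots the same node.
        free = [n for n in self.last_visit
                if self.claimed.get(n) in (None, robot_id)]
        target = max(free, key=lambda n: self.idleness(n, now))
        self.claimed[target] = robot_id
        return target

    def report_visit(self, robot_id, node, now=None):
        self.last_visit[node] = time.time() if now is None else now
        self.claimed.pop(node, None)   # release the node for the other robots
```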

Urban Search & Rescue Robotics - 2012 - 2016

During my experience in both the NIFTI and TRADR projects, I had the chance to deeply understand the main problems that must be addressed when robots operate alongside humans in rescue missions. Working closely with fire-fighters, I gained a better understanding of the autonomous capabilities a robot should have in order to support rescue responders. Building on this experience, we worked on increasing the level of autonomy of a rescue robot. We also had two in-field experiences in which we deployed a robotic system to assess damage to historical buildings, and to the cultural artifacts located therein: first in July 2012, after the earthquake in Mirandola, in the Emilia-Romagna region, Northern Italy, and then in September 2016, after the earthquake in Amatrice, Lazio region, Central Italy.

Mirandola, Emilia-Romagna region, Northern Italy, July 2012

Amatrice, Lazio Region, Central Italy, September 2016

Three-dimensional Autonomous Navigation of Articulated Tracked Robots for Urban Search & Rescue - 2013 - 2016

We developed a framework for real-time 3D motion planning and control of tracked robots, enabling autonomous navigation in harsh environments. The framework is based on a semantic representation of the environment, which is used to plan feasible 3D paths towards a target position. The physical execution of a path is delegated to a decoupled controller, responsible for both generating velocity commands and adapting the robot morphology during the tracking task. We extended the controller with a statistical model assessing the contact of the robot's articulated mechanical components with the traversed surfaces. This model is used both to correct the estimation of the robot morphology and to ensure better traction on harsh terrain. We also worked on enriching the semantic representation of the environment with traversability information, and we improved the path planning algorithm to also take dynamic obstacles into account; a sketch of traversability-aware planning is given below.
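A minimal sketch of traversability-aware 3D path planning in the spirit described above: A* over surface cells, where each edge cost blends metric distance with a terrain penalty. The map interface, the weight `w` and the cost blend are assumptions for illustration only:

```python
import heapq, itertools, math

def plan_path(neighbors, traversability, start, goal, w=5.0):
    """neighbors(c) -> adjacent (x, y, z) cells; traversability(c) in [0, 1],
    with 0 meaning impassable. Returns a list of cells or None."""
    tie = itertools.count()                      # break ties in the heap
    frontier = [(math.dist(start, goal), next(tie), 0.0, start)]
    came_from = {start: None}
    g_cost = {start: 0.0}
    while frontier:
        _, _, g, cur = heapq.heappop(frontier)
        if cur == goal:
            path = []
            while cur is not None:               # walk back to the start
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        if g > g_cost.get(cur, float("inf")):
            continue                             # stale heap entry
        for nxt in neighbors(cur):
            t = traversability(nxt)
            if t <= 0.0:
                continue                         # impassable cell
            # Edge cost: metric distance plus a penalty for hard terrain.
            step = math.dist(cur, nxt) + w * (1.0 - t)
            if g + step < g_cost.get(nxt, float("inf")):
                g_cost[nxt] = g + step
                came_from[nxt] = cur
                heapq.heappush(frontier,
                               (g + step + math.dist(nxt, goal),
                                next(tie), g + step, nxt))
    return None
```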

High Level Robot Control and Cognitive Robotics - 2015

We developed a robotic system that employs high-level control to operate in a real-world setting where the main task is the human-assisted exploration of an environment. In this system, we integrated multi-modal perception from vision and mapping with model-based executive control. Action planning is performed using a high-level representation of the environment, obtained through topological segmentation of the metric map together with object detection and 3D localisation in the map. This work contributes an effective method to build a logical representation of the robot's low-level perception, to compile this perception into knowledge and, finally, to integrate this knowledge with model-based executive control; a sketch of this compilation step is given below. The overall system enables the robot to infer strategies, generating parametric plans that are directly instantiated from perception.
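A minimal sketch of compiling perception into a logical representation, assuming 2D room footprints from topological segmentation and 3D object detections; the data structures and predicate names are illustrative, not the system's actual interface:

```python
def compile_perception(segments, detections):
    """segments: {name: (xmin, ymin, xmax, ymax)} 2D footprints from the
    topological segmentation of the metric map; detections: [(label, x, y, z)]
    from object detection and 3D localisation. Returns grounded facts usable
    as a planner's initial state."""
    facts = [f"(room {name})" for name in segments]
    for label, x, y, _z in detections:
        for name, (x0, y0, x1, y1) in segments.items():
            if x0 <= x <= x1 and y0 <= y <= y1:   # object lies in this room
                facts.append(f"(object {label})")
                facts.append(f"(in {label} {name})")
                break
    return facts

# Example: two segmented rooms and one detected object.
facts = compile_perception(
    {"corridor": (0, 0, 10, 2), "lab": (0, 2, 10, 8)},
    [("extinguisher", 4.0, 5.0, 0.4)],
)
# -> ['(room corridor)', '(room lab)',
#     '(object extinguisher)', '(in extinguisher lab)']
```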

We also worked on a new approach to robot cognitive control based on modelling robot stimuli, the stimulus-response mapping, and the resulting task switching or stimulus inhibition. The proposed framework contributes to the state of the art in robot planning and high-level control, as it provides a novel view of robot behaviours together with a complete implementation on an advanced robotic platform.
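A minimal sketch of the stimulus-response view, assuming a priority-based executive: a stimulus triggers a task switch only when it outweighs the running behaviour, and is otherwise inhibited. All names and the priority rule are illustrative:

```python
class CognitiveControl:
    def __init__(self, responses):
        self.responses = responses     # stimulus -> (behaviour, base priority)
        self.current = ("idle", 0.0)   # running (behaviour, priority)

    def on_stimulus(self, stimulus, intensity):
        behaviour, base = self.responses.get(stimulus, (None, 0.0))
        if behaviour is None:
            return self.current[0]                 # unknown stimulus: ignore
        priority = base * intensity
        if priority > self.current[1]:
            self.current = (behaviour, priority)   # task switching
        # else: stimulus inhibition, keep executing the current task
        return self.current[0]

# A weak noise is inhibited while a strong heat stimulus switches the task.
ctrl = CognitiveControl({"noise": ("orient", 1.0), "heat": ("retreat", 3.0)})
ctrl.on_stimulus("heat", 0.9)    # -> "retreat"
ctrl.on_stimulus("noise", 0.5)   # -> "retreat" (stimulus inhibited)
```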

Human-Robot Interaction and Collaboration - 2013

We described a novel framework to learn the actions, intentions and plans of a firefighter executing a rescue task in a car accident training scenario. The framework is based on a model of human-robot collaboration which interprets the collaboration as a learning process, mediated by the firefighter's prompting behaviours, in which the apprentice collects information from him in order to learn a plan. A novel element is the use of the Gaze Machine, a wearable device that gathers and conveys visual and audio input from the firefighter while he executes the task. In this work we described the process through which the information delivered by the Gaze Machine is transformed into plans.

Deep learning for Robot Perception, Active Recognition and Control - 2016

This research concerns the training and testing of a Convolutional Neural Network (CNN) to assess the traversability of terrain for the safe autonomous navigation of an articulated tracked robot in urban search and rescue applications. Traversability assessment is fundamental on several fronts, such as path planning, self-adaptation, terrain interaction and safe locomotion. State-of-the-art methods are commonly based on a point-wise analysis of the point cloud, which considers only geometrical features (e.g., normals and curvatures) and the robot's kinematic constraints (e.g., nonholonomic motion, skid-steering, odometry). Unlike these methods, the proposed CNN classifies the traversability of terrain patches from both images and point clouds, estimating the most suitable directions for the tracked robot to traverse them autonomously. Moreover, it relies, as an expert network, on a second CNN recognising the material of the soil (e.g., metal, concrete, stone, wood) from images. The two CNNs are also merged into a single network by sharing their convolutional features, so as to improve prediction accuracy; a sketch of this shared architecture is given below.
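A minimal PyTorch sketch of the merged architecture: a shared convolutional trunk feeding both a traversability head (crawl-direction classes over terrain patches) and a material-recognition head. Layer sizes, class counts and input shapes are illustrative assumptions, not the trained model:

```python
import torch
import torch.nn as nn

class SharedTraversabilityNet(nn.Module):
    def __init__(self, n_directions=8, n_materials=4):
        super().__init__()
        # Convolutional features shared by the two tasks.
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(4),
        )
        feat = 64 * 4 * 4
        self.traversability = nn.Linear(feat, n_directions)  # crawl directions
        self.material = nn.Linear(feat, n_materials)         # soil material

    def forward(self, patch):
        z = self.trunk(patch).flatten(1)
        return self.traversability(z), self.material(z)

# Joint training step: one cross-entropy loss per head over shared features.
net = SharedTraversabilityNet()
x = torch.randn(8, 3, 64, 64)                    # batch of terrain patches
dir_logits, mat_logits = net(x)
loss = (nn.functional.cross_entropy(dir_logits, torch.randint(0, 8, (8,)))
        + nn.functional.cross_entropy(mat_logits, torch.randint(0, 4, (8,))))
loss.backward()
```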

Knowledge Representation and Reasoning - 2010 - 2014

I have been involved in the H2020-ICT-2014-1 RIA SecondHands project. The goal of SecondHands is to design a robot that can offer help to a maintenance technician in a pro-active manner. My main responsibility in this project is to develop a framework which allows the robot to reason about complex structures, either logical or statistical, representing its knowledge of the environment. Among the novelties of this framework will be (1) the definition of the complex structures over which reasoning takes place, (2) a segmentation of these structures suitable for optimising robot action planning, and (3) the use of different levels of reasoning mechanisms, spanning from logical entailment to reasoning about time. As a preliminary step in the development of this framework, I implemented a component responsible for establishing a coherent correspondence between abstract representations and perceptual data in an embodied symbolic reasoning system. Within this component, the correspondence problem is first transformed into a vertex colouring problem and a solution is then found that accounts for the constraints, as sketched below. This component has been successfully used in conjunction with the well-known Fast Downward planner for planning the actions of a humanoid robot instructed to perform a search-and-grasp-object task.
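A minimal sketch of the vertex colouring view of the correspondence problem: symbols are vertices, perceived candidates are colours, and edges link symbols that must be anchored to distinct percepts. The backtracking search below stands in for the actual solver; all names are illustrative:

```python
def colour(symbols, candidates, edges, assignment=None):
    """symbols: list of symbol names; candidates: {symbol: set of percepts};
    edges: set of frozenset({a, b}) pairs whose colours must differ."""
    assignment = assignment or {}
    if len(assignment) == len(symbols):
        return assignment                         # every symbol anchored
    sym = next(s for s in symbols if s not in assignment)
    for percept in candidates[sym]:
        # Constraint check: adjacent symbols cannot share a percept.
        if all(assignment.get(other) != percept
               for other in symbols if frozenset({sym, other}) in edges):
            result = colour(symbols, candidates, edges,
                            {**assignment, sym: percept})
            if result:
                return result
    return None                                   # dead end: backtrack

# Example: two cup symbols that must anchor to different detections.
print(colour(["cup1", "cup2"],
             {"cup1": {"det_a", "det_b"}, "cup2": {"det_a"}},
             {frozenset({"cup1", "cup2"})}))
# -> {'cup1': 'det_b', 'cup2': 'det_a'}
```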

Augmented Reality and Simulation - 2012 - 2013

We developed an AR-based simulation framework which allows robot developers to build, on-line, an Augmented Reality Environment (ARE) for real robots, integrated into the visualisation interface of the Robot Operating System (ROS). The proposed system goes beyond an interface for drawing objects: its design exploits a stochastic model activating the behaviours of the introduced objects, as sketched below. The framework also builds a world-model representation that serves as ground truth for training and validating algorithms for vision, motion planning and control.
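A minimal sketch of the stochastic activation idea, modelling each virtual object's behaviours as independent Poisson processes; rates and behaviour names are illustrative assumptions, not the framework's actual model:

```python
import math
import random

class VirtualObject:
    def __init__(self, name, behaviours):
        self.name = name
        self.behaviours = behaviours   # behaviour name -> rate (events/sec)

    def step(self, dt):
        """Advance the simulation by dt seconds; return fired behaviours."""
        fired = []
        for behaviour, rate in self.behaviours.items():
            # P(at least one event in dt) for a Poisson process of this rate.
            if random.random() < 1.0 - math.exp(-rate * dt):
                fired.append(behaviour)
        return fired

# A virtual "victim" object that occasionally waves or calls for help.
victim = VirtualObject("victim", {"wave": 0.2, "call_for_help": 0.05})
for _ in range(10):
    events = victim.step(dt=0.5)   # e.g. ['wave'] on some ticks, [] on most
```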