Research

Ambient Awareness for Autonomous Agricultural Vehicles

Terrain assessment using proprioceptive and exteroceptive features

I am currently working on the development of a novel approach for automatic terrain estimation and classification by an agricultural vehicle during normal operations. The novelty of this research lies in the fact that terrain estimation is performed using not only traditional appearance-based features, that is, colour and geometric properties, but also contact-based features, that is, measurements of the physics-based dynamic effects that govern the vehicle-terrain interaction and greatly affect vehicle mobility. Tests were performed during the experimental campaigns of the project Simultaneous Safety and Surveying for Collaborative Agricultural Vehicles (S3-CAV), funded by the FP7 ERA-NET ICT-AGRI-2 action.
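As a rough illustration of the idea, the sketch below fuses appearance-based and contact-based feature vectors into a single descriptor and trains a generic classifier on it. All feature values, class labels and the choice of an SVM are placeholder assumptions for illustration, not the method of the paper cited below.

```python
# Minimal sketch: fusing appearance-based and contact-based features for
# terrain classification. All feature values here are synthetic placeholders;
# in practice they would come from camera and IMU/wheel-odometry pipelines.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200

# Appearance features per terrain patch: e.g. mean colour channels and a
# roughness statistic from stereo geometry (hypothetical values).
appearance = rng.random((n, 4))

# Contact features per patch: e.g. variance of vertical acceleration and a
# wheel-slip estimate while the vehicle traverses the patch (hypothetical).
contact = rng.random((n, 2))

X = np.hstack([appearance, contact])    # fused feature vector
y = rng.integers(0, 3, size=n)          # terrain classes, e.g. dirt/grass/gravel

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```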

G. Reina, A. Milella, R. Galati, "Terrain assessment for precision agriculture using vehicle dynamic modelling", Biosystems Engineering, 162, pp. 124-139, October 2017. PDF

Multi-modal terrain mapping and estimation for precision agriculture

I developed a multi-modal terrain mapping and estimation approach for applications in precision agriculture in collaboration with the University of Salento and the Danish Technological Institute (DTI), Odense, Denmark. The activity was carried out in the context of the S3-CAV project.

The proposed methods exploit a modular imaging system that incorporates complementary sensors to generate multi-modal terrain maps including geometric, visual, hyperspectral and IR features. The sensor suite (developed by DTI) comprises two visual cameras forming a stereo pair, a VIS-NIR sensor and a thermal camera. The idea is that, by registering and integrating all data, an accurate multi-modal representation of the terrain can be built.

Stereovision provides a range representation of the environment in the form of a 3D point cloud. The visual appearance of each 3D point can be obtained in the visible, near-infrared and infrared spectra. In addition, as the vehicle moves across its operating space and senses the supporting plane through its proprioceptive sensors, the environment representation is augmented with the mechanical properties of the terrain.
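The sketch below illustrates the registration step in its simplest form: projecting stereo 3D points into a second camera (here a thermal one) with a pinhole model to attach a per-point intensity. The intrinsics, extrinsics and images are hypothetical stand-ins, not the calibration of the actual sensor suite.

```python
# Minimal sketch: attaching multi-modal appearance to a stereo point cloud by
# projecting each 3D point into a second (e.g. thermal) camera.
import numpy as np

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])  # assumed thermal intrinsics
R, t = np.eye(3), np.array([0.10, 0.0, 0.0])                 # assumed stereo-to-thermal extrinsics
thermal = np.random.rand(480, 640)                           # stand-in thermal image

points = np.random.rand(1000, 3) * [4, 2, 8] + [0, 0, 1]     # 3D points (metres)

def sample_modality(points, K, R, t, image):
    """Project 3D points into the camera and sample per-point intensities."""
    cam = points @ R.T + t                 # transform into the camera frame
    uv = cam @ K.T                         # homogeneous pixel coordinates
    uv = uv[:, :2] / uv[:, 2:3]
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
    valid = (cam[:, 2] > 0) & (u >= 0) & (u < image.shape[1]) \
            & (v >= 0) & (v < image.shape[0])
    values = np.full(len(points), np.nan)  # NaN where the point is out of view
    values[valid] = image[v[valid], u[valid]]
    return values

thermal_per_point = sample_modality(points, K, R, t, thermal)
```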

Current and future developments of the system include processing the maps with supervised or unsupervised classification methods to generate semantic representations of the environment and to assess terrain traversability. Research efforts will also target the integration of the output maps into Farm Management Information Systems (FMIS) to enable map-based control of farming applications.

A. Milella, G. Reina, M. Nielsen, "A multi-sensor robotic platform for ground mapping and estimation beyond the visible spectrum", Precision Agriculture, 20(2), pp. 423-444, April 2019. PDF

Multi-Baseline Stereo for Scene Segmentation in Natural Environments

I took part in the development of a multi-baseline stereo system for long-range perception by an autonomous vehicle operating in agricultural environments. The system segments the scene into ground and non-ground regions and uses a self-learning framework whereby the ground model is automatically learnt and continuously updated while travelling.
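A minimal sketch of a self-learning ground model of this kind is given below, assuming per-point features (e.g. height and colour statistics) are extracted elsewhere: it maintains a running Gaussian over ground features and gates new observations by Mahalanobis distance. This is a generic illustration, not the exact model of the paper.

```python
# Minimal sketch of a self-learning ground model: a running Gaussian over
# ground-point features, updated online from regions the vehicle has already
# traversed; feature extraction is assumed to happen elsewhere.
import numpy as np

class GroundModel:
    def __init__(self, dim, threshold=9.0):
        self.mean = np.zeros(dim)
        self.cov = np.eye(dim)
        self.n = 0
        self.threshold = threshold  # squared Mahalanobis distance gate

    def update(self, feats):
        """Incorporate features of points confirmed as ground (Welford-style)."""
        for x in feats:
            self.n += 1
            delta = x - self.mean
            self.mean += delta / self.n
            self.cov += (np.outer(delta, x - self.mean) - self.cov) / self.n

    def is_ground(self, feats):
        """Classify feature vectors by Mahalanobis distance to the model."""
        inv = np.linalg.inv(self.cov + 1e-6 * np.eye(len(self.mean)))
        d = feats - self.mean
        m2 = np.einsum("ij,jk,ik->i", d, inv, d)
        return m2 < self.threshold

# Usage: bootstrap from a patch in front of the vehicle assumed to be ground,
# then classify long-range stereo points and keep updating while driving.
model = GroundModel(dim=4)
model.update(np.random.rand(50, 4) * 0.1)         # hypothetical near-range ground features
labels = model.is_ground(np.random.rand(200, 4))  # hypothetical long-range features
```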

The system has been implemented in collaboration with the University of Salento within the project Ambient Awareness for Autonomous Agricultural Vehicles (QUAD-AV), funded by the FP7 ERA-NET ICT-AGRI action.

G. Reina, A. Milella, R. Rouveure, M. Nielsen, R. Worst, M.R. Blas, "Ambient awareness for agricultural robotic vehicles", Biosystems Engineering, 146, pp. 114-132, June 2016. PDF

In-field high throughput grapevine phenotyping with a consumer-grade depth camera

Methods for automated grapevine phenotyping are developed, aiming at canopy volume estimation and bunch detection and counting. A consumer-grade depth camera mounted on board an agricultural vehicle is used. First, a dense 3D map of the grapevine row, augmented with its colour appearance, is generated based on infrared stereo reconstruction. Then, different computational geometry methods are applied and evaluated for per-plant volume estimation. The proposed methods are validated through field tests performed in a commercial vineyard in Switzerland. It is shown that different automatic methods lead to different canopy volume estimates, meaning that new standard methods and procedures need to be defined and established.
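To make the volume-estimation step concrete, the sketch below computes two common computational-geometry estimates, a convex-hull volume and a voxel-occupancy volume, for a synthetic canopy point cloud; the two generally disagree, which illustrates why standardisation is needed. The point cloud and voxel size are placeholder assumptions, not the paper's data or parameters.

```python
# Minimal sketch: per-plant canopy volume from a 3D point cloud, estimated
# two ways to show that the result depends on the chosen method.
import numpy as np
from scipy.spatial import ConvexHull

canopy_points = np.random.rand(500, 3) * [0.8, 1.2, 1.5]  # one plant's points (metres)

# Estimator 1: volume of the convex hull of the canopy points.
hull = ConvexHull(canopy_points)
print(f"convex-hull canopy volume: {hull.volume:.3f} m^3")

# Estimator 2: volume of occupied voxels (5 cm grid, an assumed resolution);
# this typically differs from the hull volume for non-convex canopies.
voxel = 0.05
occupied = {tuple(p) for p in np.floor(canopy_points / voxel).astype(int)}
print(f"voxel-occupancy volume: {len(occupied) * voxel**3:.3f} m^3")
```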

Four deep learning frameworks, namely AlexNet, VGG16, VGG19 and GoogLeNet, are also implemented and compared to segment visual images acquired by the RGB-D sensor into multiple classes and to recognise grape bunches. Field tests show that, despite the poor quality of the input images, the proposed methods correctly detect fruits, with a maximum accuracy of 91.52% obtained by the VGG19 deep neural network.
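As an example of how one of these networks can be adapted, the sketch below fine-tunes a pretrained VGG19 for image-patch classification; the class count, dummy data and training details are assumptions for illustration rather than the paper's actual setup.

```python
# Minimal sketch: adapting a pretrained VGG19 (torchvision) to classify image
# patches into grape-bunch vs. background classes (an assumed class set).
import torch
import torch.nn as nn
from torchvision import models

num_classes = 2  # hypothetical: bunch / background
model = models.vgg19(weights=models.VGG19_Weights.DEFAULT)
model.classifier[6] = nn.Linear(4096, num_classes)  # replace the final layer

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 224x224 RGB patches.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_classes, (8,))
loss = criterion(model(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```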

A. Milella, R. Marani, A. Petitti, G. Reina, "In-field high throughput grapevine phenotyping with a consumer-grade depth camera", Computers and Electronics in Agriculture, 156, pp. 293-306, January 2019. PDF

E-SHELF: Electronic Shopping & Home delivery of Edible goods with Low environmental Footprint

We are currently developing artificial vision technologies for the automatic detection of Out-of-Stock (OOS) events in retail; a rough sketch of one way such monitoring can work is given after the list below. The system is based on the use of 2D/3D visual sensors and is integrated into an eco-sustainable electronic commerce system that includes the following components:

  • Network of food points of sale aggregated in a single virtual market place;
  • Artificial vision technologies for the automatic monitoring of stocks;
  • Mobile app for e-commerce and support for food purchasing decisions;
  • Fleet of vehicles for last mile logistics, equipped with Internet of Vehicles (IoV) technologies and governed by an eco-sustainable optimization engine;
  • Social network for building online consumer loyalty and for managing mechanisms that enhance the reputation of both points of sale and products;
  • De-verticalising middleware with high interoperability and scalability for the management of data flows.
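As a rough illustration of the vision component, the sketch below flags potential OOS regions by comparing a live shelf depth image against a reference captured with the shelf fully stocked; the images, the 15 cm depth gap and the thresholding are hypothetical placeholders, not the project's actual pipeline.

```python
# Minimal sketch of a depth-based out-of-stock check: pixels that recede
# towards the shelf back panel relative to a stocked reference suggest an
# empty facing. Depth images here are synthetic stand-ins.
import numpy as np

reference = np.random.rand(240, 320) + 0.5   # depth (m) with products present
live = reference.copy()
live[100:140, 60:120] += 0.30                # simulate a removed product

gap = (live - reference) > 0.15              # pixels now >15 cm deeper (assumed gate)
oos_ratio = gap.mean()
print(f"suspected OOS area: {100 * oos_ratio:.1f}% of the shelf view")
```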

For more information, please refer to the E-SHELF project homepage.