Precision farming relies on the ability to accurately locate the crops or leaves with problems and to accurately apply a local remedy without wasting resources or contaminating the environment.
This project develops a unifying framework that incorporates many different types of sensor data; methods for creating 3D maps and maximising map accuracy, so that operations can be carried out on a narrow scale with a smaller environmental footprint; methods for combining these data so that relevant information is easily visible to the farmer; and methods for incorporating real-time sensor data into historical data, both to increase precision during applications and to provide fast automated safety responses.
On-field tests with mobile robots
Results of terrain assessment: multi-sensor information acquired by the robot's on-board sensors during a test on dirt road and gravel.
Results of grape bunch detection: a detected bunch is enclosed by the red circle; probability maps for the detection of specific object classes are shown in the second row.
Terrain assessment is a key issue for the development of intelligent agricultural vehicles. On natural soil, wheel-ground interactions play a critical role in vehicle mobility, which can be radically different on plowed soil than on compact soil. The ability to estimate the traversed terrain can help increase the safety of agricultural vehicles near slopes and canals, or on highly deformable ground. Soil characterization can also provide information to predict the risk of soil compaction caused by farm machinery.
In this research, the problem of terrain assessment was addressed with a multi-sensor approach in which data from both proprioceptive and exteroceptive sensors are integrated for terrain characterization. In particular, a method based on supervised learning techniques was developed to recognize different terrain types.
The proposed algorithm aims to identify the traversed terrain among a predefined set of classes based on proprioceptive characteristics (slippage, vibrations, motion resistance) and exteroceptive features (geometric and color features extracted from stereo images). The methods were validated using data acquired in the field by a mobile robot equipped with different sensing devices.
Experimental results showed not only that various kinds of terrain can be classified using either sensor modality alone, but also that the two modalities are complementary and can be combined to reach higher classification accuracy.
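The fusion idea above can be illustrated with a minimal sketch: train a supervised classifier on proprioceptive features alone, on exteroceptive features alone, and on their concatenation, then compare accuracies. The feature generators, class definitions, and the random-forest classifier below are illustrative stand-ins, not the project's actual pipeline or data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for the two modalities (hypothetical features):
# proprioceptive: slippage, vibration RMS, motion resistance;
# exteroceptive:  geometric roughness and colour statistics from stereo images.
n = 600
labels = rng.integers(0, 3, n)  # 0: dirt road, 1: gravel, 2: plowed soil
proprio = rng.normal(labels[:, None], 0.8, (n, 3))
extero = rng.normal(labels[:, None], 0.8, (n, 4))

def accuracy(X, y):
    """Train/test split, fit a random forest, return held-out accuracy."""
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(random_state=0).fit(Xtr, ytr)
    return clf.score(Xte, yte)

acc_proprio = accuracy(proprio, labels)
acc_extero = accuracy(extero, labels)
# Fusion: simply concatenate the two feature vectors per sample.
acc_fused = accuracy(np.hstack([proprio, extero]), labels)
print(acc_proprio, acc_extero, acc_fused)
```

On this toy data the concatenated feature vector typically matches or beats either modality alone, mirroring the complementarity reported in the experiments.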
A grapevine phenotyping platform using an agricultural vehicle equipped with a consumer-grade RGB-D sensor was developed.
The system is intended to acquire visual and 3D information to reconstruct the canopy of the plants for geometric measurements, such as plant volume and height, and to detect grapevine clusters.
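Geometric measurements such as plant height and volume can be derived from the reconstructed canopy point cloud. The sketch below is a simplified, assumed approach (not the project's actual method): height as the vertical extent of the cloud, and volume as the count of occupied voxels times the voxel volume, at an assumed 5 cm resolution.

```python
import numpy as np

def canopy_metrics(points, voxel=0.05):
    """Estimate plant height and an occupancy-based volume from an
    N x 3 point cloud in metres. The 5 cm voxel size is an assumption."""
    height = points[:, 2].max() - points[:, 2].min()
    # Volume: count distinct occupied voxels, multiply by voxel volume.
    occupied = np.unique(np.floor(points / voxel).astype(int), axis=0)
    volume = len(occupied) * voxel ** 3
    return height, volume

# Toy example: a 1 m tall, 0.2 m wide synthetic "canopy" of random points.
rng = np.random.default_rng(1)
pts = rng.uniform([0, 0, 0], [0.2, 0.2, 1.0], (5000, 3))
h, v = canopy_metrics(pts)
```

Voxel occupancy is robust to the uneven point density typical of consumer-grade RGB-D sensors, since duplicate points in the same cell count only once.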
A deep learning approach using visual images and pre-trained CNNs was also developed to segment the scene into multiple classes and in particular to detect grapevine clusters.
Tests were performed in a commercial field in Switzerland and, despite the poor quality of the input images, the proposed methods were able to correctly detect fruits, with a maximum accuracy of 91.52%.
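A segmentation CNN of the kind described above outputs per-pixel class probability maps; turning the "grape bunch" map into discrete detections can be done by thresholding and connected-component analysis. The following sketch shows this post-processing step on a toy map; the threshold, minimum area, and function names are illustrative assumptions, not the original system's parameters.

```python
import numpy as np
from scipy import ndimage

def detect_bunches(prob_map, threshold=0.5, min_area=20):
    """Convert a per-pixel bunch-probability map into a list of
    (centroid_x, centroid_y, area_px) detections. Thresholds are
    illustrative, not taken from the original system."""
    mask = prob_map > threshold
    labelled, n = ndimage.label(mask)      # connected components
    detections = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labelled == i)
        if ys.size >= min_area:            # discard tiny spurious blobs
            detections.append((xs.mean(), ys.mean(), ys.size))
    return detections

# Toy probability map: one high-probability blob on a noisy background.
rng = np.random.default_rng(2)
pm = rng.uniform(0.0, 0.3, (64, 64))
pm[20:30, 40:50] = 0.9
dets = detect_bunches(pm)
```

The minimum-area filter is one simple way to suppress the isolated false positives that low-quality input images tend to produce.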