LADDER, an acronym for Labeling and Detection Deployment for Entity Recognition, has turned the dream of building customized computer vision systems into reality, making it accessible to users without programming skills. With a laptop and just a few hours, users can work through LADDER's graphical user interface (GUI) to label objects, train neural networks, and detect objects in images, all without writing a single line of code. More use cases can be found here.
ROOSTER is an image labeler and classifier based on interactive recurrent annotation. We developed this software package to integrate labeling and prediction in a single user-friendly graphical user interface, using interactive deep learning to reduce the laborious human labeling required for the fast development of machine vision systems. ROOSTER provides fully automatic labeling for abundantly available initial images of wheat stripe rust to gain essential predictability. Integrating prediction with labeling allows human adjustments to iteratively improve predictability. The development of a detection system for wheat stripe rust is presented as a use case to demonstrate the efficiency of interactive deep learning for building machine vision systems. The details can be found here.
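As a rough illustration of the interactive loop described above, the sketch below shows one way such a predict-correct-retrain cycle could be wired up in PyTorch. Everything here is an assumption for illustration: `expert_review` is a hypothetical callback standing in for ROOSTER's GUI correction step, and the batch sizes, rounds, and optimizer settings are not taken from ROOSTER's actual implementation.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def interactive_labeling_loop(model, labeled_x, labeled_y, unlabeled_x,
                              expert_review, rounds=3, epochs=5):
    """Predict labels, let an expert correct them, retrain, and repeat."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(rounds):
        # 1. Train (or retrain) on the current labeled pool.
        loader = DataLoader(TensorDataset(labeled_x, labeled_y),
                            batch_size=32, shuffle=True)
        model.train()
        for _ in range(epochs):
            for xb, yb in loader:
                optimizer.zero_grad()
                loss_fn(model(xb), yb).backward()
                optimizer.step()
        if len(unlabeled_x) == 0:
            break
        # 2. Predict labels for the next batch of unlabeled images.
        model.eval()
        batch, unlabeled_x = unlabeled_x[:50], unlabeled_x[50:]
        with torch.no_grad():
            pred = model(batch).argmax(dim=1)
        # 3. The expert corrects the predicted labels (the GUI step).
        corrected = expert_review(batch, pred)
        # 4. Fold the corrected images back into the labeled pool.
        labeled_x = torch.cat([labeled_x, batch])
        labeled_y = torch.cat([labeled_y, corrected])
    return model
```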
One of the major bottlenecks in alfalfa breeding is the labor-intensive phenotyping required for biomass selection. In this study, we used two alfalfa fields to pave a path toward overcoming this challenge with UAV images and fully automatic field plot segmentation for high-throughput phenotyping. The fields were imaged one day before harvesting with a DJI Phantom 4 Pro UAV carrying an additional Sentera multispectral camera. Plot images were extracted by the GRID software to quantify vegetative area based on the Normalized Difference Vegetation Index (NDVI). The prediction model developed from the first field explained 50–70% of biomass variation (R²) in the independent field by incorporating four features from the UAV images: vegetative area, plant height, Normalized Green–Red Difference Index (NGRDI), and Normalized Difference Red Edge Index (NDRE).
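The three spectral indices used here follow their standard definitions, e.g. NDVI = (NIR - Red) / (NIR + Red). As a minimal sketch (not GRID's actual implementation), the four plot features could be extracted and fed to an ordinary least-squares biomass model like this; the NDVI threshold and all function names are illustrative assumptions:

```python
import numpy as np

def norm_diff(a, b):
    """Normalized difference of two bands, e.g. NDVI = (NIR - Red) / (NIR + Red)."""
    return (a - b) / (a + b + 1e-9)

def plot_features(nir, red, green, red_edge, height, ndvi_thresh=0.5):
    """Extract the four UAV features for one plot (threshold is illustrative)."""
    ndvi = norm_diff(nir, red)
    veg = ndvi > ndvi_thresh                   # vegetative pixel mask
    return np.array([
        veg.sum(),                             # vegetative area (pixel count)
        height,                                # plant height
        norm_diff(green, red)[veg].mean(),     # NGRDI over vegetative pixels
        norm_diff(nir, red_edge)[veg].mean(),  # NDRE over vegetative pixels
    ])

def fit_biomass_model(X, y):
    """Ordinary least squares: biomass ~ intercept + the four image features."""
    A = np.c_[np.ones(len(X)), X]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef
```

Fitting `fit_biomass_model` on the first field's features and applying the coefficients to the second field mirrors the cross-field validation reported above.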
RustNet detects wheat stripe rust in images from drones and smartphones, as well as in video frames. It was developed with a semi-automatic image labeling strategy that combines automatic labeling with human correction. In the labeling stage, we started by automatically labeling two hundred 100% disease images and two hundred 100% non-disease images. These images were used to transfer ResNet-18, pre-trained on ImageNet, to the wheat stripe rust detection task. We then used the Stage 1 model to predict dozens of images in which almost all leaves were infected or no leaves were infected. Some predicted labels were correct and some were not, so an expert adjusted them with the help of ROOSTER. The Stage 1 version of RustNet was retrained with these newly labeled images to produce the Stage 2 version. We continued by using the Stage 2 version to predict dozens of new partially infected images; more predicted labels were correct in this round than before. Expert adjustments were applied again, and the newly labeled images were added to the Stage 2 training set to retrain RustNet, yielding the Stage 3 version.
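The transfer step in Stage 1 (adapting ImageNet-pretrained ResNet-18 to the rust task) corresponds to the standard torchvision pattern sketched below; the two-class head is an assumption about the setup, not a detail confirmed in the text:

```python
import torch.nn as nn
from torchvision import models

def make_rust_classifier(num_classes=2):
    """Adapt an ImageNet-pretrained ResNet-18 to rust vs. non-rust images."""
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    # Replace the 1000-way ImageNet head with a task-specific head;
    # the pretrained backbone weights are kept and fine-tuned in later stages.
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model
```

Each subsequent stage would retrain this model on the expanded, expert-corrected label set.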