Push Past Green: Learning to Look Behind Plant Foliage by Moving It

Abstract: Autonomous agriculture applications (e.g., inspection, phenotyping, plucking fruits) require manipulating plant foliage to look behind leaves and branches. Partial visibility, extreme clutter, thin structures, and the unknown geometry and dynamics of plants make such manipulation challenging. We tackle these challenges through data-driven methods. We use self-supervision to train SRPNet, a neural network that predicts what space is revealed on execution of a candidate action on a given plant. We use SRPNet with the cross-entropy method to predict actions that are effective at revealing space beneath plant foliage. Furthermore, as SRPNet predicts not just how much space is revealed but also where it is revealed, we can execute a sequence of actions that incrementally reveals more and more space beneath the plant foliage. We experiment with a synthetic plant (vines) and a real plant (Dracaena) on a physical test-bed across 5 settings, including 2 settings that test generalization to novel plant configurations. Our experiments demonstrate the effectiveness of our overall method, PPG, over a competitive hand-crafted exploration method, and the effectiveness of SRPNet over a hand-crafted dynamics model and relevant ablations.
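The action-selection loop described above can be illustrated with a minimal sketch of the cross-entropy method (CEM). Everything here is an assumption for illustration: `fake_revealed_space` is a hypothetical stand-in for SRPNet's predicted-revealed-space score, and the action dimensionality and hyperparameters are not taken from the paper.

```python
import numpy as np

def cem_plan(score_fn, dim, iters=10, pop=64, elite_frac=0.1, seed=0):
    """Cross-entropy method: sample candidate actions from a Gaussian,
    keep the top-scoring 'elites', and refit the Gaussian to them."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.ones(dim)
    n_elite = max(1, int(pop * elite_frac))
    for _ in range(iters):
        actions = rng.normal(mu, sigma, size=(pop, dim))
        scores = np.array([score_fn(a) for a in actions])
        elites = actions[np.argsort(scores)[-n_elite:]]
        mu = elites.mean(axis=0)
        sigma = elites.std(axis=0) + 1e-6  # keep sampling non-degenerate
    return mu

# Hypothetical scoring function standing in for SRPNet: here the
# "revealed space" is maximized at a known target push parameter.
target = np.array([0.5, -0.2, 0.3])
def fake_revealed_space(action):
    return -np.sum((action - target) ** 2)

best_action = cem_plan(fake_revealed_space, dim=3)
```

In the actual system a learned network scores each candidate push, and because the prediction is spatial, executed pushes can be chained so each one targets space not yet revealed.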

Dracaena Experiment Execution Videos

Base Setting

Execution Video 1

Execution Video 2

Execution Video 3

Execution Video 4

Execution Video 5

Failure Modes

Repeated Actions

Pushing Towards Center

Repeated Pushes Towards Center

Vine Experiment Execution Videos

Base Setting

Execution Video 1

Execution Video 2

Execution Video 3

Sparse Vines

Execution Video 1

Execution Video 2

Execution Video 3

Separated Vines

Execution Video 1

Execution Video 2

Execution Video 3