Invited Speakers

Prof. Cyrill Stachniss (University of Bonn)

Efficient Semantic Perception for Robotics in Agriculture

Abstract: Crop farming plays an essential role in our society, providing food, feed, fiber, and fuel. We heavily rely on agricultural production, but at the same time, we need to reduce its footprint. Agricultural robots offer promising directions to address management challenges in agricultural fields. For that, autonomous field robots need the ability to perceive and model their environment, predict possible future developments, and make appropriate decisions in complex and changing situations. This talk will showcase recent developments in semantic perception for crop production. I will illustrate options to boost the performance of robot perception systems operating in field environments.

Bio: Cyrill Stachniss is a full professor at the University of Bonn and heads the Photogrammetry and Robotics Lab. Before his appointment in Bonn, he was with the University of Freiburg and the Swiss Federal Institute of Technology. He has been a Microsoft Research Faculty Fellow since 2010 and received the IEEE RAS Early Career Award in 2013. From 2015 to 2019, he was a senior editor of IEEE Robotics and Automation Letters. He is currently a spokesperson of the DFG Cluster of Excellence "PhenoRob" at the University of Bonn. His research focuses on probabilistic techniques for mobile robotics, perception, and navigation. The main application areas of his research are autonomous service robots, agricultural robotics, and self-driving cars. He has co-authored 300 publications, won several best paper awards, and coordinated multiple research projects at the national and European levels.

Prof. Cindy Grimm (Oregon State University)

Reach Out and Touch: Transitioning from Visual to Tactile Perception for Manipulation in Orchards

[Teaser Video]

Abstract: In the cluttered environment of an orchard, open-loop control is challenging: touching a branch can shift other branches or fruit, the wind may blow, and precise localization is difficult. We present two in-progress research projects (tree pruning and apple picking) that demonstrate the advantages of closed-loop visual control that transitions to tactile or force-based control.

Prof. Salah Sukkarieh (University of Sydney)

Field Robotics in Grazing Livestock Agriculture 

[Teaser Video]

Abstract: In this talk, I will present our work on the application of field robotics and AI to support grazing livestock production. The talk will cover technologies in mobile robotics, trajectory planning, and machine learning over vegetation that support moving animals to improve nutrition and sustainable plant growth.

Prof. Stavros G. Vougioukas (University of California, Davis)

Efficiency and Fairness Metrics for Scheduling Harvest-assist Robots 

Abstract: Mechanizing the manual harvesting of fresh-market fruits constitutes one of the biggest challenges to the sustainability of the fruit industry. Robotic harvester prototypes are being developed and field-tested for high-value crops like apples, kiwifruit, and strawberries. However, most of the developed robots have not, to date, successfully replaced the perception, agility, and speed of experienced pickers at a competitive cost; the challenges of inadequate fruit-picking efficiency and throughput remain largely unsolved. As an intermediate step to full automation, robotic harvest aids have recently been introduced to increase harvest productivity. These robots reduce workers' non-productive walking times by transporting full and empty trays for each worker. The robots are a shared resource for the human picker crew, and the robot team's dynamic task allocation and schedule result from an optimization process that must incorporate individual workers' stochastic harvest rates. The obvious objective is to compute policies that maximize the harvest crew's efficiency. However, such policies tend to "favor" faster workers, are agnostic of an individual worker's ergonomics and physical capabilities, and can lead to worker reluctance to work with robots. This talk will discuss alternative "fairness" metrics, which can be used to increase equality of worker service or income, and present preliminary results from their use in a robot-aided strawberry harvest simulator calibrated with data from a commercial harvesting operation. The talk aims to stimulate discussions on the implications of using efficiency or fairness metrics during worker-robot collaboration in agricultural and possibly other work settings.
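
To make the efficiency-versus-fairness trade-off concrete, here is a minimal sketch in Python, assuming trays harvested per hour as the efficiency metric and Jain's fairness index over per-worker robot service times as one candidate fairness metric; the metric choices and crew numbers are illustrative only, not the metrics used in the talk.

    # Illustrative only: the talk's actual metrics are not specified in the abstract.
    # This sketch contrasts a crew-efficiency metric with one candidate fairness
    # metric (Jain's index) computed over the robot service each picker receives.

    def crew_efficiency(trays_picked, shift_hours):
        """Crew throughput: total trays harvested per hour (higher is better)."""
        return sum(trays_picked) / shift_hours

    def jains_fairness(service_times):
        """Jain's fairness index over per-worker robot service times.

        Returns a value in (0, 1]; 1.0 means every worker received equal service.
        """
        n = len(service_times)
        total = sum(service_times)
        sum_sq = sum(t * t for t in service_times)
        return (total * total) / (n * sum_sq) if sum_sq > 0 else 1.0

    # Hypothetical crew of four pickers over a two-hour shift.
    trays = [22, 30, 18, 26]            # trays harvested per worker
    service = [11.0, 16.5, 8.0, 13.5]   # minutes of robot tray-transport service per worker

    print(f"efficiency: {crew_efficiency(trays, 2.0):.1f} trays/hour")
    print(f"fairness:   {jains_fairness(service):.3f}")

A scheduler tuned purely for the first number will route robots toward the fastest pickers; a fairness term like the second pushes service (or income) back toward equality across the crew.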

Prof. Simon Pearson (University of Lincoln)

The Future of Agricultural Robotics 

Abstract: The talk will consider how agricultural robotics can support global challenges in the food system, in particular how we can use robotics to concurrently drive economic and environmental productivity to the benefit of all in society. This includes the development of systems to support the selective harvesting of crops, reduce the use of synthetic chemicals, and support the drive to net zero. Advanced robotic technologies supported by AI have the potential to help transform the food system for the public good. We discuss the existing state of the art and the key robotic advances required to realise the technology's full potential.

Dr. Anamika Nigam (Ocado Technology)

Applying Reliability Engineering to Advanced Robotics Systems

[Teaser Video]

Abstract: Ocado Technology is transforming online grocery through cutting-edge innovation in automation, robotics, artificial intelligence, machine learning, simulation, and more. We build and support the Ocado Smart Platform (OSP): an end-to-end e-commerce, fulfilment, and logistics platform that enables unmatched efficiency and the best customer experiences. At the heart of the model are highly automated warehouses, where swarms of mobile robots known as ‘bots’ whizz around on top of a grid of storage bins. We call this system ‘The Hive’.

Our proprietary AI air traffic control system orchestrates the bots to pick customer orders. Travelling at speeds of 4 metres per second with just millimetres between them, the bots can fulfil a 50-item customer order in just a few minutes. Developing robots that work around the clock to attain the required throughput is a challenging task. The bot services team at Ocado Technology adopts best practices from systems engineering, simulation, and testing throughout the product development process to ensure design verification and reliability of the bots. The mission is to excite and resolve every potential failure mode of the bots while they are still in development. Tools and methods used to improve the reliability of the bots include, but are not limited to, parameter diagrams, functional block diagrams, Failure Modes and Effects Analysis (FMEA), Design Verification Plans and Reports (DVP&Rs), and accelerated life testing on rigs. In this presentation, we intend to discuss the use of these tools in the context of the bots, which improves not only their performance but also their long-term reliability.
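
As one concrete example of how an FMEA-style tool ranks what to fix first, here is a small Python sketch using the standard Risk Priority Number (severity x occurrence x detection). The failure modes and ratings below are hypothetical and are not taken from Ocado's actual analyses.

    # Illustrative sketch of a Failure Modes and Effects Analysis (FMEA) ranking.
    # The entries are hypothetical; Ocado's real failure modes and ratings are not shown here.
    from dataclasses import dataclass

    @dataclass
    class FailureMode:
        name: str
        severity: int    # 1-10: impact if the failure occurs
        occurrence: int  # 1-10: likelihood of the underlying cause
        detection: int   # 1-10: difficulty of catching it before deployment

        @property
        def rpn(self) -> int:
            """Risk Priority Number: higher values are addressed first."""
            return self.severity * self.occurrence * self.detection

    modes = [
        FailureMode("wheel encoder drift", severity=6, occurrence=4, detection=5),
        FailureMode("gripper actuator wear", severity=8, occurrence=3, detection=6),
        FailureMode("battery connector fatigue", severity=9, occurrence=2, detection=7),
    ]

    for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
        print(f"{m.name:28s} RPN = {m.rpn}")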



Prof. Andrea Bertolini (Sant’Anna School of Advanced Studies, Pisa)

Bringing Innovation to the Market: Understanding the Regulatory Efforts at European Level and Their Possible Consequences for Robotics in Agriculture


Abstract: Regulation is not a mere impediment to innovation. If well conceived, it provides the necessary conditions for technological proliferation. With this aim in mind, the European Commission has, over the last year and a half, made a series of proposals, starting with the Artificial Intelligence Act and including two proposed directives applicable to technologically advanced products. Those proposals will, at least in part, soon be transposed into law, apply throughout European member states, and eventually produce a Brussels effect globally. The talk will introduce some of the main regulatory aspects, explaining their potential impact on agricultural robotics, including liability, product certification, and risk categorization, and will briefly discuss those potentially relevant aspects that are instead left untouched by those proposals.

Corentin Chauvin-Hameau (Crover)

A Robot for Grain Storage Monitoring

[Teaser Video]

Abstract: Cereal grains are at the basis of our global food system. Every year, around 20% of the total stock of grains is lost while in storage, representing the primary source of pre-consumption losses. Maintaining optimal temperature and moisture conditions is key to preserving quality and preventing problems, such as mould or insect infestations, from arising and developing. There is currently no ideal solution for monitoring the condition of the grain: existing approaches either give a limited picture of what is really happening in the bulk, or require human intervention and present potential safety risks.

Crover is currently developing a robot capable of autonomously navigating in grain bulks and taking measurements with a high spatial resolution. Grain sheds and silos provide many interesting challenges to solve, both in the design of the robot and in the development of its autonomy: the highly slippery and changing nature of the terrain, the strong presence of dust, localisation, and more. Finally, the need for a low-cost system, driven by the market, constrains the available options.

This talk will go through the journey of a small Scottish startup striving to solve all these challenges and present its current progress.

Dr. Konrad Ahlin (Georgia Tech Research Institute)

Virtual Reality and Robotics in Poultry Processing

[Teaser Video]

Abstract: Poultry products are a dominant source of the world’s protein, expected to reach approximately 37% of meat consumption by 2025. However, despite the importance of poultry, many of the operations used in processing this meat are performed manually. The automated systems that exist are typically used as a means of increasing throughput at the cost of yield. Recent advancements in robotic solutions are beginning to address some of these issues. Machine vision can identify key points in unstructured objects, and modern machine designs are fast and agile. Unfortunately, these solutions are not accurate enough to be used for large-scale operations. A production facility requires equipment to have a success rating of greater than 99%, and this target excludes robots from performing all but the simplest of tasks.

However, researchers at the Georgia Tech Research Institute (GTRI), in collaboration with the Agricultural Technology Research Program (ATRP), are putting forward a different approach to food production that mixes robotic labor with human decision making. The key to this approach is to create a virtual reality environment that encompasses the live, relevant information of a task that will be performed by a robot. An operator with an off-the-shelf virtual reality headset and controller can then collaboratively work with the robot on a shared operation from anywhere in the world, with the person giving information or commands to the robot only as necessary.

The pilot project to demonstrate this technology is a robotic solution to the “Cone Loading” operation, where a partially processed piece of chicken is transferred from a bin or conveyor belt onto a moving cone. In processing facilities where this task is performed, two operators are typically tasked with moving approximately 60 pieces per minute (the specifics can vary drastically per facility and production line). This task is simple for a person to perform, but it is taxing for anyone to do over long periods of time. A robotic solution would be desirable, but robots struggle in this environment. Singulating and gripping a malleable, semi-rigid, slippery piece of natural product is possible with modern technology, but creating algorithms to cover every conceivable scenario is still out of reach. Bridging robotic labor with human decision making fills the gaps. Virtual reality allows a robot to perform the task as directed by a human operator. Furthermore, the more interactions a human operator has with a robotic device, the more data is generated for the algorithms to learn how to handle edge-case scenarios. This approach will allow machine learning methods to be developed from large sets of training data gathered in the relevant environments.

Initial results in this research have been very promising. People can very naturally determine feasible grasping positions, and this data is sufficient for the robot to perform the task. Allowing users to connect via the internet changes the labor paradigm and enables a distributed network of cooperative approaches. Every manufacturing environment desires automation as a means of increasing yield and throughput, and robotics has been at the center of the modern industrial revolution. However, the future of manufacturing and food production will be a mix of robotic labor and human intelligence.

Dr. Rocco Limongelli (Sant’Anna School of Advanced Studies, Pisa)

AI in the Hay: The Convergence of AI, Data Legislation, and Sustainability in Agriculture


Abstract: As we move into a new era of farming, where agri-bots are becoming key players, we face novel challenges surrounding data management and sustainability. This talk aims to shed light on these crucial issues and stimulate discussion around potential solutions. Agri-bots depend heavily on data to perform tasks accurately and efficiently. However, the collection and use of this data have brought about complex questions concerning data privacy, ownership, and management. Traditional legal norms that primarily address physical assets fall short when dealing with the intangible nature of data. Recent developments suggest that we need to evolve from the concept of data ownership to data management, focusing on access, control, portability, and erasure. In response to these challenges, the European Union introduced the Data Act in 2022, an innovative piece of legislation aiming to provide a fair and regulated framework for data accessibility. This talk will delve into the implications of the Data Act, its broadened definition of data, its emphasis on fairness and transparency, and the introduction of non-compete requirements. Simultaneously, as we leverage AI to enhance farming practices, we must consider the environmental footprint of these technologies. While AI holds the potential to make farming more sustainable, an unsustainable approach to AI development can yield ambivalent results. As such, it is essential to strive for responsible and ethical deployment of AI in farming to align with sustainability goals.



Dr. Jonathan Blutinger (Redefine Meat)

Developing a Digital Chef

[Teaser Video]

Abstract: Software has made waves in many industries, from telecommunications to music, but it has yet to make its way into our kitchens. What happens when we marry software with food? What type of software and hardware is required to sustain this technology for a future ecosystem of robot chefs? This talk will answer these questions and more surrounding the development of a “digital chef” that can assemble and cook meals without a human in the loop.

Grzegorz Sochacki (University of Cambridge)

Incremental Learning of Cookbook by Visual Recognition of Human-Chef Intention

[Teaser Video]

Abstract: Robotic chefs are a promising technology that can bring sizeable health and environmental benefits by lowering unhealthy convenience-food consumption and reducing the labour required for making highly palatable meat-free foods. In this paper, we propose an algorithm that incrementally adds recipes to the robot’s cookbook based on the visual observation of a human chef, enabling the easier and cheaper deployment of robotic chefs. A new recipe is added only if the current observation is substantially different from all recipes in the cookbook, which is decided by computing the similarity between the vectorizations of the two. The algorithm correctly recognizes known recipes in 93% of the demonstrations and successfully learns new recipes when they are shown, using off-the-shelf neural networks for computer vision. We show that videos and demonstrations are viable sources of data for robotic chef programming when extended to massive publicly available data sources like YouTube.
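
A minimal Python sketch of the incremental-cookbook idea described above: a demonstration is added as a new recipe only when its vectorization is sufficiently dissimilar from every stored recipe. The embedding source, the cosine-similarity choice, and the threshold are placeholders for illustration, not the authors' actual configuration.

    # Illustrative sketch: add a recipe only if the observed demonstration's
    # vectorization is substantially different from all recipes in the cookbook.
    # The embeddings here are random placeholders; in the paper they come from
    # off-the-shelf computer-vision networks.
    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def update_cookbook(cookbook, observation, threshold=0.9):
        """Return (cookbook, added): learn the observation if it is unlike all known recipes."""
        for recipe in cookbook:
            if cosine_similarity(recipe, observation) >= threshold:
                return cookbook, False          # recognized as an existing recipe
        return cookbook + [observation], True   # substantially different: new recipe

    cookbook = []
    for demo in (np.random.rand(128) for _ in range(3)):
        cookbook, added = update_cookbook(cookbook, demo)
        print("learned new recipe" if added else "matched existing recipe")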