Introduction to Shared and Cooperative Control
Abstract: The challenges of 2025 compel us to ask: why do we need shared and cooperative control, interpreted from the perspective of Systems, Man, and Cybernetics? To explore this question, we will take a journey through the history of shared and cooperative control systems, beginning with natural cooperation between humans and between humans and animals, where metaphors such as the horse, the dog, and Gaia illustrate enduring principles of guidance, control, trust, and balance. We then revisit the foundations of automation and the very definition of control, before turning to the first shared-control concepts. From H-Mode and Conduct-by-Wire to applications in automated vehicles and shared-control wheelchairs, these developments illustrate how shared control has matured into cooperative and hybrid systems. Along the way, metaphors have become models, and models have shaped patterns of design and interaction. The keynote concludes by underscoring why assessment is not only a technical necessity, but also a key driver of responsible and sustainable progress in cooperative control.
HMI Requirements Resulting from Cooperative Driving
Abstract: More than 125 years of automobility remind us that individual mobility rests on the driver contributing exceptionally high activity and performance within the human-vehicle system. Besides improvements in vehicle technology, the human factor and human performance are crucial to avoiding accidents in critical situations. However, critical incidents and accidents are often caused by human error or limited human capacity. Since the 1990s, these effects have been successfully countered with a variety of driver assistance systems. Sensory deficits and misperceptions of the driver can and should be compensated by technical sensors. Drivers use these assistance systems temporarily and are assisted in the execution of sub-tasks of the driving task, while remaining, in accordance with the Vienna Convention, in the supervisory role.
The potential full or partial automation of driving is not merely more of the same but a radical qualitative and quantitative change in the paradigm of individual mobility, provoking many questions in human factors research and human-vehicle interaction. Shared control as an interaction paradigm offers many advantages, but also poses challenges for HMI design.
Human-in-the-loop in the operating room: Surgeon or just patient?
Abstract: The global-scale adoption of robotics in the data age brings new challenges to researchers and engineers. With the improving autonomy of systems, safety and reliability are becoming major issues. This is especially visible in safety-critical domains, such as medicine. Even today, commercialized surgical robot systems are almost exclusively based on human-in-the-loop control or deterministic algorithmic solutions (such as registration techniques for image-guided technologies), and adaptive decision-making expert systems are lagging. AI shows very promising results in the critical parts of surgery, such as vision, decision support, reasoning, diagnosis, and situation awareness. AI can reduce the complexity of intraoperative workflow, provide prediction of patient outcome, and enhance the efficiency of postoperative reporting. Can autonomy be pushed to the extreme to replace human surgeons? Can autonomous robotic procedures be as safe as human-driven ones? Will society allow such far-fetched concepts within the current ethical and regulatory frameworks? The presentation will aim to provide a comprehensive answer to those haunting questions.
How to wisely design (haptic) shared control: creating systems that work well and feel nice
Abstract: Automation was introduced already in the first decades of heavier-than-air aviation (Sperry autopilot, 1914). For a long time, the capabilities of this automation were limited, and traditional models for the division of work, based on concepts borrowed from human-human collaboration, described how automation and pilots collaborate (Sheridan and Verplank, 1978). Initially, this automation provided functions that could also be performed by the pilots, and offered a means to lower pilot workload. Gradually, functions were added that cannot be performed by pilots; the Flight Management System (FMS), for example, can optimize climb, cruise, and descent in a manner not achievable by pilots. Such functions are deemed not safety-critical: if the FMS fails, pilots continue the flight in a less optimal manner, with increased workload. Further advances in automation and machine autonomy will lead to situations in which pilots collaborate with automation, rather than merely operate it. However, increased automation autonomy increases the risk of misalignment between pilots and automation. To explain the implications, I use previous work on haptic shared control as a model for collaborative joint control. Haptic shared control provides a means for synchronous collaboration between humans and automation in real time on a short-term (manual) control task. Future automation will enable collaboration on longer-term tasks, synchronously and asynchronously, and optionally with intermittent communication. The presentation explains how the lessons learned from haptic shared control can be re-applied to collaborative control with highly autonomous systems.
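As a rough illustration of the coupling this abstract alludes to, the sketch below shows one common way haptic shared control is realised: the automation renders a bounded guidance torque on the control interface (here a steering wheel), where it sums with the human's torque, so the human can comply with or override the guidance at any time. The dynamics, gains, and function names are assumptions chosen for the example, not the speaker's implementation.

```python
# Minimal sketch of haptic shared control on a steering task (illustrative only).
# The simple wheel dynamics, the guidance law, and all gains are assumed values
# for demonstration; they are not taken from the presentation.

import numpy as np

def automation_guidance_torque(lateral_error, heading_error,
                               k_lat=2.0, k_head=1.5, max_torque=3.0):
    """Bounded guidance torque (Nm) the automation renders on the wheel.

    The torque nudges the wheel toward the automation's preferred trajectory;
    the bound keeps the guidance overridable by the human operator.
    """
    torque = -(k_lat * lateral_error + k_head * heading_error)
    return float(np.clip(torque, -max_torque, max_torque))

def shared_wheel_step(angle, rate, human_torque, guidance_torque,
                      inertia=0.05, damping=0.3, dt=0.01):
    """One Euler step of the steering-wheel dynamics under the summed torques.

    In haptic shared control, human and automation act on the same physical
    interface, so the wheel motion reflects both inputs simultaneously.
    """
    total_torque = human_torque + guidance_torque
    acc = (total_torque - damping * rate) / inertia
    rate += acc * dt
    angle += rate * dt
    return angle, rate

if __name__ == "__main__":
    angle, rate = 0.0, 0.0
    lateral_error, heading_error = 0.5, 0.1   # vehicle starts right of the lane centre
    dt = 0.01
    for _ in range(500):
        human_torque = 0.0                    # passive human: guidance acts alone
        g = automation_guidance_torque(lateral_error, heading_error)
        angle, rate = shared_wheel_step(angle, rate, human_torque, g, dt=dt)
        lateral_error += 0.2 * angle * dt     # crude coupling: steering reduces the error
    print(f"wheel angle: {angle:.3f} rad, lateral error: {lateral_error:.3f} m")
```

Setting a non-zero (counteracting) human_torque in the loop shows the override case: the human's input and the guidance torque are blended by the shared interface rather than one party being switched off.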