From Pixels to Prompts: Open-World Autonomous Driving with Foundation Models (Abhinav Valada)
Abstract: In the last few years, we have seen remarkable progress in AI, from large language models to visual understanding and generation. The key to their success has been leveraging training data at an unprecedented scale and models that are able to learn from such large amounts of data. We are already seeing some examples of applying similar principles to various domains of robotic learning. However, underneath the impressive demo videos, the scenes and tasks they depict are still carefully curated, leading to poor performance in the open world. In this talk, I will discuss our efforts toward open-world robot autonomy, where learned models, from perception to reasoning, generalize effectively across diverse tasks, robots, environments, and scenarios. These techniques have not only set the state of the art but also opened the door to a wide variety of new applications in industry. Lastly, I will conclude the talk by presenting our work on ensuring safe, trustworthy, and responsible robot learning, which is crucial both for open-world learning and for fostering acceptance in society.
Autonomous Teams: Where Learning Meets Control (Andreas Malikopoulos)
Abstract: At the frontier of autonomy lies a fundamental question: how can we design teams of agents that reason, act, learn, and memorize cooperatively in dynamic, uncertain environments? In this talk, I will present a unified perspective that integrates learning and control as the foundational principles for building such autonomous teams. In the first half of the talk, I will discuss how my group has advanced the ability of engineered systems to reason and act optimally under uncertainty, and to learn adaptively from interactions with their environment. I will illustrate these ideas through transportation-related applications, including self-learning powertrain control, power management of hybrid electric vehicles, and optimal coordination of connected and automated vehicles. These examples highlight how data-driven control and reinforcement learning can enable autonomous systems to operate safely and efficiently in real time while achieving near-optimal performance. The second half of the talk will focus on the next frontier—building autonomous teams. I will present a theoretical framework grounded in team theory, a mathematical formalism for decentralized stochastic control problems in which multiple agents with asymmetric information cooperate toward a shared objective. I will discuss recent structural results for sequential dynamic team problems with nonclassical information structures, showing how to construct information states that remain invariant to control strategies, thereby enabling dynamic programming decompositions in decentralized settings. These results point toward a unifying scientific foundation—what I refer to as the Science of Autonomous Team Intelligence—where teams of agents, whether robotic, vehicular, or human–machine, can reason, act, learn, and memorize collectively, achieving coherent and safe behavior in complex, uncertain environments.
Intelligence in Motion: Distributed Control and Architectures for Future Mobility (Bassam Alrifaee)
Abstract: Modern mobility systems are evolving into complex, intelligent networks of autonomous agents. As vehicles, infrastructure, and digital twins become increasingly interconnected, the challenge shifts from achieving individual autonomy to orchestrating distributed systems, and safety and efficiency come to depend on how effectively distributed decision-making scales while remaining reliable. Our research combines methods from artificial intelligence and control theory to enable adaptive, safe, and efficient behavior in connected and automated vehicles (CAVs). We investigate approaches such as multi-agent reinforcement learning, data-driven predictive control, cooperative sensor data fusion, and service-oriented software architectures to advance this vision. This talk explores how distributed control and software architectures enable scalable decision-making for future mobility. We investigate how CAVs learn from data, plan jointly, and coordinate with others in real time while maintaining strict safety guarantees. This enables vehicles not only to plan their own motion but also to anticipate and coordinate with others dynamically. Finally, we highlight the Cyber-Physical Mobility Lab (CPM Lab)—an open-source, reproducible platform for CAV research—demonstrating how open science and collaboration accelerate sustainable autonomy and mobility.
Controllable Congestion: Toward Actionable and Model-based Performance Measures for Highways (Cathy Wu)
Abstract: Performance measures play a central role in transportation systems, guiding investment, planning, and decision-making. Most existing congestion metrics use historical data to describe the current state of the system but provide no indication of how much improvement is truly achievable through active traffic management strategies. As a result, when agencies use these metrics to make critical decisions on new infrastructure, they risk investing in projects that may not meaningfully enhance performance. The need for more actionable metrics is becoming increasingly important with advancements in low-cost intelligent transportation systems (ITS) that can influence traffic flow in real time. We present research that aims to close this gap and introduces controllable congestion as a novel, actionable metric that quantifies the upper bound of total delay that can be reduced using speed control strategies. To estimate controllable congestion, we develop a nonlinear optimization framework grounded in the METANET macroscopic traffic model and solved using model predictive control (MPC). Results from a representative bottleneck scenario across different demand levels and operational constraints show that controllable congestion captures insights unseen in conventional metrics. Collectively, this framework offers decision-makers an actionable tool for evaluating to what extent smart traffic control strategies can yield meaningful improvements in system performance.
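The core idea can be pictured with a deliberately simplified sketch: a single bottleneck whose discharge rate drops once a queue forms, where speed control prevents the drop. The capacity-drop model, the numbers, and the function names below are illustrative assumptions, not the METANET/MPC framework from the talk; controllable congestion appears as the delay that control removes.

```python
def simulate(demand, capacity=2000.0, drop=0.1, controlled=False, dt=1 / 60):
    """Queue at a single bottleneck; returns total delay in vehicle-hours.

    demand: list of arrival rates (veh/h), one per dt-hour step.
    """
    queue, delay = 0.0, 0.0
    for d in demand:
        # Illustrative capacity drop: once a queue is active, the
        # uncontrolled bottleneck discharges 10% below capacity; speed
        # control is assumed (hypothetically) to prevent the drop.
        cap = capacity if (controlled or queue == 0) else capacity * (1 - drop)
        queue = max(0.0, queue + (d - cap) * dt)
        delay += queue * dt
    return delay

# A demand peak above capacity, in one-minute steps (illustrative numbers).
demand = [1500.0] * 30 + [2400.0] * 60 + [1500.0] * 90
uncontrolled_delay = simulate(demand)
controlled_delay = simulate(demand, controlled=True)
controllable = uncontrolled_delay - controlled_delay  # reducible delay
```

In this toy model the metric is simply the delay gap between the uncontrolled rollout and the controlled one; the framework in the talk instead solves a nonlinear MPC problem over the full METANET dynamics to obtain that upper bound rigorously.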
Cooperative Motion Planning via Behavior-Level Agreements in Intelligent Transportation Systems (Jonas Mårtensson)
Abstract: Future Intelligent Transportation Systems rely on cooperative motion planning and shared perception between automated vehicles, connected infrastructure, and cloud services, even under uncertainty in sensing and communication. We present Behavior-Level Agreements (BLAs), a formal mechanism for negotiating and verifying cooperative behaviors in real time. BLAs use temporal logic, reachability-based behavior refinement, and assume/guarantee contracts to ensure that only behaviors that are provably safe and consistent with shared objectives, such as collision avoidance, spatial coordination, and access to constrained resources, remain admissible. The framework explicitly accounts for sensor uncertainty and partial situational awareness, adapting or dissolving cooperation when assumptions no longer hold. Initial proof-of-concept scenarios demonstrate how BLAs enable scalable online refinement of behavior spaces and support safe perception-aware cooperation in complex transport sites.
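As a rough illustration of how an assume/guarantee contract can gate admissibility, the sketch below checks a behavior against one assumption and one guarantee over a shared state. The `Contract` class, the predicates, and the numeric thresholds are hypothetical, not the BLA formalism itself, which works with temporal logic and reachability-based behavior refinement.

```python
from dataclasses import dataclass
from typing import Callable, Dict

State = Dict[str, float]

@dataclass
class Contract:
    assumption: Callable[[State], bool]  # what the agent assumes of its peers
    guarantee: Callable[[State], bool]   # what it promises in return

def admissible(contract: Contract, state: State) -> bool:
    """A cooperative behavior stays admissible only while the assumption
    holds and the guarantee can still be met; otherwise cooperation is
    dissolved and the agent falls back to a conservative behavior."""
    return contract.assumption(state) and contract.guarantee(state)

# Hypothetical merge agreement: the peer leaves a sufficient gap,
# and in return the ego vehicle respects the negotiated speed limit.
merge = Contract(
    assumption=lambda s: s["gap_m"] >= 15.0,
    guarantee=lambda s: s["ego_speed"] <= s["limit"],
)
```

A monitor would re-evaluate `admissible` as new (possibly uncertain) state estimates arrive, mirroring how BLAs adapt or dissolve cooperation when assumptions no longer hold.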
Formal Methods and Safe Personalization for Trustworthy Autonomous Vehicles (Necmiye Ozay)
Abstract: Planning and alignment with human intent, while preserving notions of safety, is crucial for deploying AI-enabled autonomous systems in safety-critical applications. In this talk, I will present our recent work that infuses temporal logic with learning for safety and alignment. In the first part of the talk, I will present a method for learning multi-stage tasks from a small number of demonstrations by learning the logical structure and atomic propositions of a consistent linear temporal logic (LTL) formula. In the second part of the talk, I will show how one can learn to rank different behaviors consistent with a given safety specification from human preferences, while ensuring that rule-violating behaviors are never ranked higher than rule-satisfying ones. These methods will be illustrated with applications in autonomous driving.
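One way to picture the ranking constraint in the second part is as a projection step that forces every rule-violating behavior below every rule-satisfying one before sorting by a learned preference score. The `safe_rank` helper and its toy scores below are assumptions for illustration, not the talk's actual learning method.

```python
def safe_rank(behaviors, score, satisfies_spec):
    """Rank behaviors by a learned score, but never place a
    rule-violating behavior above a rule-satisfying one."""
    safe = [b for b in behaviors if satisfies_spec(b)]
    unsafe = [b for b in behaviors if not satisfies_spec(b)]
    by_score = lambda b: -score(b)  # higher score ranks first
    return sorted(safe, key=by_score) + sorted(unsafe, key=by_score)

# Hypothetical example: "a" has the highest learned score but violates
# the safety specification, so it must still rank last.
ranked = safe_rank(
    ["a", "b", "c"],
    score=lambda b: {"a": 3.0, "b": 2.0, "c": 1.0}[b],
    satisfies_spec=lambda b: b != "a",
)
```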
Uncertainty-Guided Planning Using Natural Language Communication for Cooperative Autonomous Vehicles (Neel Bhatt)
Abstract: Cooperative autonomous driving at scale demands communication strategies that are both efficient and interpretable, yet current methods often fall short—either burdening networks with high-bandwidth sensor data or ignoring the uncertainties inherent in perception and planning. In this talk, I will present UNCAP (Uncertainty-Guided Natural Language Cooperative Autonomous Planning), a novel planning framework that enables connected autonomous vehicles (CAVs) to communicate using lightweight natural language messages while explicitly modeling uncertainty in shared observations and decisions. UNCAP leverages a two-stage communication protocol where each ego vehicle identifies the most relevant peers for information exchange and then transmits succinct, uncertainty-quantified messages that maximize mutual information. This approach allows vehicles to selectively fuse critical signals into the planning process, reducing communication overhead without sacrificing safety. We demonstrate through extensive experiments across diverse driving scenarios that UNCAP achieves substantial improvements in communication efficiency, safety margins, and decision confidence—highlighting its potential for scalable, reliable cooperative autonomy.
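A minimal sketch of the two-stage idea, assuming Gaussian position estimates: stage one ranks peers by how many bits of ego uncertainty their estimate would remove under standard Gaussian fusion, and stage two emits a short uncertainty-quantified message. The function names, the information proxy, and the message format are illustrative assumptions, not UNCAP's actual protocol.

```python
import math

def fused_var(v1, v2):
    """Variance after fusing two independent Gaussian estimates."""
    return 1.0 / (1.0 / v1 + 1.0 / v2)

def info_gain(ego_var, peer_var):
    """Bits of ego uncertainty removed by fusing a peer's estimate."""
    return 0.5 * math.log2(ego_var / fused_var(ego_var, peer_var))

def select_peers(ego_var, peers, k=1):
    """Stage 1: keep only the k most informative peers."""
    ranked = sorted(peers, key=lambda p: info_gain(ego_var, p["var"]), reverse=True)
    return ranked[:k]

def message(peer):
    """Stage 2: a succinct, uncertainty-quantified message (toy format)."""
    return f"object at {peer['pos']:.1f} m, std {peer['var'] ** 0.5:.1f} m"

ego_var = 2.0
peers = [{"pos": 40.0, "var": 4.0}, {"pos": 12.0, "var": 1.0}]
best = select_peers(ego_var, peers, k=1)  # the lower-variance peer wins
```

The ego vehicle would then fuse only the selected messages into planning, trading a few bytes of language for most of the achievable uncertainty reduction.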
Data-Driven Safe Iterative Control: Learning from Successful Trajectories and Failed Executions (Rahul Mangharam)
Abstract: How can robots learn to be safe by failing? This talk presents two complementary approaches for robots to learn to balance safety with high performance from both successful and failed task executions. First, we introduce Safe Information-Theoretic Learning-based MPC (SIT-LMPC) for nonlinear stochastic systems. By utilizing normalizing flows for uncertainty modeling and adaptive penalty functions for safety, this framework learns value functions from prior trajectories. Fully parallelizable on GPUs, SIT-LMPC iteratively improves performance without assuming prior knowledge of system dynamics. Second, we present Failure Aware Iterative Learning-based MPC (FAI-LMPC), a failure-aware constraint learning algorithm for linear systems with unknown dynamics. This method iteratively carves out the admissible state-action set using data from failed rollouts. It recovers a certified controlled-invariant terminal set, ensuring that the controller adapts its constraints to avoid repeating failures. Together, these methods demonstrate how systems can simultaneously learn optimal policies from successes and define safe operational boundaries through failure.
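The failure-aware constraint carving can be caricatured as follows: each failed rollout contributes the state at which it failed, and later rollouts must keep a margin from all recorded failures, iteratively shrinking the admissible set. The class, the ball-shaped keep-out regions, and the margin below are illustrative assumptions; the actual FAI-LMPC algorithm works with linear systems of unknown dynamics and recovers a certified controlled-invariant terminal set.

```python
import math

class FailureAwareConstraints:
    """Toy sketch: carve the admissible state set from failed rollouts."""

    def __init__(self, margin=0.5):
        self.failures = []    # states observed at the moment of failure
        self.margin = margin  # keep-out radius around each failure (assumed)

    def record_failure(self, state):
        """Called at the end of a failed rollout with the failure state."""
        self.failures.append(tuple(state))

    def admissible(self, state):
        """A candidate state is admissible if it keeps the margin from
        every recorded failure, so past failures are never repeated."""
        return all(math.dist(state, f) > self.margin for f in self.failures)

constraints = FailureAwareConstraints(margin=0.5)
constraints.record_failure((0.0, 0.0))  # one failed execution observed
```

A planner would query `admissible` inside its constraint set each iteration, tightening behavior as failures accumulate while successful trajectories (as in SIT-LMPC) improve the value function.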