Research Projects

Safe Perception-based Planning in Unknown Environments

Temporal logic motion planning has emerged as one of the main approaches for specifying a richer class of robot tasks than classical point-to-point navigation, as it can capture temporal and Boolean requirements such as sequencing, surveillance, or coverage. A common assumption in the majority of complex mission planning algorithms is that the robots have known dynamics and operate in known environments modeled by discrete abstractions, e.g., transition systems. As a result, these methods cannot be applied to scenarios where the environment is initially unknown and online re-planning is required as environmental maps are constructed, which limits their applicability.
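
For example, a requirement to eventually visit region π1 and then region π2 (sequencing), to survey region π3 infinitely often (surveillance), and to always avoid an obstacle region πobs can be written as the LTL formula φ = ◊(π1 ∧ ◊π2) ∧ □◊π3 ∧ □¬πobs, where ◊, □, ∧, and ¬ denote the "eventually", "always", "and", and "not" operators; this formula is only an illustrative example.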

To address these challenges, we have proposed the first perception-based safe planning algorithm for multi-robot systems with known dynamics that operate in environments with partially unknown semantic and geometric structure. Specifically, the uncertain environment is modeled using probabilistic semantic maps and/or occupancy grid maps. To define mission and safety properties over such uncertain maps, we extend Linear Temporal Logic (LTL) with sensor-based predicates that allow us to incorporate probabilistic performance and safety guarantees directly into the mission specification. The proposed method generates reactive control policies that adapt to the continuously learned map of the environment, which is updated using noisy semantic measurements generated by learning-enabled perception systems. The proposed method avoids computationally expensive planning in the belief space and, therefore, scales well with the number of robots and the size of the workspace. Finally, we have also developed learning-based approaches for temporal logic mission planning for robots with unknown dynamics operating in uncertain environments; these learning-based control approaches are also supported by probabilistic satisfaction guarantees. A demonstration can be found here.
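
As a rough illustration of the map-update step, the sketch below maintains a probabilistic semantic map with a per-cell Bayesian (log-odds) update driven by noisy detections. The class, the sensor model (fixed true-positive/false-positive rates), and the API are assumptions made for exposition only, not our actual implementation.

```python
import numpy as np

# Minimal sketch: per-cell Bayesian (log-odds) update of a probabilistic
# semantic map from noisy detections. The sensor model below (fixed
# true-positive / false-positive rates) is a simplifying assumption.

class SemanticGridMap:
    def __init__(self, shape, p_prior=0.5):
        # Log-odds that an object of interest is present in each grid cell.
        self.log_odds = np.full(shape, np.log(p_prior / (1.0 - p_prior)))

    def update(self, cell, detected, p_tp=0.9, p_fp=0.1):
        """Bayesian update of one cell given a (noisy) detection result."""
        if detected:
            likelihood_ratio = p_tp / p_fp          # P(det | obj) / P(det | no obj)
        else:
            likelihood_ratio = (1 - p_tp) / (1 - p_fp)
        self.log_odds[cell] += np.log(likelihood_ratio)

    def probability(self):
        """Posterior probability that each cell contains the object."""
        return 1.0 / (1.0 + np.exp(-self.log_odds))

# Example: a detector reports an object twice in the same cell.
m = SemanticGridMap((10, 10))
m.update((3, 4), detected=True)
m.update((3, 4), detected=True)
print(m.probability()[3, 4])   # posterior rises above the 0.5 prior
```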

Action-perception loop for safe autonomy in unknown environments.
Generating semantic measurements using the YOLO neural network.

Real-time Defense Against Adversarial Digital and Physical Attacks on Learning-based Perception Systems


Perception systems typically rely on Deep Neural Networks (DNNs) for object classification. At the same time, DNNs have been shown to be vulnerable to adversarial input images, i.e., inputs that have been deliberately modified to cause either misclassification or a specific incorrect prediction that benefits an attacker. Adversarial examples in the literature can be divided into two sub-classes depending on how the attack is executed: one augments the physical environment to induce misclassification (e.g., adding a sticker to a stop sign), while the other adds a small perturbation to the classifier input data.
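
To make the second (digital) attack class concrete, the sketch below implements the well-known fast gradient sign method (FGSM), which perturbs an input image in the direction of the loss gradient. This is a generic textbook example written in PyTorch, not a reproduction of any specific attack from our evaluation.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, eps=0.01):
    """Fast Gradient Sign Method: add a small perturbation that increases
    the classification loss, often enough to flip the predicted class."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), label)
    loss.backward()
    # Step in the direction of the gradient sign and keep pixels in a valid range.
    x_adv = x_adv + eps * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```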

To establish the reliability and security of perception systems against adversarial input images, we have developed VisionGuard, a novel attack- and dataset-agnostic detection framework for defense against adversarial digital and physical attacks. To evaluate VisionGuard against physical attacks, we have built AdvNet, the first dataset that includes both clean and adversarial traffic sign images that can fool state-of-the-art Convolutional Neural Networks. In fact, VisionGuard is the first defense mechanism that can handle both digital and physical adversarial inputs while scaling to large image domains (e.g., the ImageNet dataset) without sacrificing detection performance.
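
One generic way to illustrate attack-agnostic detection is a transformation-consistency check: if a classifier's output distribution changes sharply when the input is passed through a lossy, label-preserving transformation (here a JPEG round-trip), the input is flagged as suspicious. The transformation, divergence measure, and scoring below are illustrative assumptions and do not describe the exact VisionGuard pipeline.

```python
import io
import numpy as np
from PIL import Image
import torch
import torch.nn.functional as F

def jpeg_roundtrip(img, quality=75):
    """Lossy JPEG round-trip of a CxHxW image tensor with values in [0, 1]."""
    arr = (img.permute(1, 2, 0).numpy() * 255).astype(np.uint8)
    buf = io.BytesIO()
    Image.fromarray(arr).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    arr2 = np.asarray(Image.open(buf)).astype(np.float32) / 255.0
    return torch.from_numpy(arr2).permute(2, 0, 1)

def consistency_score(model, img):
    """KL divergence between softmax outputs on the original and the
    compressed image; larger scores indicate a more suspicious input."""
    with torch.no_grad():
        p = F.softmax(model(img.unsqueeze(0)), dim=1)
        q = F.softmax(model(jpeg_roundtrip(img).unsqueeze(0)), dim=1)
    return F.kl_div(q.log(), p, reduction="batchmean").item()
```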

Optimal Wireless Networking for Multi-Robot Systems

All-time network connectivity plays a pivotal role in enabling teams of mobile robots to accomplish cooperative tasks, such as area coverage and exploration, consensus, temporal tasks, and search-and-rescue missions. Indeed, network connectivity is an underlying assumption in every distributed control and optimization algorithm. For this reason, there has been growing research in recent years on designing controllers that ensure point-to-point or end-to-end network connectivity for all time, employing either graph-based models or more realistic communication models.

We have developed the first distributed integrated control framework that simultaneously optimizes area coverage and routing of information. In particular, the proposed framework considers more realistic wireless communication models than existing communication-aware coverage control methods, as it involves optimal routing of information over a network of varying link reliabilities, and it ensures desired information rates that depend on the frequency with which events occur in the sensors’ vicinity. Moreover, we proposed the first distributed global planning algorithm for multi-robot communication networks in complex environments. As a result, the robots can navigate their environment and accomplish their tasks without being trapped in intermediate stationary configurations, a particularly challenging requirement when planning robot motion in complex non-convex environments. At the same time, the approach optimizes the use of robots and network resources (e.g., available routes and information rates) and does not require any a priori assignment of tasks to robots.
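
As a toy illustration of the routing component, the sketch below finds the most reliable multi-hop route between a robot and an access point by running a shortest-path search over negative log link reliabilities; the specific graph, reliability values, and use of the networkx library are assumptions made only for this example.

```python
import math
import networkx as nx

# Toy network: nodes are robots ("r1"-"r3") and an access point ("ap"); each
# edge carries a link reliability in (0, 1]. Maximizing the product of
# reliabilities along a route is equivalent to minimizing the sum of -log(reliability).
links = {("r1", "r2"): 0.9, ("r2", "ap"): 0.8, ("r1", "r3"): 0.6, ("r3", "ap"): 0.95}

G = nx.Graph()
for (u, v), rel in links.items():
    G.add_edge(u, v, cost=-math.log(rel), reliability=rel)

route = nx.shortest_path(G, "r1", "ap", weight="cost")
reliability = math.prod(G[u][v]["reliability"] for u, v in zip(route, route[1:]))
print(route, reliability)   # ['r1', 'r2', 'ap'] with end-to-end reliability 0.72
```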

Communication-aware coverage control for a network consisting of 14 mobile robots (black dots) and 2 static access points (blue diamonds). Thickness of the communication links (green edges) captures the amount of information flow.

Distributed Autonomy in Communication-Constrained Environments

Preserving network connectivity at all times may prevent the robots from moving freely in their environment to fulfill their tasks and instead restricts their motion to configurations that maintain the reliable communication network required for coordination. This is even more pronounced in communication-limited settings, such as underwater or cluttered environments. Therefore, a much preferred solution is to allow robots to communicate intermittently and operate in disconnected mode the rest of the time. The advantage of intermittent communication is that it gives the robots more flexibility to accomplish their tasks, as they are not constrained by all-time communication requirements. The great challenge in designing distributed intermittent communication protocols is to ensure coordination between the robots even when they mostly operate in disconnected mode.

We developed the first distributed and correct-by-construction intermittent communication control framework that guarantees connectivity over time infinitely often, scales to any number of robots, and is flexible enough to account for various robot tasks. The proposed distributed framework determines online which robots should communicate, where, and when, so that the assigned task is accomplished, information is propagated through the network intermittently in a multi-hop fashion, and a user-specified control objective, such as traveled distance, is optimized. We have applied the proposed intermittent communication framework to various robot tasks, such as temporal logic tasks, state estimation, and high-level, time-critical, dynamic tasks, resulting in significant performance gains compared to existing solutions in the literature that maintain communication at all times. Moreover, we have validated the proposed algorithms experimentally.



Large-Scale Autonomy via Formal Methods

Formal languages such as Linear Temporal Logic (LTL) have been extensively used in robotics due to their ability to capture high-level complex tasks, such as coverage, data gathering, persistent monitoring, and intermittent communication. Finding optimal robot paths that satisfy LTL-specified tasks can be achieved using tools from model checking theory and optimal control synthesis. The grand challenge in optimal control synthesis problems is dealing with the state-space explosion; in fact, existing approaches face scaling challenges as the number of robots, the complexity of the task, or the size of the environment increase.
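
The standard pipeline translates the LTL task into an automaton, composes it with the robot's transition system, and searches the product for an accepting run. The sketch below illustrates this on a toy example with a hand-coded automaton for the simple sequencing task "eventually visit A and then B"; it is only a minimal illustration of the product construction, not any of the scalable algorithms described below.

```python
from collections import deque

# Toy transition system: regions and the motions allowed between them.
ts_transitions = {
    "start": ["A", "C"],
    "A": ["start", "B"],
    "B": ["A"],
    "C": ["start", "B"],
}

# Hand-coded automaton for the co-safe task "eventually A and then B":
# state 0 = nothing seen, 1 = A seen, 2 = A followed by B seen (accepting).
def automaton_step(q, region):
    if q == 0 and region == "A":
        return 1
    if q == 1 and region == "B":
        return 2
    return q

def synthesize(start_region="start"):
    """Breadth-first search over the product of the transition system and the
    task automaton; returns a shortest region sequence satisfying the task."""
    start = (start_region, automaton_step(0, start_region))
    queue, parent = deque([start]), {start: None}
    while queue:
        state = queue.popleft()
        region, q = state
        if q == 2:                      # accepting product state reached
            plan = []
            while state is not None:
                plan.append(state[0])
                state = parent[state]
            return plan[::-1]
        for next_region in ts_transitions[region]:
            nxt = (next_region, automaton_step(q, next_region))
            if nxt not in parent:
                parent[nxt] = state
                queue.append(nxt)
    return None

print(synthesize())   # ['start', 'A', 'B']
```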

To mitigate these challenges, we have developed STyLuS* (large-Scale optimal Temporal Logic Synthesis), which is designed to solve complex temporal planning problems for large-scale multi-robot systems. The proposed method is probabilistically complete, asymptotically optimal, and converges exponentially fast to the optimal solution. STyLuS* can address temporal logic task planning problems that are hundreds of orders of magnitude larger than those that off-the-shelf model checkers (e.g., SPIN, NuSMV, and nuXmv) can handle. Novel applications of the proposed optimal control synthesis algorithm include the control of mobile magnetic microrobots for manipulation tasks and the construction of acoustic impedance maps using mobile robots.

Controlling magnetic micro-robots for temporal logic micro-manipulation tasks.