1) Resilient Autonomy: Failures are inevitable in real-world applications. Autonomous systems integrated into our daily activities should not behave unpredictably when such failures occur. One way to avoid this is to enforce an idle mode or a pre-defined fallback behavior on the system while the specification is revised and the violation is resolved. However, revising a specification (e.g., from operator feedback) or resolving the violation can take arbitrarily long. In this research thrust, we aim to address the following research question: how can we allow a system to continue execution with guarantees on the satisfaction of relaxed specifications (e.g., a drone lands at a site close to the desired end location when that location is occupied, or an autonomous car continues driving and waits for another opportunity to change lanes when the lane change is not completed in the desired time)?
Some recent papers:
A.T. Buyukkocak and D. Aksaray, “Temporal Relaxation of Signal Temporal Logic Specifications for Resilient Control Synthesis”, IEEE Conference on Decision and Control, Cancun, Mexico, 2022.
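As a toy illustration of temporal relaxation, the sketch below considers a nominal STL-style task F_[a,b] (x >= goal), "reach the goal at some time in [a, b]", and searches for the smallest window extension tau such that F_[a, b+tau] (x >= goal) holds on a given trajectory. The 1-D signal, the scalar predicate, and the function name are illustrative assumptions, not the formulation of the cited paper.

```python
# Minimal sketch (assumed setup): find the smallest temporal relaxation of
# an "eventually" task on a discrete-time 1-D trajectory.

def min_temporal_relaxation(traj, a, b, goal):
    """Return the smallest tau >= 0 such that F_[a, b+tau] (x >= goal)
    holds on traj, or None if the goal is never reached after time a."""
    for t in range(a, len(traj)):
        if traj[t] >= goal:
            return max(0, t - b)  # tau = 0 means the nominal spec holds
    return None  # goal never reached; temporal relaxation alone cannot help

trajectory = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
print(min_temporal_relaxation(trajectory, a=0, b=4, goal=3))  # prints 0
print(min_temporal_relaxation(trajectory, a=0, b=4, goal=7))  # prints 3
```

A tau of 0 recovers the nominal specification, so minimizing tau keeps the relaxed behavior as close to the original requirement as the trajectory allows.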
2) Constrained Reinforcement Learning: Constraint satisfaction is a critical aspect of learning optimal policies. When an autonomous vehicle is simultaneously learning and performing its mission in real time, it is not acceptable to violate constraints during exploration: doing so can cause catastrophic events such as accidents or degraded performance. In this research thrust, we aim to develop learning algorithms that yield optimal policies while guaranteeing that desired spatio-temporal specifications are not violated throughout learning. This research was supported by DARPA DSO.
Some recent papers:
D. Aksaray, Y. Yazicioglu, and A.S. Asarkaya, “Probabilistically Guaranteed Satisfaction of Temporal Logic Constraints During Reinforcement Learning”, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2021.
X. Lin, A. Koochakzadeh, Y. Yazicioglu, and D. Aksaray, “Reinforcement Learning Under Probabilistic Spatio-Temporal Constraints with Time Windows”, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2023.
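The sketch below illustrates the idea of learning without violating constraints: Q-learning on a tiny grid where exploration is restricted by action masking, so the agent never enters an unsafe cell. The grid, the masking scheme, and all parameters are simple stand-ins, not the probabilistic-guarantee methods developed in the cited papers.

```python
import random

# Assumed setup: 3x3 grid, one forbidden cell, goal in the far corner.
random.seed(0)
ROWS, COLS = 3, 3
UNSAFE, GOAL = (1, 1), (2, 2)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def safe_actions(s):
    """Moves that stay on the grid and avoid the unsafe cell."""
    return [a for a in ACTIONS
            if 0 <= s[0] + a[0] < ROWS and 0 <= s[1] + a[1] < COLS
            and (s[0] + a[0], s[1] + a[1]) != UNSAFE]

Q = {}           # state-action values, default 0.0
visited = set()  # every state entered during training
alpha, gamma, eps = 0.5, 0.9, 0.2

for episode in range(200):
    s = (0, 0)
    visited.add(s)
    for _ in range(20):
        acts = safe_actions(s)  # exploration is restricted to safe moves
        a = (random.choice(acts) if random.random() < eps
             else max(acts, key=lambda a: Q.get((s, a), 0.0)))
        ns = (s[0] + a[0], s[1] + a[1])
        visited.add(ns)
        r = 1.0 if ns == GOAL else -0.01  # goal reward, small step cost
        best_next = max(Q.get((ns, b), 0.0) for b in safe_actions(ns))
        Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (
            r + gamma * best_next - Q.get((s, a), 0.0))
        if ns == GOAL:
            break
        s = ns

print(UNSAFE in visited)  # prints False: the unsafe cell is never entered
```

Masking enforces the constraint by construction on every exploratory step, which is the key difference from penalty-based methods that only discourage violations after they happen.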
3) Planning in Semantically Uncertain Environments: Temporal logic is a very useful language for expressing rich specifications, and planning under temporal logic specifications has gained significant interest in the literature. However, most existing works focus on scenarios where the semantics of the environment are known. In this research thrust, we focus on planning over semantically uncertain environments. For example, the robot does not know where regions A, B, C, and D are, but its task is "first go to region A, then go to region B, and never visit region C before visiting region D". Accordingly, we address the problem of planning in environments with probabilistic labels and aim to provide theoretical guarantees on correctness, completeness, and safety.
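To make the probabilistic-label setting concrete, the sketch below evaluates a candidate path when each cell carries an uncertain label. The example task "first A, then B, and never C before D" is checked by a small hand-written monitor over the label sequence, and the path's satisfaction probability is estimated by sampling label assignments. The grid, label distributions, and monitor are illustrative assumptions, not the framework of this research thrust.

```python
import random

# Assumed label model: each cell independently carries one label.
LABEL_DIST = {
    0: {"A": 0.9, "none": 0.1},
    1: {"C": 0.3, "none": 0.7},
    2: {"D": 0.8, "none": 0.2},
    3: {"B": 0.95, "none": 0.05},
}

def satisfies(word):
    """Monitor for: eventually A, then eventually B, and no C before D."""
    seen_a = seen_b = seen_d = False
    for lab in word:
        if lab == "C" and not seen_d:
            return False      # C occurred before D: violation
        if lab == "D":
            seen_d = True
        if lab == "A":
            seen_a = True
        if lab == "B" and seen_a:
            seen_b = True     # B only counts after A has been visited
    return seen_a and seen_b

def satisfaction_probability(path, n_samples=20000, seed=1):
    """Monte Carlo estimate of P(path's label sequence satisfies the task)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        word = []
        for cell in path:
            labels, probs = zip(*LABEL_DIST[cell].items())
            word.append(rng.choices(labels, weights=probs)[0])
        hits += satisfies(word)
    return hits / n_samples

p = satisfaction_probability([0, 1, 2, 3])
# Analytic value for this path: 0.9 * 0.7 * 0.95 = 0.5985
# (A at cell 0, no C at cell 1, B at cell 3; cell 2's label cannot hurt).
```

A planner in this setting would compare such probabilities across candidate paths; the theoretical guarantees pursued in this thrust replace sampling with exact or bounded computations.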