Memory is a fundamental component for AI in autonomous systems, enabling them to learn, adapt, and make informed decisions. Over the past few years I have developed a class of neural network models, called memory-guided models, which use past experiences in a more explicit way to guide learning. This research direction can address a number of challenges we face with modern AI systems. Some of the advantages are listed below.
1. Contextual Understanding:
AI systems with memory can store and recall past events, actions, and their consequences. This allows them to learn from mistakes, identify patterns, and make better decisions in similar situations in the future. By remembering past sensory inputs, such as visual and auditory data, AI systems can build a mental model of the world, recognize objects, and anticipate events. The most interesting challenge lies in retrieving information in a context-dependent way. For instance, if you are teaching a humanoid robot to stack Lego blocks, the color of the blocks is irrelevant but the shape is not. However, when following a pictorial guide to build a Lego house, the color of the blocks guides the order in which they should be assembled. I believe there are a number of interesting problems to be solved here, which can positively impact the state of the art in robotics.
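To make this concrete, here is a minimal sketch of context-dependent retrieval, where the same memory store is queried with different feature weightings depending on the task. The feature layout, memory entries, and masks are purely illustrative and not an implementation of the memory-guided models described above.

```python
import numpy as np

# Hypothetical memory entries: each past experience stores a feature vector
# [length, width, color_r, color_g, color_b] together with an observed outcome.
memory_features = np.array([
    [4.0, 2.0, 1.0, 0.0, 0.0],   # long red block, stacked stably
    [2.0, 2.0, 0.0, 0.0, 1.0],   # square blue block, toppled
    [4.0, 2.0, 0.0, 1.0, 0.0],   # long green block, stacked stably
])
memory_outcomes = ["stable", "toppled", "stable"]

def retrieve(query, context_mask, k=1):
    """Return the k most similar memories, with features weighted by context.

    context_mask down-weights features that are irrelevant to the current task
    (e.g. color when stacking blocks, shape when matching a pictorial guide).
    """
    diffs = (memory_features - query) * context_mask
    dists = np.linalg.norm(diffs, axis=1)
    nearest = np.argsort(dists)[:k]
    return [(memory_outcomes[i], float(dists[i])) for i in nearest]

# Stacking task: only shape matters, so the color channels are masked out.
stacking_mask = np.array([1.0, 1.0, 0.0, 0.0, 0.0])
query_block = np.array([4.0, 2.0, 0.0, 0.0, 1.0])    # long blue block
print(retrieve(query_block, stacking_mask))          # [('stable', 0.0)]
```

Under the stacking context the color channels are masked out, so a blue block with the same shape as a previously stacked red block retrieves the same past outcome; under a pictorial-guide context the mask would instead emphasize color.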
2. Decision-Making and Planning:
Memory enables AI to forecast future outcomes based on historical data. This is crucial for tasks like traffic prediction, weather forecasting, and autonomous vehicle navigation. The ability to draw situational similarity is what allows humans to act prudently in new situations. Memories allow different experiences to be compared at an abstract level without the need for functional similarity. Priming decision-making with memories therefore reduces to a search problem, and in some sense is complementary to the previous point about context.
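As a rough sketch of what this search problem might look like, the snippet below retrieves the most similar past episodes in an abstract embedding space and returns the decisions that worked there. The embeddings and decision labels are random stand-ins, not real data or a specific model.

```python
import numpy as np

# Hypothetical episodic memory: abstract embeddings of past situations paired
# with the decision that worked there. Any encoder could produce the
# embeddings; random vectors stand in for them here.
rng = np.random.default_rng(0)
episode_embeddings = rng.normal(size=(100, 16))
episode_decisions = [f"plan_{i % 5}" for i in range(100)]

def prime_decision(situation_embedding, top_k=3):
    """Retrieve the most similar past episodes and return their decisions.

    Similarity is computed in the abstract embedding space, so two episodes
    can be close without being functionally similar at the raw-input level.
    """
    a = episode_embeddings / np.linalg.norm(episode_embeddings, axis=1, keepdims=True)
    b = situation_embedding / np.linalg.norm(situation_embedding)
    sims = a @ b
    best = np.argsort(sims)[::-1][:top_k]
    return [(episode_decisions[i], float(sims[i])) for i in best]

current_situation = rng.normal(size=16)
print(prime_decision(current_situation))   # candidate decisions to seed the planner
```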
3. Continuous Learning and Adaptation:
Memory allows AI systems to continuously learn and improve their performance over time. They can adapt to changing environments and new challenges by updating their knowledge base. AI can learn new skills and strategies by observing and imitating human behavior or by practicing in simulation. Memories make this possible because they can act as cluster centers that group different behaviors; they serve as centroids that keep distinct behaviors separate.
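A minimal sketch of memories acting as cluster centers: each incoming experience either refines the nearest stored centroid or spawns a new one when it is sufficiently different. The features and threshold below are arbitrary placeholders chosen for illustration.

```python
import numpy as np

class BehaviorMemory:
    """Memories as cluster centers: each stored centroid summarizes one group
    of behaviors, and new experiences either refine an existing centroid or
    start a new one when they are sufficiently different."""

    def __init__(self, new_cluster_threshold=2.0):
        self.centroids = []   # one centroid per behavior cluster
        self.counts = []      # how many experiences each centroid has absorbed
        self.threshold = new_cluster_threshold

    def observe(self, features):
        features = np.asarray(features, dtype=float)
        if not self.centroids:
            self.centroids.append(features.copy())
            self.counts.append(1)
            return 0
        dists = [np.linalg.norm(c - features) for c in self.centroids]
        i = int(np.argmin(dists))
        if dists[i] > self.threshold:            # unfamiliar behavior: new cluster
            self.centroids.append(features.copy())
            self.counts.append(1)
            return len(self.centroids) - 1
        self.counts[i] += 1                      # familiar behavior: refine centroid
        self.centroids[i] += (features - self.centroids[i]) / self.counts[i]
        return i

memory = BehaviorMemory()
print(memory.observe([0.0, 0.0]))   # 0 -> first behavior cluster
print(memory.observe([0.1, 0.0]))   # 0 -> same cluster, centroid shifts slightly
print(memory.observe([5.0, 5.0]))   # 1 -> clearly different behavior, new cluster
```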
Neurosymbolic learning algorithms offer a promising way to address several critical challenges in current autonomous systems by combining the strengths of neural networks and symbolic reasoning. Here’s how these algorithms can tackle some of the biggest issues:
Interpretable decision-making and human-robot interaction: Pure neural network models are often "black boxes," making it hard to understand or explain their decisions. Neurosymbolic algorithms enable symbolic reasoning to be layered onto neural networks, enhancing interpretability. This is especially valuable for autonomous systems where decisions must be understandable and justifiable, such as autonomous vehicles or medical robotics. The same machinery supports human-robot interaction, which involves context, language, and intention, tasks where symbolic reasoning excels. Neurosymbolic learning can help autonomous systems better interpret human instructions, respond appropriately, and even understand complex human cues, enabling smoother and more reliable collaboration.
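As a toy illustration (not the approach of any particular neurosymbolic framework), the sketch below layers hand-written symbolic rules over hypothetical confidence scores from a neural perception module, so every decision is returned together with an explicit, human-readable justification.

```python
def decide(perception):
    """perception: dict of symbol -> confidence, as produced by a neural model."""
    rules = [
        ("stop",      lambda p: p.get("pedestrian", 0.0) > 0.5,
         "pedestrian detected with confidence above 0.5"),
        ("slow_down", lambda p: p.get("wet_road", 0.0) > 0.7,
         "road surface classified as wet"),
        ("proceed",   lambda p: True,
         "no triggering condition fired"),
    ]
    for action, condition, reason in rules:
        if condition(perception):
            return action, reason    # the matched rule doubles as the explanation

action, reason = decide({"pedestrian": 0.83, "wet_road": 0.2})
print(action, "--", reason)          # stop -- pedestrian detected with confidence above 0.5
```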
Handling complex, high-level reasoning: Neural networks alone struggle with tasks that require reasoning over multiple steps or understanding relationships between abstract concepts, which are essential for high-level decision-making. Neurosymbolic learning can integrate logical rules and reasoning with perception, enabling autonomous systems to perform complex planning and reasoning, such as in logistics or search-and-rescue scenarios.
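A toy forward-chaining sketch of the kind of multi-step symbolic reasoning that could sit on top of neural perception; the facts and rules below are made up for illustration.

```python
# Facts a neural front end might extract, plus hand-written inference rules.
facts = {"holding(box)", "at(robot, room_a)", "door_open(room_a, room_b)"}
rules = [
    ({"at(robot, room_a)", "door_open(room_a, room_b)"}, "reachable(room_b)"),
    ({"holding(box)", "reachable(room_b)"}, "can_deliver(box, room_b)"),
]

changed = True
while changed:                       # apply rules until no new fact is derived
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print("can_deliver(box, room_b)" in facts)   # True: derived in two reasoning steps
```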
Improved robustness in dynamic environments, safety and verification: Autonomous systems need to adapt to rapidly changing and uncertain environments, where pure data-driven approaches might fail without extensive retraining. Neurosymbolic algorithms can bring in prior symbolic knowledge (e.g., rules, facts) to guide decision-making, making systems more robust and able to handle novel situations without requiring large amounts of new data. Ensuring safety in autonomous systems is challenging due to the unpredictability of neural networks. Neurosymbolic algorithms can leverage explicit rules and constraints, making it easier to verify and guarantee safe behavior. This is particularly crucial in domains like autonomous driving, where safety guarantees are essential.
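One common pattern here is a runtime "safety shield": a learned policy proposes an action, and explicit symbolic constraints veto any proposal that violates them. The sketch below is a minimal, hypothetical version with made-up constraints and action format, not a verified controller.

```python
SPEED_LIMIT = 2.0       # m/s, hypothetical rule
MIN_CLEARANCE = 0.5     # m, hypothetical rule

def violated_rule(action, state):
    """Return the name of the violated constraint, or None if all are satisfied."""
    if abs(action["velocity"]) > SPEED_LIMIT:
        return "speed limit exceeded"
    if state["nearest_obstacle"] - abs(action["velocity"]) * state["dt"] < MIN_CLEARANCE:
        return "would enter the obstacle safety margin"
    return None

def safe_step(policy_action, state):
    """Accept the policy's proposal only if it satisfies every explicit rule."""
    reason = violated_rule(policy_action, state)
    if reason is not None:
        return {"velocity": 0.0}, reason   # fallback that trivially satisfies the rules
    return policy_action, "policy action accepted"

state = {"nearest_obstacle": 0.8, "dt": 0.5}
print(safe_step({"velocity": 1.5}, state))   # vetoed: 0.8 - 0.75 < 0.5
print(safe_step({"velocity": 0.4}, state))   # accepted: 0.8 - 0.2 >= 0.5
```

Because the constraints are written explicitly, this layer can be inspected and verified independently of the neural policy it wraps.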
Efficiently learning with less data: Neural networks often need vast amounts of labeled data, which can be impractical or costly to obtain. By combining neural perception with symbolic knowledge that encodes prior information, neurosymbolic approaches can significantly reduce the need for data, making autonomous systems more viable in domains where labeled data is scarce (e.g., rare medical conditions or specific industrial faults).
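One way this plays out in practice is weak supervision: prior knowledge is written down as labeling rules that annotate otherwise unlabeled data, reducing the amount of manual labeling needed. The records, fields, and thresholds below are invented purely for illustration.

```python
# Hypothetical unlabeled sensor records from an industrial monitoring setting.
unlabeled = [
    {"temperature": 95, "vibration": 0.9},
    {"temperature": 40, "vibration": 0.1},
    {"temperature": 88, "vibration": 0.2},
]

def rule_label(record):
    # Prior knowledge encoded as rules instead of collected as labeled data.
    if record["temperature"] > 90 and record["vibration"] > 0.5:
        return "fault"
    if record["temperature"] < 50:
        return "nominal"
    return None   # rule abstains; leave for the model or a human to resolve

weak_labels = [rule_label(r) for r in unlabeled]
print(weak_labels)   # ['fault', 'nominal', None]
```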
Enhanced task generalization and logical consistency: Pure neural networks may overfit to specific tasks or data distributions, making it hard for autonomous systems to generalize. Neurosymbolic learning allows systems to use structured, symbolic knowledge to generalize across different contexts or tasks more effectively. This is valuable in scenarios like household robotics, where tasks are diverse and vary between environments. Neural networks often struggle with maintaining logical consistency and common sense, leading to potentially dangerous behavior in autonomous systems. Neurosymbolic approaches can provide a foundation for consistent, rule-based reasoning, allowing systems to apply common sense or avoid logical fallacies, improving reliability.
By blending the strengths of neural learning with structured symbolic logic, neurosymbolic learning algorithms provide autonomous systems with the ability to make more robust, interpretable, and adaptable decisions, improving both their performance and trustworthiness in real-world applications. The image on the left below has been sourced from the DARPA ANSR program.
Verifying safety properties of autonomous systems is challenging for several reasons, primarily because these systems must operate in complex, dynamic environments and adapt to unpredictable scenarios. Videos of robotics experiments are often deceptive: a method may be declared a success if it works in 3 out of 10 trials, and what goes out in publications or the media are the 3 cherry-picked videos in which it did work. However, to be considered a dependable technology, things need to work at least 99 times out of 100. Verification techniques are required to make this next step possible, something that is often less talked about in the robotics literature. While there are some major hurdles, listed below, all of these should be viewed as potential research opportunities.
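Before turning to the hurdles, a quick back-of-the-envelope check on the gap between 3-out-of-10 demos and 99-out-of-100 dependability: the snippet below computes how many consecutive failure-free trials are needed before one can claim, at 95% confidence, that the failure rate is below 1%. This is a standard binomial argument, not tied to any particular benchmark.

```python
import math

def trials_needed(max_failure_rate=0.01, confidence=0.95):
    # With n failure-free trials, P(all succeed | failure rate >= eps) <= (1 - eps)**n.
    # Require that probability to fall below 1 - confidence.
    return math.ceil(math.log(1 - confidence) / math.log(1 - max_failure_rate))

print(trials_needed())        # 299 -- close to the "rule of three" estimate of 3/0.01
print(trials_needed(0.001))   # 2995 -- an order of magnitude more for 99.9% reliability
```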
High-dimensional and continuous state space: Autonomous systems, like robots, operate in high-dimensional spaces with continuous variables, making it difficult to account for every possible state. Safety guarantees require evaluating many potential states and actions, which becomes computationally intractable in high-dimensional spaces.
Complex interactions with the physical world: Autonomous systems must navigate, manipulate, or respond to physical environments that are not fully predictable. The physical world introduces uncertainty due to factors like sensor noise, unmodeled dynamics, and environmental changes. This unpredictability makes it difficult to ensure that the AI system’s behavior will consistently meet safety properties.
Adaptability and learning in real time: Many autonomous systems, particularly those using reinforcement learning, adapt and learn as they interact with the environment. While this adaptability helps with performance, it also means that the system’s behavior may change in unexpected ways, making it harder to verify safety properties that rely on fixed, predictable actions.
The “curse of dimensionality” in formal verification: Traditional formal methods for verifying safety (such as model checking) face the curse of dimensionality. As the number of variables and states grows, so does the computational burden, often exponentially, making exhaustive verification impractical for complex systems.
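A quick illustration of why exhaustive enumeration blows up: discretizing each of n continuous state variables into k bins already yields k to the power n abstract states to check, before any actions or transitions are considered.

```python
def num_states(n_variables, bins_per_variable=10):
    # Discretize each continuous variable into a fixed number of bins.
    return bins_per_variable ** n_variables

for n in (2, 6, 12, 20):
    print(f"{n:>2} variables -> {num_states(n):.2e} states")
# 2 variables -> 1.00e+02 states ... 20 variables -> 1.00e+20 states
```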
Uncertainty in human-robot interaction: Many autonomous systems are designed to work with humans, introducing additional challenges. Humans’ behaviors and decisions are difficult to predict, and AI systems must be able to respond safely in real-time, even to unexpected or irrational actions. Ensuring safety in these interactions requires accurately modeling human behavior, which is an inherently uncertain and variable factor.
Limitations of current safety frameworks: Many existing frameworks were not designed with autonomous systems in mind. Traditional verification methods assume relatively static and isolated systems. Autonomous systems, however, often need to be responsive to their environments and operate under limited time constraints, which can make conventional methods inadequate.
Because of these factors, verifying safety properties of autonomous systems often requires a mix of approaches, such as probabilistic reasoning, simulation-based testing, formal methods (when feasible), and real-world testing to handle the inherent uncertainty and complexity.
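For the simulation-based part of that mix, one minimal sketch is statistical testing: estimate the violation rate from randomized rollouts and attach a confidence bound to the estimate. The simulator stub below is hypothetical, and the bound is a plain Hoeffding inequality rather than a full verification procedure.

```python
import math
import random

def simulate_episode(rng):
    """Stand-in for one randomized rollout; returns True if a safety violation occurred."""
    return rng.random() < 0.02   # the true rate would be unknown in a real setting

def estimate_violation_rate(n_episodes=10_000, confidence=0.95, seed=0):
    rng = random.Random(seed)
    violations = sum(simulate_episode(rng) for _ in range(n_episodes))
    rate = violations / n_episodes
    # Hoeffding bound: with probability >= confidence, the true rate is within +/- eps.
    eps = math.sqrt(math.log(2 / (1 - confidence)) / (2 * n_episodes))
    return rate, eps

rate, eps = estimate_violation_rate()
print(f"estimated violation rate: {rate:.4f} +/- {eps:.4f}")
```

Such estimates complement, rather than replace, formal methods: they scale to high-dimensional systems but only provide probabilistic rather than exhaustive guarantees.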