About this Page
When a friend from my student days found out that I had embarked on writing a third book on AI, he posed a thought-provoking question: What new aspects of AI theory could possibly be explored at this stage? He wanted to understand the distinction between the Feeling AI I'm currently focusing on and the prevalent AI models that have gained widespread attention on the internet. Unfortunately, providing him with links to sources elucidating this difference proved challenging, as the dominance of Large Language Models in Generative AI has overshadowed alternative AI approaches. These circumstances inspired me to create my own personal website. The main page will serve as a platform for regularly updating visitors with news about Feeling AI, including my publications and reviews. I intend to present information on this page in a format accessible to individuals outside the AI domain, while providing links to the scientific justifications for further exploration.
To answer his question, I will adopt the perspective of Ming-Hui Huang and Roland T. Rust¹ on AI. In real life, the knowledge and skills required of employees delivering a service to customers depend on the nature of the service. If the service is provided by a machine rather than a human, the machine must possess a level of intelligence comparable to Human Intelligence (HI). Building upon this foundation, J. Reis, Y. Cohen, N. Melao, J. Costa, and D. Jorge proposed three types of AI: mechanical, thinking, and feeling.
Intelligent Machines (IM) that autonomously execute simple, standardized, repetitive, and routine tasks are classified under Mechanical AI. A civil example is the use of smart robots to clean hotel rooms.
Thinking AI, in contrast to Mechanical AI, possesses the ability to learn and adapt from data, making it suitable for complex, systematic, rule-based, and well-defined tasks. Within the realm of Thinking AI there are two subtypes: Analytical AI, which is already in service, and Intuitive AI, which we first encountered recently (in 2023). Analytical AI focuses on exploring customer diversity, identifying meaningful patterns, and delivering personalized services to customers. Examples include AI personal assistants such as Apple's Siri, Microsoft's Cortana, Google's Bard, Amazon's Alexa, and Samsung's Bixby. These assistants automatically self-improve by learning from data and rely on natural-language and voice queries to function effectively. Intuitive AI, the other subtype of Thinking AI, can generate adaptive personalization systems that become increasingly effective at personalizing services for individual customers over time (Ming-Hui Huang and Roland T. Rust²). The era of generative Artificial Intelligence (gen AI) is currently underway. Platforms like OpenAI's ChatGPT, IBM's Watson, and other representatives of gen AI are employed to generate new content from multimodal data (text, visual, and audio), taking contextual factors into account. Gen AI has garnered significant interest across industries and among individuals for both professional and personal use. A distinguishing feature of services provided by Thinking AI is their connectivity requirement: the AI must be connected to global internet resources to deliver the service effectively. The ultimate goal of further improvement in Thinking AI is Artificial General Intelligence (AGI): a general-purpose AI capable of learning to accomplish any intellectual task that humans can perform. AGI is envisioned as a Universal Problem Solver, a concept that scientists have been striving towards for over 75 years. Like gen AI, AGI is considered by scientists as a component of the internet environment; without access to internet knowledge and databases, AGI would be limited in its ability to provide services effectively.
Scientists currently studying AI are primarily focused on creating specifications for future systems, and the discussion revolves around empathetic intelligence rather than Feeling AI (FAI). The term "Feeling AI," as defined by M. Huang and R. Rust¹, actually refers to empathetic intelligence: AI's ability to recognize and understand others' emotions, respond to them appropriately, and influence them. The goal is to develop an "Emotional Shell" for intelligent machines, enabling them to behave as though they have feelings when interacting with humans. Such an Emotional Shell can be created today using gen AI based on artificial neural networks capable of learning from multimodal data. Indeed, the Emotional Shell approach is well-suited for automating services performed by intelligent machines in the virtual world of the internet, where they can effectively serve humans or other machines. However, when it comes to automating services performed by autonomous machines, such as robots operating in the real physical world, relying solely on the Emotional Shell cannot address all the challenges of ensuring the machines' autonomy. For years, I have been developing an alternative approach from a different perspective. Cognitive functions such as perception, needs, drives, emotions, context, and attention play a crucial role in supporting autonomous decision-making by Intelligent Machines in the real physical environment. FAI is not just an "Emotional Shell" reflecting given input data; it primarily shapes behavior, an infinite sequence of responses, by mapping input data into machine actions through cognitive processes. The main task of FAI cognitive models, inspired by living beings, is to help intelligent machines make decisions in uncertain situations with incomplete data.
Today, a wide range of services is provided by robots of various types: drones, aerospace and aquatic machines, autonomous vehicles, and humanoid robots. Robotics, much like AI, has entered our lives, but unlike AI it has entered the physical world rather than the virtual one. Recent remarkable achievements in robotics include Waymo's self-driving cars and robo-taxis, robo-shuttles, and autonomous trucks. A robot is a universal machine designed to provide various types of services within the limits of its construction. Tuning a robot for specific services is typically achieved through training, a process that has traditionally required significant time and human resources. To consolidate the efforts of robot laboratories worldwide, the RT-X project has been launched. Its ultimate goal is to create a General Robotic Brain by assembling data, resources, and code pertaining to the skills robots have already been taught, so that future general-purpose robots can be trained more efficiently by leveraging collective knowledge and expertise from across the globe. This is the first role of gen AI in its collaboration with robotics. The next task of gen AI is to transfer to robots the ability to reason about the physical world: an understanding of semantic relationships between objects in an image, basic common sense, and other symbolic knowledge that may not be directly related to the robot's physical capabilities. These models are trained by scraping the internet for data. The capability to reason and draw conclusions, coupled with a pre-trained set of skills, significantly raises the intelligence of the services a robot delivers. Yet a robot that autonomously performs commands but remains inactive until receiving the next one lacks true autonomy in decision-making: it can execute commands from various sources, including humans and pre-programmed sequences, but it still relies on external input to determine its actions. What does such a robot lack to be a truly autonomous system? Compare it to a living organism like the Hydra, which has a simple nervous system of about 1,000 neurons, and the difference in autonomy is stark: the Hydra carries out its mission throughout its life autonomously, without waiting for external commands. This autonomy is provided by the Feeling Natural Intelligence inherent in living beings, the main tool that enables autonomous decision-making in the various situations the Hydra faces.
Many different approaches to creating AI have been tried over its history. I have always adhered to the 3D approach: borrow what NATURE does, develop as EVOLUTION does, and simplify as far as POSSIBILITY allows. Using the 3D approach, my colleagues and I have created mathematical and computer models of several cognitive functions borrowed from living beings and implemented them in successive versions of an incomplete Feeling AI blueprint. Thus sensory data perception, the main cognitive function, has been implemented as a Perception System and verified in several applications [1], [2], [3], [4]. In the same way, models of individual cognitive functions (short-term memory, context, attention, needs, emotions, and goal-oriented control with continuous planning) have been proposed and verified. An attempt was then made to couple these cognitive functions together in a version of the FAI blueprint aimed at supporting autonomous decision-making in unmanned systems.
I'm currently working on the third book of the "Intelligent Machine Trilogy", devoted solely to Feeling Artificial Intelligence. The starting point of my reasoning is as follows. A living being has sensory systems to perceive the external and internal environments and actuators to act on the external environment. To fulfill its mission, a living being constantly generates actions or reactions; this infinite sequence of responses is exhibited as its behavior. Natural intelligence ensures the endless realization of the living being's mission by shaping responses according to the external and internal states of the environment. The greater the uncertainty of the external environment, the higher the level of natural intelligence required to realize the mission. I can now answer the question of what FAI should be. Feeling AI is a model that mimics as closely as possible the principles by which the Natural Intelligence of a living being produces responses, designed to maintain the autonomy of the intelligent machine's mission fulfillment for as long as possible. In other words, Feeling AI does for an intelligent machine what Natural Intelligence does for a living being: it ensures its continued existence.
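To make this response-shaping loop concrete, here is a minimal sketch in Python. Everything in it (class names, fields, the toy decision rules) is my illustrative assumption for readers outside the AI domain, not a fragment of the actual FAI models:

```python
# A minimal sketch of behavior as an infinite sequence of responses:
# perceive the external and internal environments, shape a response,
# act. All names and thresholds here are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class State:
    external: dict  # readings from exteroceptive sensors
    internal: dict  # internal variables (e.g., energy level)

class FeelingAI:
    """Maps the perceived state of the environment into an action."""
    def respond(self, state: State) -> str:
        # The real model shapes responses through cognitive processes
        # (needs, drives, emotions, context, attention); this toy rule
        # set only illustrates the idea of internally driven choice.
        if state.internal.get("energy", 1.0) < 0.2:
            return "seek_energy"      # an internal need dominates
        if state.external.get("threat"):
            return "withdraw"         # reactive self-protection
        return "explore"              # default mission behavior

def behave(agent: FeelingAI, sense, act):
    """Behavior is an endless perceive-respond-act cycle sustained
    for as long as the machine can fulfill its mission autonomously."""
    while True:
        state = sense()               # perceive both environments
        action = agent.respond(state) # shape the next response
        act(action)                   # actuators change the world
```

The point of the sketch is the shape of the loop: the machine never waits for an external command; every iteration produces a response determined by its own perceived state.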
My strategy for creating FAI is "from simple to complex". First, I sought out the views of scientists in biology, neurobiology, and cognitive psychology on the Natural Intelligence of a living being. Is there a structural blueprint of Natural Intelligence based on scientific facts from biology and psychology? Does such an architecture discern different levels of intelligence? Luckily, I found such a paper. Paul F. M. J. Verschure, Cyriel M. A. Pennartz, and Giovanni Pezzulo consider the goal-directed behavior of animals a product of the cognitive processes mentioned above. Their proposed architecture has a distinct separation of levels, from the simplest reactive through adaptive and cognitive to thinking. I then looked for the living being with the simplest nervous net whose behavior has been sufficiently well studied and published by scientists in biology, neurobiology, and cognitive psychology; Hydra was chosen as it fulfills these requirements. Analyzing the behavioral data allowed me to formulate six principles for how Hydra's Natural Intelligence shapes responses in different situations, and an architecture blueprint of the reactive layer of FAI satisfying these six principles has been proposed. Sensation, Action, and Response-making systems have been formalized on the basis of mathematical and computer models of cognitive functions. It is important to note that the FAI reactive layer realizes the Perception function only partly: data abstraction is done separately for each sensory modality, and multimodal concepts cannot yet be built. The principles of response-making can only be explained through the influence of such cognitive functions as need, drive, and emotion; a hedged sketch of this mechanism follows below. On this theoretical foundation, a new method of soft sequential control for autonomous systems has been proposed, which my graduate student is currently working on.
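The sketch below shows one way need, drive, and emotion could interact in a Hydra-like reactive layer. The six principles themselves are not reproduced here, and the coupling formula (distress amplifying drive) is an assumption of mine for illustration, not the published model:

```python
# A hedged sketch of a reactive layer: responses are selected by
# competing drives derived from needs, modulated by a scalar
# emotional state. Names and formulas are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Need:
    name: str
    level: float          # 0.0 = satisfied, 1.0 = maximally deprived
    action: str           # innate response that reduces this need

@dataclass
class ReactiveLayer:
    needs: list[Need] = field(default_factory=list)
    emotion: float = 0.0  # valence in [-1, 1]; negative = distress

    def drives(self) -> dict[str, float]:
        # A drive grows with the deprivation of its need; distress
        # amplifies urgency (an assumed, deliberately simple coupling).
        gain = 1.0 + max(0.0, -self.emotion)
        return {n.action: n.level * gain for n in self.needs}

    def respond(self, stimulus: dict) -> str:
        if stimulus.get("noxious"):   # a hard-wired reflex wins outright
            self.emotion = -1.0
            return "contract"
        d = self.drives()
        # Otherwise the strongest drive dictates the response.
        return max(d, key=d.get) if d else "idle"

layer = ReactiveLayer(needs=[Need("food", 0.7, "sweep_tentacles"),
                             Need("light", 0.2, "move_to_light")])
print(layer.respond({}))              # -> "sweep_tentacles"
print(layer.respond({"noxious": 1}))  # -> "contract"
```

Even this toy version exhibits the key property of the reactive layer: the response is never dictated from outside; it emerges from the competition of internal needs under emotional modulation.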
At the next step, following the "develop as EVOLUTION does" part of the 3D approach, I identified the next living being in the evolutionary chain of the nervous system: the jellyfish. In contrast to Hydra, it has not a diffuse nervous network but a rudimentary brain that creates numerous adaptive behaviors. Based on scientists' findings on the behavior of the jellyfish as a living creature with a rudimentary brain, I proceeded as in the previous step: I formulated new principles in addition to the Hydra ones and proposed a two-layer FAI architecture blueprint (adaptive and reactive layers) explaining how the jellyfish's Natural Intelligence shapes its responses. The adaptive layer is supported by such cognitive functions as Short-Term Memory, Attention, and Learning (sketched below). Then ....
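Continuing the sketch from the previous step, here is one plausible way an adaptive layer could sit on top of the reactive one. It reuses the `ReactiveLayer` and `Need` classes from the sketch above; the interfaces are my assumptions, not the published architecture:

```python
# An illustrative two-layer arrangement: the adaptive layer keeps a
# short-term memory of recent stimuli, focuses attention on the most
# salient one, and learns stimulus-response associations that
# override the reactive defaults when present.
from collections import deque

class AdaptiveLayer:
    def __init__(self, reactive, stm_size: int = 5):
        self.reactive = reactive
        self.stm = deque(maxlen=stm_size)   # short-term memory
        self.learned: dict[str, str] = {}   # stimulus kind -> response

    def attend(self, stimuli: list[dict]) -> dict:
        # Attention: select the stimulus with the highest salience.
        return max(stimuli, key=lambda s: s.get("salience", 0.0))

    def learn(self, kind: str, response: str):
        # Learning: acquire a new association beyond innate reflexes.
        self.learned[kind] = response

    def respond(self, stimuli: list[dict]) -> str:
        focus = self.attend(stimuli)
        self.stm.append(focus)              # remember what was attended
        kind = focus.get("kind", "")
        # A learned association takes precedence; otherwise control
        # falls through to the reactive layer.
        return self.learned.get(kind) or self.reactive.respond(focus)

base = ReactiveLayer(needs=[Need("food", 0.7, "sweep_tentacles")])
brain = AdaptiveLayer(base)
brain.learn("shadow", "sink")   # an acquired avoidance response
print(brain.respond([{"kind": "shadow", "salience": 0.9}]))  # -> "sink"
print(brain.respond([{"kind": "ripple", "salience": 0.3}]))  # falls through
```

The design choice worth noticing is the fall-through: the adaptive layer does not replace the reactive one but filters and enriches it, which mirrors the layered separation in the Verschure, Pennartz, and Pezzulo architecture mentioned earlier.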
As is typical of promoters, I will focus only on the pros. First, FAI is a transparent system, in contrast to gen AI and AGI, whose foundations are artificial neural networks. The foundation of the FAI Perception system is a Knowledge Base containing "What is this" knowledge, represented by three models: the sign model of a word, and models of the word's internal and external meanings. The foundation of the Response-making system is likewise a Knowledge Base, containing "How to do this" knowledge. Both Knowledge Bases are transparent and understandable to a human, and a built-in mechanism for tracing responses makes FAI itself transparent (a minimal sketch of such a traceable knowledge base closes this section).

The second advantage needs more explanation. Each layer of a robot's FAI (reactive, adaptive, contextual, and thinking) is based on principles borrowed from the Natural Intelligence of living beings. If a robot's perception system is built on a set of sensors that sense environmental properties similar to what a human can sense, then such a robot is simply a human assistant, understood by the human, performing a mission in an autonomous mode. But if we expand the sensor set with properties that humans cannot sense, such as the ultrasonic or infrared range, or odor perception comparable to a dog's, we get a robot with truly augmented reality. And since the perception of augmented-reality properties is built on the same principles and models as human perception, the robot can share its sensations with humans and exhibit human-like behavior in environments unknown to us. This holds great promise for using autonomous robots to explore new environments based on future discoveries in physics, chemistry, and cosmology.

The next advantage concerns our safety. FAI, unlike gen AI and AGI, is not internet-dependent: it is an autonomous intelligence that does not require global databases to make responses and fulfill its mission. In the modern world of tough real and cyber wars in which we now live, this is a pivotal advantage. If the resources needed by robots, and more specifically by AI, are centralized and concentrated in one hand, damage to, destruction of, or intervention in data centers will cause catastrophic consequences for the whole world. Autonomous systems are resistant to such violations: external intervention via the internet cannot change the robot's mission and cannot interfere with its execution.
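To close, here is the promised sketch of the transparency claim. It compresses the three word models into plain fields and invents minimal names throughout; it shows only the property that matters, namely that both kinds of knowledge are human-readable and every response carries a trace of the entries that produced it:

```python
# A hedged sketch of transparent, traceable knowledge bases.
# Field and class names are hypothetical; the three word models are
# reduced to simple strings for illustration.
from dataclasses import dataclass

@dataclass
class WordKnowledge:              # "What is this" knowledge
    sign: str                     # the sign model of the word
    internal_meaning: str         # meaning grounded in the machine's sensors
    external_meaning: str         # meaning shared with humans

@dataclass
class SkillKnowledge:             # "How to do this" knowledge
    goal: str
    steps: list[str]

class TransparentFAI:
    def __init__(self):
        self.what: dict[str, WordKnowledge] = {}
        self.how: dict[str, SkillKnowledge] = {}

    def respond(self, percept: str, goal: str):
        w = self.what.get(percept)
        s = self.how.get(goal)
        trace = {"recognized": w, "skill": s}   # human-readable trace
        action = s.steps[0] if s else "idle"
        return action, trace                    # every response is traceable

kb = TransparentFAI()
kb.what["cup"] = WordKnowledge("cup", "graspable cylinder", "drinking vessel")
kb.how["fetch"] = SkillKnowledge("fetch", ["locate", "grasp", "deliver"])
print(kb.respond("cup", "fetch"))
```

Unlike the weights of an artificial neural network, every entry in these structures can be read, audited, and corrected by a human, which is exactly the contrast with gen AI and AGI drawn above.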