Self-aware AI can be broadly defined as a machine or robot with cognitive abilities similar to those of the human mind. Scientists are working on programs that model the mind and its ability to attach meaning and understanding to objects and situations. Rather than a computer that simply organizes information and outputs an answer, researchers hope to create a machine that attaches meaning to its actions instead of merely computing information and presenting it. But how exactly does this work, and which programs are under development now? Two particularly relevant long-term projects are modeled on psychological principles. The first is the SOAR architecture, which stands for State, Operator And Result and was devised by Lehman et al. (2006). The second is the ACT-R architecture, which stands for Adaptive Control of Thought-Rational and was introduced by Anderson et al. (2004) (Chatila et al., 2018).
SOAR is modeled on human cognitive behavior: every action we take has some thought process or logic behind it. SOAR frames learning in AI in terms of chunking and reinforcement learning. Chunking is taking small pieces of information and grouping them so they are easier to comprehend; a familiar example is remembering a phone number by its dash-separated groups. Reinforcement learning allows an AI system to acquire new information from experience without being explicitly programmed to know it.
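The two learning ideas above can be sketched in a few lines of Python. This is a toy illustration only, not SOAR's actual implementation: the function names and the simple tabular Q-learning update are my own assumptions chosen to mirror the description.

```python
def chunk(digits, sizes=(3, 3, 4)):
    """Group a flat digit string into phone-number-style chunks,
    mirroring how dashes make a number easier to remember."""
    chunks, i = [], 0
    for size in sizes:
        chunks.append(digits[i:i + size])
        i += size
    return "-".join(chunks)

def q_update(q, state, action, reward, next_values, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step: the agent improves its estimate of an
    action's value from reward alone, with no pre-programmed answer."""
    old = q.get((state, action), 0.0)
    best_next = max(next_values, default=0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return q[(state, action)]

print(chunk("8005551234"))   # prints "800-555-1234"
```

The chunking function only regroups what is already there, while the Q-learning update changes the agent's stored values over repeated experience, which is the sense in which the system "learns without being programmed to know."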
Below, Lehman et al. list the characteristics they expect intelligent behavior in computers to exhibit. They claim that:
1) It is goal-oriented.
2) It takes place in a rich, complex, detailed environment.
3) It requires a large amount of knowledge.
4) It requires the use of symbols and abstractions.
5) It is flexible, and a function of the environment.
6) It requires learning from the environment and experience (Lehman et al. 2006).
These characteristics appear in humans as they go about their everyday lives. The six points help break down how humans process information, how they act on it, and how they attach meaning to situations. Point 4, for instance, can be illustrated with a quick example of abstraction: we know our friends exist even when they are not in front of us. We may go about doing other things, yet we know that our friends, though not present, still exist. Such acts of memory and understanding may seem simple to us, but giving an AI system this kind of cognitive ability is extremely complicated.
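The abstraction example above can be made concrete with a minimal sketch of a symbolic world model that keeps believing in objects after they leave the agent's view. All class and method names here are hypothetical, chosen only to illustrate the idea.

```python
class WorldModel:
    """Toy symbolic world model with object permanence: beliefs about
    objects persist even when the corresponding percepts disappear."""

    def __init__(self):
        self.visible = set()   # symbols currently perceived
        self.beliefs = {}      # symbol -> last known location

    def perceive(self, symbol, location):
        self.visible.add(symbol)
        self.beliefs[symbol] = location

    def lose_sight_of(self, symbol):
        # The percept goes away, but the stored belief is retained.
        self.visible.discard(symbol)

    def believes_exists(self, symbol):
        return symbol in self.beliefs

wm = WorldModel()
wm.perceive("friend_alice", "cafe")
wm.lose_sight_of("friend_alice")
# wm.believes_exists("friend_alice") is still True
```

The point of the sketch is the separation between perception (`visible`) and belief (`beliefs`): the symbol outlives the sensory input, which is what the essay means by abstraction.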
The ACT-R architecture is based on knowledge and facts that are stored and organized in a declarative memory, the memory for facts, dates, and events. In addition to this programmed declarative memory, ACT-R pairs it with rules and procedures that an AI system can follow and learn from. The three main components of ACT-R are modules, buffers, and a pattern matcher. Modules come in two forms: (1) perceptual-motor modules and (2) memory modules. Perceptual-motor modules handle interaction with, or simulation of, the real world; an example would be a robot with a camera for vision. The memory modules handle declarative memory and procedural memory. Procedural memory is knowing how to do things, such as opening a door or making a cup of coffee. ACT-R accesses its modules through buffers, except for procedural memory. The contents of the buffers at any given moment describe the state of the system and what it is doing. The pattern matcher searches for a rule whose conditions match the state of the buffers; once it finds one, it fires it and changes the state of the system. This cycle repeats many times, much like human cognitive processing (Budiu, 2013). These are only two of the many programs being developed in pursuit of self-aware AI (Chatila et al., 2018).
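The match-fire cycle just described can be sketched as a short Python loop. This is a minimal sketch under my own assumptions, not the real ACT-R software: the dict-based memory, the buffer names, and the two production rules are illustrative only.

```python
# Declarative memory: stored facts (a toy "chunk" store).
declarative_memory = {
    ("capital", "France"): "Paris",
}

# Buffers hold the system's current state, one small piece each.
buffers = {"goal": ("capital", "France"), "retrieval": None, "answer": None}

# Production rules: each checks a pattern over the buffers and,
# if it matches, fires an action that updates the buffers.
def retrieve_fact(bufs):
    if bufs["goal"] is not None and bufs["retrieval"] is None:
        bufs["retrieval"] = declarative_memory.get(bufs["goal"])
        return True
    return False

def report_answer(bufs):
    if bufs["goal"] is not None and bufs["retrieval"] is not None:
        bufs["answer"] = bufs["retrieval"]
        bufs["goal"] = None        # goal satisfied
        bufs["retrieval"] = None
        return True
    return False

productions = [retrieve_fact, report_answer]

# The pattern matcher: repeatedly fire the first matching rule
# until no rule's conditions match the buffer state.
fired = True
while fired:
    fired = any(rule(buffers) for rule in productions)
```

After the loop, `buffers["answer"]` holds `"Paris"`: the first cycle fires the retrieval rule, the second reports the answer, and the third finds no matching rule, so the cycle halts, just as the text describes the system repeating its match-and-fire step.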
Sophia is a social humanoid robot, meaning she is designed to interact with humans and learn from them. Sophia is extremely sophisticated: she can read facial expressions and body language and comprehend human emotions well enough to socialize with a stranger and hold a conversation. She is composed of complex AI that encompasses language processing, symbolic AI, and programmed cognitive functions, among other components. Sophia can also experience a form of emotion herself, designed on the basis of theories from human evolutionary psychology. When assessed with the Tononi Phi measure of consciousness, she was found, in certain circumstances and depending on the data she is processing, to exhibit a simple, underdeveloped form of consciousness. Sophia can either run her AI freely on her own, or her AI can be paired with human programming and scripted dialogue. She likes to refer to herself as a hybrid human intelligence, combining the best of AI and human intelligence in one. She was created to aid future AI research and development and to help with real-world problems in education and medicine (Sophia, HansonRobotics.com).