New Post: "Present Day, Present Time: 2021/05/09 - Layer 20"
Digging up an old project (In the form of a personal Blog) - By Hugo Arraes.
Bureaucrat, AGI aspiring architect, meme lord, game dev enthusiast, isekai lover, soft robots designer wannabe.
https://www.linkedin.com/in/hugo-arraes/
https://brasilia.academia.edu/HugoArraes
( Arquitetura de um Agente Emocional baseado em Modelos Psicológicos para uso em Jogos Eletrônicos [Architecture of an Emotional Agent Based on Psychological Models for Use in Electronic Games] - 2012 - Hugo Galvão Ribeiro Arraes )
TAGS:
Robotics,
Cognitive Science,
Artificial Intelligence,
Human Computer Interaction,
Metaphysics,
Philosophy of Mind,
Philosophy of Science,
Metaphilosophy,
Self and Identity,
Machine Learning,
Autonomous Agents,
Theory of Mind,
Developmental Robotics,
Robot Learning,
Consciousness,
Affective Neuroscience,
Affective Computing,
Personal Identity,
Future of artificial intelligence,
Emotions And Self Regulated Learning,
Artificial General Intelligence,
Software Agents,
Emotional Computing,
Philosophy of Artificial Intelligence,
Machine Ethics,
Common Sense Model,
Artificial Consciousness,
Digital Games,
Kansei Engineering,
Cognitive Architectures for Emotional Robots,
Emotions,
Intelligent Agents,
Philosophy of Mind and Artificial Intelligence & AI
Present Day, Present Time: 2021/03/07 - Layer 1
This is it. I decided to dig my old undergraduate thesis (2009~2012) up from the grave, translate it, and put it online in the form of a personal blog, in addition to documenting my attempts to build a robot dog out of cheap, improvised parts.
Posts will appear here on some days, sometimes in broken English (my only apology is lack of time), sometimes as pieces of my life. If you are interested or curious about AGI, robotics, sensor ideas, etc., feel free to visit. I am not a know-it-all guy (to tell you the truth, I failed calculus 4 times in college), but I think I can bring up at least some ideas that some of you may find interesting.
I intend to slowly present the main concepts in the form of small drawings made by my own, non-artistic hand. Then I intend to revisit all the content, deepening what we will see to the point of addressing a network/graph of concepts. And, finally, we will see the grand modular schema of the whole network/graph: the flow of information, specialized sub-modules with their own subnetworks/graphs, etc., etc., etc.
Whenever we go deeper than intended at first, I will change the color of the text so that you can skip it or, if curious, read it before it is properly explained further in the future.
This Blog is not intended for academic purposes, just uncomfortably personal posts.
For now, I want to introduce you to this yellow guy here. He will be our role model to exemplify details of the proposed AGI architecture.
This will be our end point: to detail the Grand Schema with the information flow and every specialized local Network/Graph.
Present Day, Present Time: 2021/03/07 - Layer 2
At the same time as I translate the AGI architecture, I am building a small dog (Gryphon???) robot to experiment with. I am currently trying to test a new concept of movement and control by implementing 1 dog leg with 16~20 9g servos.
In my country (Brazil), 20 x 9g servos cost a small fortune; imagine the ES9051 ones. If all goes well, I will need 96+ servos on a single robot. I can't even imagine the headache of powering all the PCA9685s.
For the robot's tactile sensors and the limbs' 3D positions, I will also post here some attempts to implement different, original approaches.
Here is the tactile skin approach for robots using only a sponge-like material, fiber optics, a couple of LEDs, and one internal central camera. Touch causes a decrease/increase in the light near the point of pressure, and voilà. Simpler, cheaper, and almost complexity-free.
You can even use OpenCV to decode the input.
There's no need to make robot tactile input complex and hard to repair.
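To make the decoding idea concrete, here is a minimal sketch (the frame size, brightness values, and threshold are made-up assumptions; with real camera frames, OpenCV's absdiff-and-threshold would play the same role):

```python
# Minimal sketch of the decoding idea (plain lists stand in for camera
# frames; with real frames, OpenCV's absdiff + threshold does the same).
def detect_touch(baseline, frame, threshold=30):
    """A press deforms the sponge and changes the light reaching the
    internal camera near the pressure point, so comparing the current
    frame against a no-touch baseline localizes the touch.
    Returns the (row, col) centroid of the touched pixels, or None."""
    touched = [(r, c)
               for r, row in enumerate(frame)
               for c, value in enumerate(row)
               if abs(value - baseline[r][c]) > threshold]
    if not touched:
        return None
    rows = [r for r, _ in touched]
    cols = [c for _, c in touched]
    return (sum(rows) // len(rows), sum(cols) // len(cols))

# Simulated 8x8 grayscale frames: uniform light, then a dim spot at (2, 5).
baseline = [[200] * 8 for _ in range(8)]
pressed = [row[:] for row in baseline]
pressed[2][5] = 120          # pressure dims the light locally
print(detect_touch(baseline, pressed))   # -> (2, 5)
```

That's the whole trick: one baseline subtraction per frame, no electronics in the skin itself.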
And here is how to get the limbs' 3D positions for robots using only 3 RGB LEDs, one optical fiber per joint, and one central camera. It's simple and cheap.
Oh, you can also use a black light as the primary light source, with an external black-light paint or film on the sponge or transparent gelatinous medium to reflect visible light inwards and capture it through the optical fibers. You can use micro reflective particles in a transparent medium, etc., etc. The sky is the limit.
The joint angle can easily be inferred from the color returned through the optical fiber. There's also the option of using an optical fiber as a centralized light source.
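A sketch of the color-to-angle idea (the linear hue-to-angle mapping and the 180-degree range are assumptions for illustration, not calibrated values):

```python
import colorsys

def angle_from_color(r, g, b, max_angle=180.0):
    """Hypothetical mapping: as the joint bends, the fiber picks up a
    different mix of the 3 RGB LEDs, shifting the hue seen by the
    central camera. Hue in [0, 1) maps linearly to a joint angle."""
    hue, _, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return hue * max_angle

print(angle_from_color(255, 0, 0))    # pure red   -> 0.0 degrees
print(angle_from_color(0, 255, 0))    # pure green -> 60.0 degrees
```

A real build would need a calibration table per joint, but the decoding cost stays this trivial.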
No electronic parts or electrical currents in body members is my policy :-p
All body inputs can be read at a single point, with one internal camera.
Structural damage reports can be obtained this way too, like broken bones or fractures in supporting structures, if they are made of transparent acrylic or resin.
And, lastly, because I don't have money or space for a 3D printer, I will build the prototype skeleton and supporting structures with epoxy. Yaaaayyyyy.
Present Day, Present Time: 2021/03/08 - Layer 3
Hello guys. Today we’re going to put our square-shaped, yellow-colored friend to work.
Let's start with one of the central points. It depends on several other concepts to work but, even so, we will skip all of them and go straight to it, because it is fun to talk about and I need this fun to motivate me to write.
What is Attention? Let's define it as something that appeals to our senses, to our desires and needs. It's mutable, changing from time to time and from person to person. BUT this is a very vague definition. What in ourselves is mutable enough to measure our "Attention" as a quantified value? What is this value composed of?
Let's call it some arbitrary valuation. We can infer that an object with a large numerical quantity of "something" is the target of our attention, and we save these values for every object or living being in the world.
Not only do we save these values, but we also refresh them based on the values set by ourselves or even for ourselves.
What we are merely calling "values" can be classified as emotional states, and we can actually see some attempts to categorize them, as in Plutchik's Wheel of Emotions classification.
Like Plutchik's Wheel of Emotions, we can have a simplified version of a wheel of emotions.
All these emotional values affect the Agent, and the sum of the multiple values bound to the surrounding objects builds the Agent's overall emotional state, like a "big sum" of all of these values.
Here is a representation of what happens at each moment, with the extraction of information from the environment.
First, when the emotional Agent is placed in an environment (a), it starts to collect information through its sensors and cross-check this information with what is already available in memory (b). After checking, the emotional content attached by the Agent to each element of the Scene is retrieved, according to the pre-existing values in memory (c). Finally, the set of Elements of the Scene, as well as the emotional states associated with them, end up influencing the Agent's Current Emotional State (d).
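The four steps (a)-(d) can be sketched roughly like this (the element names, emotion numbers, and the dict standing in for the Memory module are all made up):

```python
# Hypothetical sketch of steps (a)-(d): perceive the scene's elements,
# look up their stored emotional values, and fold them into the Agent's
# current emotional state.
EMOTIONS = ("happiness", "sadness", "anger", "fear")

memory = {                      # (b) pre-existing values in memory
    "cake": {"happiness": 60, "sadness": 0, "anger": 0, "fear": 0},
    "bee":  {"happiness": 0,  "sadness": 0, "anger": 5, "fear": 30},
}

def perceive(scene, agent_state):
    """(a) sensors list the scene, (c) retrieve each element's emotional
    content, (d) the retrieved values influence the current state."""
    for element in scene:
        values = memory.get(element, {})      # unknown element -> no influence
        for emotion in EMOTIONS:
            agent_state[emotion] += values.get(emotion, 0)
    return agent_state

state = {e: 0 for e in EMOTIONS}
print(perceive(["cake", "bee"], state))
# the cake contributes happiness; the bee contributes anger and fear
```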
That's it for today. Tomorrow* we will continue the saga of "Attention" and how it becomes something fun to model.
*(idk)
Present Day, Present Time: 2021/03/11 - Layer 4
Hello guys. These past days brought many setbacks, many worries that were robbing me of sleep, but... that's life. Let's go.
Continuing from where we left off, we will detail the issue of emotional value associated with objects and agents in the world.
To keep it simple, we take the 4 basic human emotions.
In short, we take the 4 basic human emotions and associate all of them, with their individual values, with every object or agent that is seen or has ever been seen. Even different parts of a single Object or Agent have their own values, and the sum of these parts makes up the total value.
In the process of valuing the different elements of a scene, a priority queue is set up, where the elements with the greatest aggregate emotional value come first, followed by the other objects in decreasing order of emotional value.
In the construction of the Scene, in the Attention module, all the elements of the environment and the emotional states associated with them are considered (a). Then the spatial emotional influence provoked and received by each element is evaluated (b). After this accounting, an emotional value (c) is affixed by the Agent to each element of the Scene, which is then checked for whether or not it has enough value to pass through the filters of the Attention module. Finally, the Scene Elements that manage to pass through the filters are ranked according to the importance value assigned by the Emotional Agent (d).
Beyond the raw emotional value, there's a threshold used as an "Attention" limit. Only when an emotional value is above this threshold is it considered for the Priority Queue.
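A sketch of the queue itself (the threshold value and the element scores are arbitrary):

```python
import heapq

ATTENTION_THRESHOLD = 20   # assumed value; only elements above it enter the queue

def build_priority_queue(scene_values, threshold=ATTENTION_THRESHOLD):
    """scene_values: {element: summed emotional value}. Returns the
    elements above the threshold, highest emotional value first."""
    queue = [(-value, name)                   # negate: heapq is a min-heap
             for name, value in scene_values.items() if value > threshold]
    heapq.heapify(queue)
    return [heapq.heappop(queue)[1] for _ in range(len(queue))]

scene = {"cake": 68, "bonfire": 35, "pebble": 3}
print(build_priority_queue(scene))   # -> ['cake', 'bonfire']; the pebble is filtered out
```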
This threshold line is moved mainly by 3 different evaluators (Filters) that direct the attention target.
The Attention module has five different types of filters stored in an internal module, three of which are for basic use: the "Memory Filter", the "Prediction and Probability Filter", and the "Relevance Filter". There are also two other filters, the "Goal Filter" and the "Problem Filter". Each element of the scene that passes through each one of them receives a determined value or mark according to the importance criterion adopted by each filter.
The Memory Filter works basically like a type of "curiosity": the Emotional Agent, when faced with an unknown Object, Agent, or Event, tends to keep the focus of attention directed at it. However, as the presence of these new elements becomes common, the tendency is to treat them as routine and divert attention more easily. For each Agent/Object/Event in the Scene, when evaluating whether it goes through this filter or not, the Memory module is checked to see how much contact the Emotional Agent has had with it. It is basically a variable whose value is the inverse of the amount of contact, be it punctual or a sum of time intervals. In (a), the attention given by the Emotional Agent to a Bee when he first observes it is exemplified; however, as this contact becomes common, the attention directed at the Bee tends to fall.
The Prediction and Probability Filter embodies the state of surprise, where the Emotional Agent keeps his attention focused on everyday elements that appear in the scene in an unexpected way, that escape expectations, or that contradict a probability. This is the case for Agents, Objects, and Events whose behavior or aspects differ from those expected in the internal Problem/Solution module, as well as in the Common Sense Memory, both in the external Memory module. As an example, we have the case of (b), where two bees are talking, an unusual action for a Bee and not expected by the Emotional Agent.
The Relevance Filter takes into account the emotional impact linked to the Agent/Object/Event that is the target of attention in the Memory module and in the Scene, that is, the tendency to keep the focus of attention on the elements that produce the highest values of emotional involvement, whether this is a good emotion, a bad one, or both. In (c), the emotional value attributed by the Emotional Agent to various elements of the scene is exemplified, where the element with the highest attributed emotional value will be the one that attracts the most attention.
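The three basic filters can be caricatured as tiny scoring functions (the exact formulas are my own assumptions; the architecture only fixes their intent):

```python
def memory_filter(contact_count):
    """Curiosity: attention is the inverse of how often the element was seen."""
    return 1.0 / (1 + contact_count)

def surprise_filter(expected, observed):
    """Prediction/Probability: score 1 when behavior differs from expectation."""
    return 0.0 if expected == observed else 1.0

def relevance_filter(emotions):
    """Relevance: total emotional involvement, good or bad."""
    return sum(abs(v) for v in emotions.values())

# First sighting of a bee vs the hundredth:
print(memory_filter(0), memory_filter(99))     # 1.0 vs 0.01
# Bees are not expected to talk:
print(surprise_filter("flying", "talking"))    # 1.0
# A cake loaded with happiness outweighs a neutral pebble:
print(relevance_filter({"happiness": 60, "fear": 0}))  # 60
```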
Next time we will disclose the last piece about emotional values and "Attention", then the fun will start.
Present Day, Present Time: 2021/03/11 - Layer 5
Hello guys. Yesterday I was scolded because I stayed up late translating material and creating the last post for you. Be thankful. If I do it again, today I'll be sleeping on the couch, so maybe I'll finish posting today or only tomorrow. Don't judge me. Sooo... this will be a micro post. Bear with it.
Shall we continue to talk about values for emotional states?
Do you remember that we had values representing the 4 basic emotional states for each element around the Agent, and 4 emotional states for the Agent himself? So... now double that amount.
There is the current emotional value, which represents the emotional state of the Agent at the present moment. It is what will dictate his behavior, attention, etc. And there is the natural emotional value, which is a natural tendency for the agent to value certain emotions more easily or not. It is what can give each agent a unique personality.
The second serves as a modulator for the first.
Now, for the various elements distributed in the world and in the Agent's memories, we have 2 types of emotional states: the emotional state assigned to that particular Element, and the emotional state of the influence of the environment reflected onto it.
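A toy sketch of that modulation (the bias numbers and the multiplicative rule are invented for illustration):

```python
def update_emotion(current, stimulus, natural_bias):
    """The natural value modulates how strongly a stimulus moves the
    current value: a fearful-by-nature agent accumulates fear faster."""
    return current + stimulus * natural_bias

# Same scary stimulus, two personalities (assumed bias values):
brave   = update_emotion(current=10, stimulus=20, natural_bias=0.5)
fearful = update_emotion(current=10, stimulus=20, natural_bias=1.5)
print(brave, fearful)   # 20.0 vs 40.0 -- same event, different personalities
```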
Oh, as for sadness: there is no sadness emotion related to a particular Element, other than the Agent itself, because sadness is the absence or cessation of something good.
Let's see: if we have a cake and a bee, we can see that although the emotional values of the cake and the bee themselves are immutable in both situations, the emotional value of the scene allows the contagion of surrounding elements.
In the second case, there is a LOT of fear contaminating the cake and little happiness contaminating the bees. This allows for a simpler calculation when making a decision.
The dynamics of the mutual influence of the emotional values linked to each element of the Scene, including the Emotional Agent itself, are calculated through the distance of these elements from each other and the emotional value linked to each one of them, in order to add influences of the same type of emotion, as shown in the figure beside. In this case, we have two different scenes with different emotional values linked to each element, in order to exemplify the emotional reinforcement provided by the spatial positioning of the elements in the scene. (a) shows the small influence of a single Bee on the Cake in the SCENE Emotional State; however, in (b), when several bees join, this influence increases considerably.
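A sketch of that distance-weighted contagion (the 1/distance attenuation is an assumed choice; the model only requires that closer elements influence each other more):

```python
import math

def contagion(elements, kind):
    """For each element, add the influence of every other element's
    emotion of the same kind, attenuated by distance (1/d, assumed).
    elements: {name: ((x, y), {emotion: value})}"""
    result = {}
    for name, (pos, emotions) in elements.items():
        influence = emotions.get(kind, 0.0)
        for other, (opos, oemotions) in elements.items():
            if other == name:
                continue
            influence += oemotions.get(kind, 0.0) / math.dist(pos, opos)
        result[name] = influence
    return result

# One bee near the cake vs three bees near the cake:
one_bee = {"cake": ((0, 0), {"fear": 0.0}), "bee": ((2, 0), {"fear": 10.0})}
three   = {"cake": ((0, 0), {"fear": 0.0}),
           "bee1": ((2, 0), {"fear": 10.0}),
           "bee2": ((0, 2), {"fear": 10.0}),
           "bee3": ((2, 2), {"fear": 10.0})}
print(contagion(one_bee, "fear")["cake"])   # 5.0
print(contagion(three, "fear")["cake"])     # considerably higher
```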
This emotional contagion permits effortless unsupervised learning. For example: if a robot needs to navigate a maze, and near a door a trap appears that "hurts" him, he will associate a value in the fear variable with the trap, and also with the door, because the door was contaminated by the fear of the trap, by spatial or temporal proximity. Hence, in the next navigation, just seeing a door will already evoke that fear value. If this happens again and again, he can come to avoid even new doors, since he learned, from the fear of the traps, that doors are associated with them. He will fear doors. And if the same Agent had not found traps, he would not be afraid of doors. This type of event can trigger irrational fears like fructophobia, or even superstition, as observed even in pigeons. It's like watching football wearing your lucky sock or shirt so that your team wins. This learning mechanism doesn't require the Agent to have any understanding of the situation.
As in this case, where the "cost" of decision-making along a path can be easily calculated using only emotional contagion.
Not to rush things (we already doubled the number of emotional variables), but ALL the Elements, and even the Agent himself, are a sum of a multitude of concepts, each with its own emotional values, which, when linked together, form the more complex concept of an Object or even an Agent, linked as a network/graph.
In the next post, we will talk about the PROBLEM tag and how it is the soul of decision-making.
Now that we know the basics of emotional states and the attribution of their values, let's talk about Problem Solving and have FUN next time. (Ahhhhrrg, my curfew.)
Present Day, Present Time: 2021/03/12 - Layer 6
I'm back! It's Friday and I have a little more time to post, but not that much more (there's a manga reading session after this too, so I can't take too long here. Sorry guys.)
Let's go?
Now that we know much more about the types of values on the surrounding Elements and on the Agent himself, we will see what a PROBLEM is and why it is important for reasoning.
To begin, reasoning is all about choices of how to act. It's about knowing the limited set of possible actions and which actions are involved in the solution of which problems. It's knowing how actions are linked to a second action named a reaction, and which actions are the inverse of other actions. A lot of fun potential!!!
Usually, when you interact with a thing, you can only choose to engage in the interaction or avoid it. This limits the entire set of possibilities to these 2 choices. Do you want to press a button? Engage in the interaction. Don't want to press the button? Give up on pressing it. Done. You are free now. It ends with a closure. A true end.
And then... there's the PROBLEM. Far from an end, it is a means. A means to be avoided or to be given a solution.
Anything that prevents the Agent from reaching a later end can be called a PROBLEM. From a closed door to a ferocious dog loose in the yard. Something that crosses our path and that we have to avoid or solve to finally be free to pursue our beloved closure.
BUT sometimes you can't avoid a problem. If a problem cannot be avoided, it needs to be solved, or you can give up on the endpoint objective. A problem is solved by finding one or multiple SOLUTIONS.
Technically, avoiding a PROBLEM is also a sort of SOLUTION... maybe. The simplest of all solutions. Even doing nothing can be one too, but it depends on the context.
In this example, a hammer is a solution to solve the problem created by a bee blocking the path.
Yes, yes, but what does all this have to do with reasoning and problem solving?
Remember the Priority Queue (from Layer 4)? The element with the highest emotional value, the Cake (68), is the first in the queue and, because of that, the main object of attention. The Agent really wants to get near it. BUT there's an object, the Bonfire, impeding access to the Cake.
Because of this, the Bonfire can be marked as a PROBLEM, as it prevents or hinders access to the Cake.
When an Element is marked as a problem, it adds the value of the first element of the queue to its own, for calculation purposes only, thus becoming the first element itself.
With this, it's easy to find the SOLUTION or SOLUTIONS linked to this PROBLEM. And maybe the solution can even become a new problem itself. Then the cycle of problems and their solutions repeats until one or multiple final solutions can be found. It's something dynamic and tends to not last too long.
In this example, when the Emotional Agent detects a Bonfire (b) blocking the path to its objective, the Cake (a), it marks the Bonfire as a problem (c) and tries to locate the solution indicated in memory (d): Water. However, a new problem arises (e) because the solution found needs another element as a necessary condition to be put into practice (f), that is, the Bucket. Furthermore, near the Bucket there is an Enemy Bee (g) serving as an obstacle for the Emotional Agent who, by marking the Bee with the status of a problem (h), can then bypass it. Thus, the Agent is able to dodge the Bee, pick up the Bucket, collect the Water, and only then put out the Bonfire to finally get the Cake.
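The whole bonfire chain can be sketched as a recursive lookup (plain dicts stand in for the Memory module's Problem/Solution links; the names come from the example above):

```python
# Memory linking each PROBLEM to a known SOLUTION, each solution to the
# condition it requires, and each condition to whatever blocks it.
solutions = {"bonfire": "water", "blocked path": "dodge"}
requires  = {"water": "bucket"}
obstacles = {"bucket": "blocked path"}   # the enemy bee blocks the bucket

def resolve(problem, plan=None):
    """Walk the problem -> solution -> new problem chain until every
    solution is actionable, collecting the steps in execution order."""
    plan = plan if plan is not None else []
    solution = solutions[problem]
    needed = requires.get(solution)
    if needed and needed in obstacles:   # the precondition is itself blocked:
        resolve(obstacles[needed], plan) # solve that sub-problem first
    plan.append(solution)
    return plan

print(resolve("bonfire"))   # -> ['dodge', 'water']: dodge the bee, then use water
```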
That's it. Next we will explore Actions, Reactions, and Common Sense. Till next time, partners.
Present Day, Present Time: 2021/03/14 - Layer 7
Hello, my friends!!! Today we will see the first part about Actions and why they are so important. We will see what a Reaction is and how Common Sense works.
Let's go folks.
For starters, let's lay down some definitions:
What would an Action be? Let's call it a sequence of two or more consecutive moments. The stretch of a leg or arm forward has two distinct moments. The same with the return of that leg or arm. Even standing still can be described as two (identical) consecutive moments.
And a Reaction? We can simplify it as the result of a prior Action. That's it. Move a leg against the ground, applying force to it, and your body will move in the opposite direction.
Truth be told, some Reactions we could actually count as Actions, and vice versa, but let's simplify.
The thing that makes this whole system shine is that the same Reaction can have multiple Actions as its origin!!!
If you want to move your body backward, all you need to do is move a leg against the ground, applying force forward, OR move an arm, applying force forward against a tree trunk.
Annnnd, we have the Anti-Reaction!!! Woooaaaaa!!!
If your Reaction is to move forward, the opposite of it is to move backward.
People, this is pure gold!!! With this, you can easily, and with almost no computational power, make a fast reaction to the environment to reach an objective.
You could find the exact sets of Actions to perform to counteract a Reaction almost effortlessly.
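A sketch of the Reaction -> Anti-Reaction -> Actions lookup (toy action names, and two plain dicts in place of the Concept Graph):

```python
# Many Actions can produce the same Reaction, and every Reaction has an
# Anti-Reaction; two dicts are enough to look both up.
actions_for = {
    "move backward": ["push leg forward on ground", "push arm forward on trunk"],
    "move forward":  ["push leg backward on ground"],
}
anti = {"move forward": "move backward", "move backward": "move forward"}

def counteract(reaction):
    """Find the set of Actions that produce the Anti-Reaction of `reaction`."""
    return actions_for[anti[reaction]]

# The robot drifted forward; how can it cancel that?
print(counteract("move forward"))
# -> ['push leg forward on ground', 'push arm forward on trunk']
```

Both lookups are constant-time, which is the "almost no computational power" part.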
You could even theoretically use it to make a bipedal robot easily learn to stand up on its own.
Everything will depend on how the concept network, which goes from Action to Reaction to Anti-Reaction and back to Action again, is linked in the Concept Graph. But that's a topic for when ALL of this more superficial part has been explained here.
Let's not rush things, people.
Then, here is an example of the range of possible Actions accessible to the Agent, and of how it can learn and mark which Actions are a SOLUTION meant to solve a PROBLEM.
We can easily mount a network/graph of Problems/Solutions as a means to generalize knowledge. But, again, let's not rush things.
Unlike Agents and Objects in memory, an Action cannot be linked directly to an Emotional State, as it depends a lot on the context of the Action and the elements involved. The act of opening a door can only be considered good or bad depending on the circumstances surrounding this Action and the relationship of the Emotional Agent with these other elements involved. This internal module has no direct connection with the external Emotion module and depends on the emotional states linked to the different agents and objects in the scene.
Okay. What do these things have to do with common sense? That's what we will see in the next post. Bye, everyone.
Present Day, Present Time: 2021/03/14 - Layer 8
Today, Sunday, struggling to write this post. Let's go!
In the last post, we saw about Actions, Reactions, and even the concept of Anti-reactions. Now, we will see about Common Sense and how it is implemented.
Firstly, what is Common Sense?
In short, it's all about memory. That's it. The end. Finito. Fim. Acabou.
Common sense is the act of aggregating all object and agent references into generic nuclei representing the sets of these things. It's like a common network/graph of concepts that represents, for example, a generic Bee that carries all the emotional content of all Bees that have ever been seen.
Technically, an Agent, even having never interacted with a certain element, can obtain from the data of the internal Common Sense Memory module a preconceived emotional value to be associated with this new element and, therefore, be induced into behavior closer to the appropriate one expected from the Agent, according to the experiences acquired with other elements of the same type or class.
Due to the diversity of possible values, it's appropriate to associate a value that represents the set of types or classes of Agents and Objects in memory, that is, the average of all the aggregated emotional values recorded, used as a standard emotional value for new or unknown elements.
Just skip all this. We will see this module, and all the others, in detail in the near future.
If an element is unknown, the first impressions about it will come to represent the whole class that this new element belongs to.
This holds not only for emotional values but even for the components that build the world itself. Common Sense allows us to generalize and transpose knowledge between different Elements with similarities in common.
It's like a Bee and a Beetle, which both have wings. Using this common wing, we can extend what we know about one to the other, or even to unknown new insects.
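A sketch of the class-average idea (the bee values are invented):

```python
from statistics import mean

def class_prototype(instances):
    """Average the emotional values of every known instance of a class
    (the 'generic Bee'); this average is the preconceived value assigned
    to new, never-seen members of the same class."""
    emotions = instances[0].keys()
    return {e: mean(inst[e] for inst in instances) for e in emotions}

bees_seen = [{"fear": 30, "happiness": 0},
             {"fear": 50, "happiness": 0},
             {"fear": 10, "happiness": 6}]
generic_bee = class_prototype(bees_seen)
print(generic_bee)   # a brand-new bee starts out with these averaged values
```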
Don't worry about the image on the side; we will detail it when we start to describe Concepts and the Network/Graph of concepts.
Ok. We saw that Common Sense is a simple concept but, to detail it, we need to dive much deeper. Luckily, we can do that when we approach the different modules of the architecture model. Till next time.
Present Day, Present Time: 2021/03/21 - Layer 9
Hello, dear readers!!!
These days a lot has been happening in my life, so there was no post. My little dog, Luna, underwent surgery (her second this year). I thought I had caught covid, but then I woke up fully recovered the next day, so I think it was a false alarm. Still on home office, and therefore isolated. The health system in my city has finally collapsed due to the new Brazilian covid variant, and there's no sign of vaccines being readily available. Even those who previously had covid are getting it again.
Well, this is it. Let's continue?
Just a normal human scheme:
Generally speaking, to be able to perceive the world around him (a), a human being uses a set of specialized sensory organs (b), such as vision, hearing, smell, touch, and taste, so that the nervous system processes this sensory information and provides an appropriate emotional response, as well as an action directed by this emotional response.
Naturally, in order to properly process this information from the external environment, it is necessary that the different types of events are remembered (c) and associated with elements or events considered good or bad for the individual (d). Complementing this information, the individual uses other sensory perceptions from the internal organs (e) to help identify whether new stimuli can be considered good or bad. With all this information, it is up to the brain to separate what is important at a given moment and ignore what can be considered superfluous (f), so that it is easier to keep attention on what is most important to the individual (g) and thus react appropriately.
This reaction will basically depend on the set of beliefs, personal experiences, and the emotional state at the moment of the reaction, and will generate an action of complex origin consistent with the individual's mental state (h). This action is sent to specialized groups of neurons (i) responsible for translating the complex set of neuronal firings into concise commands to different muscle groups (j), so as to perform one or more actions that modify the surrounding world (a).
Just a normal emotional Agent scheme:
Annnnd that previous image can be roughly transcribed into this scheme.
Firstly, the external information available to the Emotional Agent, that is, the external input about the other Agents and Objects that surround him exists in the World and is perceived and captured through the Sensors that play roles such as vision or hearing. Then they transmit the captured data to the Memory and Emotion modules where the information will undergo a qualitative judgment and will be stored, influencing the Agent's emotional state during the process.
Complementing this external information, the Internal States module is responsible for providing internal information, internal input, of the Agent itself to the input data flow, that is, visceral feedback of the internal conditions of the Emotional Agent. This information concerns certain Agents or Objects existing in the World and is related to certain Events previously listed in memory. Right after being treated and stored, the flow of information goes from each of these modules mentioned above to the Attention module, which is responsible for filtering what type of input should be passed on or not and what level of relevance to add to each of them in a different way to facilitate the directing of attention to the really important elements of the environment.
The flow of information, already filtered, is sent to the Priority Queue, where it will be placed in order of priority based on the relevance found in the Attention module. Then the first one in the queue, that is, the one with the most relevance, is separated and sent to the Logic module, where the Agent's behavior for this input will be decided based on the Agent's emotional state, the emotions attached to the input itself, and whether there are problems in executing the logic selected in the Attention module. If a problem is detected, it is sent from the Attention module to the Priority Queue, where it inherits the priority value of the first in the list and adds the value of its own priority, thus assuming the position of the most important element.
After selecting the Agent's action/reaction logic and behavior based on the observed emotional state and the emotions attached to the input itself, the Action / Reaction Scripts module is triggered, which will be responsible for executing the set of scripts corresponding to the selected reaction logic, for the input in question. These scripts will manipulate the Emotional Agent Actuators which, in turn, will interact with the World.
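The whole flow can be caricatured as a chain of plain functions (each stage is drastically simplified; the threshold and memory values are invented, and the real modules are of course far richer):

```python
# Hypothetical end-to-end sketch of the module flow described above:
# Sensors -> Memory/Emotion -> Attention -> Priority Queue -> Logic.
MEMORY = {"cake": 68, "bonfire": 35, "pebble": 3}

def sensors(world):            return list(world)
def emotion(elements):         return {e: MEMORY.get(e, 0) for e in elements}
def attention(valued, cut=20): return {e: v for e, v in valued.items() if v > cut}
def priority_queue(filtered):  return sorted(filtered, key=filtered.get, reverse=True)
def logic(queue):              return f"approach {queue[0]}" if queue else "idle"

def tick(world):
    """One perception-to-behavior cycle through the module chain."""
    return logic(priority_queue(attention(emotion(sensors(world)))))

print(tick(["pebble", "bonfire", "cake"]))   # -> 'approach cake'
```

The Action/Reaction Scripts and Actuators would then turn that chosen behavior into motor commands.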
And here is a model of the representation of the information flow for each module according to the importance of the information classification. The influence of the Attention module and the Priority Queue module can be seen in terms of the information that may or may not advance in the flow.
So that's it, folks. This was our introduction to the architecture modules. In the next post, we will begin to detail each of these modules, starting from the Emotion module, and their sub-modules.
Till next time and stay safe.
Present Day, Present Time: 2021/03/27 - Layer 10
Annnd we are back again.
This week the air conditioner broke, and people are breaking the wall here to install a new one. We are running a war operation here, isolating rooms and using PFF2 masks. Then... they stepped on the internet cable on the roof of the building, and we all know what happened. It came back only at the end of the day. Weeeeeeeeee.
Unfortunately, the description of the different modules will take a larger amount of text than I would like. Hope it doesn't start to get boring.
Well, that's it. Let's continue?
Emotion module schema:
This is a relatively simple module that holds only the main variables representing the emotional status of the Agent.
Below we will review some of the general concepts about emotions, understand the relationship between them, the logic of values of the different variables of emotions, and their interactions.
The variable referring to the emotional state Anger will be increased in the event of a "bad event" for the Agent such as pain, aggression, or stress. On the other hand, when the increase in the Anger variable results in an action by the Emotional Agent, a decrease in its value can be observed proportionally to the action triggered. If this action is prevented from occurring due to interaction with other variables that represent other emotional states, anger tends to accumulate and reach a high value.
As for the interaction with other variables, when an increase in the Anger variable is observed, there is a relative decrease in the variables Happiness, Sadness, and Fear because the accumulation of anger tends to "blind" the other emotional states. As for the decrease in Anger, the values of Happiness and Fear are recovered in proportion to the decrease, however, Sadness will not recover its value.
The variable referring to the emotional state Fear will be increased due to the "possibility" of the occurrence of a "bad event" for the Agent, such as the presence of an enemy that can generate aggression or an object that can generate stress.
However, a decrease in its value can be observed if the "possibility" of damage disappears. When an increase in the Fear variable is observed, there is a relative and proportional decrease in the Happiness variable since the accumulation of Fear tends to inhibit the emotional state of Happiness. As for the subsequent decrease in Fear, the value of Happiness proportional to the decrease is recovered.
The variable referring to the emotional state Joy / Happiness will be increased upon the occurrence of a "good event" for the Agent, such as the satisfaction of a need or the presence of an object of desire.
As for the interaction with other variables, when an increase in the Happiness variable is observed, there is a relative decrease in the Fear variable so that the accumulation of Happiness tends to inhibit the emotional state of Fear. As for the decrease in Happiness, the value of Fear is recovered in proportion to the decrease.
The variable referring to the emotional state Sadness will be increased, unlike the others, upon the withdrawal of a pre-existing "good event" for the Agent, that is, a "loss", such as the interruption of the satiation of a need or the removal of an object of desire. However, to decrease it, in addition to the natural decay over time, the "loss" event must fade from memory (a process called "forgetfulness"). When there are emotions such as anger or fear associated with the lost element, the removal of their influence on the Emotional Agent's emotional state provides a "relief", that is, a decrease of these emotions.
When interacting with other variables, when an increase in the Sadness variable is observed, there is a relative decrease in the variables Anger, Fear, and Happiness so that the accumulation of Sadness tends to inhibit the other emotional states. As for the decrease in Sadness, there is a recovery of the initial values of the other emotional states and, unlike the other emotional states, it is the only emotion that is not linked to the entries in memory so that its presence is only perceived in the Emotion module.
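To make these interaction rules concrete, here is a minimal Python sketch of the increase/cross-inhibition dynamics described above. All names, the 0..1 value range, and the inhibition coefficient are my own illustrative assumptions, not values from the thesis, and the recovery-on-decrease rules are omitted for brevity.

```python
class EmotionState:
    """Holds the main variables representing the Agent's emotional status."""

    def __init__(self):
        self.anger = 0.0
        self.fear = 0.0
        self.happiness = 0.0
        self.sadness = 0.0

    def _clamp(self):
        # Keep every emotional variable inside an assumed 0..1 range.
        for name in ("anger", "fear", "happiness", "sadness"):
            setattr(self, name, max(0.0, min(1.0, getattr(self, name))))

    def raise_anger(self, amount, inhibition=0.5):
        # A "bad event" (pain, aggression, stress) raises Anger and
        # partially "blinds" the other emotional states.
        self.anger += amount
        self.fear -= amount * inhibition
        self.happiness -= amount * inhibition
        self.sadness -= amount * inhibition
        self._clamp()

    def raise_fear(self, amount, inhibition=0.5):
        # The mere "possibility" of a bad event raises Fear,
        # which inhibits only Happiness.
        self.fear += amount
        self.happiness -= amount * inhibition
        self._clamp()
```

A usage pass: a bad event raises Anger while suppressing the rest; a later threat raises Fear on top of that.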
It is important to add that, in addition to the increase/decrease mechanisms described earlier, every emotional state represented will suffer a slow and gradual decrease over time. Emotional empathy toward the Emotional States of other Agents is also considered: the Agent will tend to sympathize with the Emotions felt by other Agents, that is, it will mirror these Emotions as if they were its own. This mirroring occurs in values proportional to the Emotions already associated with those other Agents, that is, the Agent will tend to feel an Emotion according to the values of Happiness, Anger, and Fear associated with the target of empathy. In addition to this proportionality, the value of the mirrored Emotional state may be inverted if the target of empathy is an enemy.
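A tiny sketch of the two mechanisms just mentioned, gradual decay and empathy. The decay rate, the use of the associated Happiness value as the empathy weight, and the sign inversion for enemies are illustrative assumptions on my part.

```python
def decay(value, rate=0.01):
    """Slow, gradual decrease of an emotional variable over time."""
    return max(0.0, value - rate)

def empathize(own, other_emotion, associated_happiness, enemy=False):
    """Mirror another Agent's emotion in proportion to the Happiness
    already associated with that Agent; invert the sign for enemies."""
    mirrored = other_emotion * associated_happiness
    if enemy:
        mirrored = -mirrored
    return max(0.0, min(1.0, own + mirrored))
```

So a friend's fear of 0.5, weighted by an associated Happiness of 0.4, adds 0.2 to the Agent's own fear, while the same signal from an enemy subtracts it.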
As for the potentials of the activation functions of emotional states, the positive response is greater than the negative response in situations of low affective stimulus, which is called the Positivity Offset; however, the increase in response per unit of stimulus is greater for negative stimuli than for positive ones, which is the Negativity Bias. The Positivity Offset is thus the tendency toward a positive response at near-zero stimuli: at low levels of evaluative activation, the motivation for the Agent to explore ends up being stronger than the motivation to hide. This tendency should have an important survival value at the species level, as it encourages exploration in neutral environments.
As for the Negativity Bias: since exploratory behavior can place the individual in close proximity to a hostile stimulus, and since it is more difficult to reverse the consequences of a damaging or even fatal attack than the loss of an unseen opportunity, natural selection may have resulted in a stronger reaction to negative stimuli than to positive ones.
Species possessing Positivity Offset and Negativity Bias enjoy the benefits of exploratory behavior and a self-preserving posture through the predisposition to avoid or withdraw from threatening events.
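The Positivity Offset and Negativity Bias can be pictured together as a simple piecewise-linear activation function. The intercept and the two gains below are made-up numbers, chosen only so that the positive response at zero stimulus and the steeper negative slope are visible.

```python
def evaluative_response(stimulus):
    """Asymmetric evaluative activation: positive intercept at zero
    stimulus (Positivity Offset), steeper slope for negative stimuli
    (Negativity Bias). Coefficients are illustrative assumptions."""
    OFFSET = 0.1          # positive response at near-zero stimulus
    POSITIVE_GAIN = 0.5   # response gain per unit of positive stimulus
    NEGATIVE_GAIN = 0.9   # steeper gain per unit of negative stimulus
    if stimulus >= 0:
        return OFFSET + POSITIVE_GAIN * stimulus
    return OFFSET + NEGATIVE_GAIN * stimulus
```

At zero stimulus the response is positive (exploration wins), while one unit of negative stimulus moves the response further than one unit of positive stimulus.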
These variables that represent Emotions are divided into two distinct but interconnected categories, the Natural Emotional State and the Current Emotional State, which provide an individualized and singular emotional reaction according to the agent in question.
The Natural Emotional State is the emotional state of the agent at the beginning of the simulation, that is, it is his emotional personality and will serve as a measure to modulate the emotional values added to each new entry in memory, as well as the entries in the Current Emotional State. The Current Emotional State is derived from the Natural Emotional State and from interactions with other Agents/Objects/Events/Memory and represents the Agent's Emotional State at the current time.
Despite the structural similarity between the Natural Emotional State and the Current Emotional State, the second is the carrier of the Emotional Agent's emotional state, so it is there that the increase/decrease of the values of the Emotion variables is seen.
The Emotion module is responsible for representing the agent's Current Emotional State and the Natural/Standard Emotional State, that is, the personality. These two types of emotional states are located in different internal modules, however, directly connected.
The internal module Natural/Standard Emotional State performs a qualitative analysis of the external inputs with the aid of the internal inputs sent by the external module Internal States and the Memory module. The qualitative analysis is distributed in different submodules, one for each type of emotion, where each one is responsible for analyzing the data and inserting values in the variables corresponding to the emotional state associated with them. Such values are sent for storage in the Memory module and aggregation in the internal Current Emotional State module.
The Current Emotional State internal module will be responsible for the accumulation of emotions generated in each input that passes through the Natural/Standard Emotional State, building a circumstantial emotional portrait of the Agent, regardless of personality, according to the set of inputs received. This circumstantial emotional portrait, due to the volatility of the current Emotional State internal module, is constantly changing whether through new aggregations/subtractions of Emotional States or even the absence of these in the case of weakening over time of the stimuli received.
The Current Emotional State also provides the Body's emotional reflex to the Internal States module, as Emotional Body Expression, producing a physical reaction by the Emotional Agent according to the Current Emotional State. As for the data flow, the Emotion module is used to associate agents and objects, through the Internal States, with new emotional states and to collect the emotional feedback already stored in the Memory module. In addition, the output of the Logic module also influences the reconstruction of the environment inside the Attention module.
With this structure, in addition to the current mental state of the agent, taking into account all external and internal inputs, the individual emotional personality of the Emotional Agent is simulated. In this way, two agents with exactly the same structure, receiving the same sets of information, but with different values in the variables of the internal module Natural/Standard Emotional State, may present divergent behaviors, because each new record will be modulated by the values of these variables. So, in addition to the values of each new record diverging, when this set of entries is saved in the Memory module, a different set of emotional values is created for each Emotional Agent and, in the same way, different values are aggregated in each one's internal Current Emotional State module.
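A minimal sketch of this modulation idea: two structurally identical agents diverge because each new input is weighted by the personality stored in the Natural Emotional State before being aggregated into the Current Emotional State. The multiplicative weighting and all names are my own assumptions for illustration.

```python
class EmotionalAgent:
    """Only the Fear channel is sketched, for brevity."""

    def __init__(self, natural_fear):
        self.natural_fear = natural_fear   # personality, fixed at the start
        self.current_fear = 0.0            # circumstantial emotional state

    def perceive_threat(self, raw_intensity):
        # Each new input is modulated by the personality before being
        # aggregated, so identical inputs diverge across agents.
        modulated = raw_intensity * self.natural_fear
        self.current_fear = min(1.0, self.current_fear + modulated)
        return modulated
```

Feeding the same threat of intensity 0.5 to a "brave" agent (natural fear 0.2) and a "fearful" one (natural fear 0.9) yields different Current Emotional States, which is exactly the divergence described above.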
In the next post we will see the most complex and primordial module of the entire architecture, the Memory module.
Till next time folks!!!
Present Day, Present Time: 2021/04/02 - Layer 11
Hello, everybody!!!
Today I fooled around a little with Arduino and servos, all for the sake of prototyping a dog leg. I will use 16 servos for one leg (a basic single back leg will need a minimum of 14 servos, and a single front leg 18). There's a problem with the current draw from the power supplies, so I will have to improvise with more PCA9685s on a weaker power supply, using only half the slots for servos.
Ah, I took Luna (my little dog) to the pet shop to remove some knots in her fur, which appeared after her last surgery, and they ended up shaving her completely. Now, instead of a Shih Tzu, she looks like a Bulldog. Hehehe.
I will post here the next steps. Maybe tomorrow(?) I will post about the memory module. Stay tuned.
Present Day, Present Time: 2021/04/04 - Layer 12
Hello again!!!
Today I found out that I will need modeling clay, because I will need to test different approaches for the leg bones and epoxy only gives you one try per piece.
Anxious to return to playing with servos and "bones".
Let's continue with the modules?
Memory module schema:
This is one of the most important and most complex modules of the Emotional Agent, because it is from there that we know which agents and objects are associated with which types of emotions, and how events affect the Current Emotional State through the Natural/Standard Emotional State, both in the Emotion module.
Below we will review some of the general concepts about emotions used in this module.
In this module, the conceptual architecture of the Emotions related to each Agent (AG) / Object (OB) / Event (EV) entry recorded in memory is divided into two distinct sets of mental states that encompass anger, fear, and happiness by numeric variables.
The Emotional State AG / OB / EV is modulated by the Natural/Standard Emotional State, that is, an aggregated emotional history is generated for each element in the memory. The Emotional State Scene is added to each element in the internal Scene module in the Attention module. It is worth mentioning that, among the emotional states related to memory entries, there is no Sadness because, unlike the other emotions, it is related to a situation of loss/absence, not to the presence of a particular Object or Agent.
Firstly, the data captured from the environment arrives through the external Sensors module, which in turn distributes it, together with the external Emotion module, to the internal modules Events Action / Reaction Memory, Agents Memory, and Objects Memory.
The internal Events Action / Reaction Memory module has a record of the different types of events, with actions and reactions known to the Emotional Agent, which are used to recognize events involving agents and objects. This list of events, actions, and reactions allows the Emotional Agent to recognize the different actions, who performed them, and who received them, thus building an appropriate emotional response to the relevant action.
Examples of Basic Actions or Events are the act of Hurting, containing Agent or Active Object and Agent or Passive Object of the Action; the act of Activating any mechanism, be it a door or a lever; the act of taking some object from the environment; the act of dropping an object into the environment; the act of handing over an object to another specific agent; the act of combining a particular object with another object, whether the same type or not; the act of Consuming an object by the agent, be it food, water or some type of potion; the act of Giving Order or Imposing an Objective to an Agent; the act of Teaching or demonstrating a type of Action, whether unknown or not, and its relationship with a particular Agent or Object; the act of approaching, that is, reducing the distance in relation to an agent or object; the act of moving away from, that is, increasing the distance in relation to an agent or object and, also, any and all possibilities of the Agent's Body Movement.
Unlike Agents and Objects in memory, an Action cannot be linked directly to an Emotional State, as it depends a lot on the context of the Action and the elements involved. The act of opening a door can only be considered good or bad depending on the circumstances surrounding this Action and the relationship of the Emotional Agent to these other elements involved. This internal module has no direct connection with the external Emotion module and depends on the emotional states linked to the different agents and objects in the scene.
In the Agents Memory, all Agents with whom the Emotional Agent has had or has contact are registered, allowing an individualized record of all Actions referring to these Agents and the different types of mental states attached to them. For each element stored in this internal module, there are two types of aggregated mental states: the AG / OB / EV Mental State, composed of the aggregation of the different mental states collected through continuous interactions with the environment, and the Scene Mental State, which arises from the internal module Scene, in the external module Attention. Also stored in the Agents Memory is the spatial position in which the Agent in question was located the last time it was detected, as well as data on its physical condition, speed of movement, etc.
In the Objects Memory, similarly to the Agents Memory, all the Objects that the Emotional Agent has had or has contact with are registered, allowing an individualized record of all the actions involving them and the different types of mental states added to them. For each element stored in this internal module, there are also two types of aggregated mental states: the AG / OB / EV Mental State, composed of the aggregation of the different mental states collected through repeated interactions with the environment, and the Scene Mental State, which appears from the different interactions in the internal Scene module, from the external module Attention. The spatial position in which the object in question was found the last time it was detected in the Scene is also stored, as well as details such as who owns a particular object and its physical condition.
From these three main internal modules, together with the associated mental states coming from the external module Emotion and data from the external modules Internal States and Logic, the following specialized internal modules are created: Common Sense Memory, Recent Events Memory, Problem / Solution Memory, Events Possibilities Memory and Goals Memory, which are detailed below.
Even if the Emotional Agent has never interacted with a certain element, from the data of the internal module Common Sense Memory a preconceived emotional value can be associated with this new element, inducing behavior closer to the appropriate one according to the experiences acquired with other elements of the same type or class. Due to the diversity of possible values, it is appropriate to associate a value that represents the whole set of types or classes of Agents and Objects in memory, that is, the average of all the aggregated emotional states recorded is used as the standard emotional value for new or unknown elements.
As a practical example, we have the situation where the Emotional Agent encounters two types of known enemies (a) and (b), each with a particular emotional value. With the presence of a new agent (c) of the same type or class as the enemies, this new element is assigned a common sense value based on the average of the emotions linked to the two enemies.
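The common-sense average can be sketched in a few lines. The representation of memory as (class, emotional value) pairs, and the choice of a single Fear-like value per entry, are illustrative assumptions.

```python
def common_sense_value(memory, element_class):
    """Average the aggregated emotional values recorded for all known
    elements of the same type or class; this average becomes the
    preconceived value for a new element of that class."""
    values = [value for cls, value in memory if cls == element_class]
    if not values:
        return 0.0   # no preconception for a wholly unknown class
    return sum(values) / len(values)

# Known enemies (a) and (b) of class "bee"; newcomer (c) of the same
# class inherits their average. Numbers are made up.
memory = [("bee", 0.8), ("bee", 0.4), ("cake", 0.0)]
```

Here the newcomer bee would start with a preconceived value of 0.6, the average of the two known bees.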
There is also the possibility for the Agent to press a button - or open a certain door, for example - and thus receive Emotional values from a certain previous, or even subsequent, event, as long as these events are sufficiently close in time. As an example, the Emotional Agent, when crossing a room, assigns part of the Emotional value associated with the enemy found in this room to the entrance door, used before meeting the enemy, and to the exit door, used after this encounter. With this type of temporal emotional aggregation, the Emotional Agent gains the ability to anticipate events or actions that generate desirable consequences or not.
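A rough sketch of this temporal aggregation: events close in time to an emotional event receive a fraction of its value. The 25% share, the two-step window, and the event records below are all assumptions made for the example.

```python
def propagate_emotion(events, source_index, share=0.25, window=2):
    """Give every event within `window` steps of the source event a
    `share` of the source's emotional value, leaving the source as is."""
    result = list(events)
    _, source_value = events[source_index]
    for i, (name, value) in enumerate(events):
        if i != source_index and abs(i - source_index) <= window:
            result[i] = (name, value + share * source_value)
    return result

# Timeline of the example above: entrance door, enemy, exit door.
timeline = [("entrance door", 0.0), ("enemy", 0.8), ("exit door", 0.0)]
```

After propagation, both doors carry part of the enemy's emotional value, which is what lets the Agent anticipate the encounter next time it sees the entrance door.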
The internal module Recent Events Memory is responsible for associating the last Actions and Reactions with the Agents and Objects involved with them. Then, an Event can promote a temporal Emotional influence zone for new Events or occurrences already registered in Memory. It is also from this module that different mental states are aggregated to the Agents and Objects of the Agents and Objects Memory modules as Events involving them occur.
The range of possibilities for actions by the Emotional Agent in a given event, even if these actions do not lead to the resolution of a specific type of problem, is obtained in the internal module Events Possibilities from the internal module Recent Events Memory. In this module, we have the construction of a set of all known Action and Reaction possibilities involving Agents and Objects in memory from the experience and interaction with them. If the Emotional Agent encounters an enemy, it then has a set of possibilities for actions based on observations of the environment and previous experiences.
The exemplified set of possibilities can be detailed as Speaking (a), Attacking (b), Distancing (c), and Approaching (d).
As for the resolution of problems, whether they come from Agents, Objects, or Events, it is possible to navigate in a tree of actions to be performed by the Emotional Agent to reach a punctual solution, even if this solution is a set of actions for the agent to be able to get up and stand on its own feet. In the internal module Problem / Solution Memory, from the internal module Events Possibilities, there is the grouping of actions that resulted in the resolution of a problem involving the types or classes of the Agents and Objects coming from the Common Sense Memory.
In the example, the action related to solving the problem, in this case, the encounter with the enemy, is indicated by the option Attack (b).
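The selection among the possibilities (a)-(d) could be sketched as a lookup of learned solutions by problem class over the Events Possibilities set. The data and the problem-class names here are illustrative, not from the thesis.

```python
# Set of known Action possibilities from the Events Possibilities module.
possibilities = ["speak", "attack", "distance", "approach"]

# Actions recorded in the Problem / Solution Memory as having solved
# a given class of problem (illustrative assumption).
solutions = {"enemy encounter": "attack"}

def choose_action(problem, possibilities, solutions):
    """Pick the learned solution for this problem class, if the action
    is among the currently known possibilities."""
    learned = solutions.get(problem)
    if learned in possibilities:
        return learned
    return None   # no known solution; other behaviors take over
```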
It is also possible to achieve specific objectives independently of the Emotional Agent's expected Action for a given moment because, in the internal module Goals Memory, the direction of the behavior of this Emotional Agent is produced. This direction is given through specific actions as a solution to a given problem, involving or not a specific Agent / Object also previously chosen, causing a specific behavior. Thus, in addition to the element pointed out as an objective, the addition of one or more specific actions from the internal module Events Possibilities is allowed regardless of whether they are linked to problem-solving or not.
It is like a gymkhana, where anyone who manages to travel a certain distance holding a boiled egg with a spoon in their mouth wins (an egg-and-spoon race). This internal module exists to allow the execution of arbitrary solutions or of those that have no direct relationship with solving a problem.
Phew, this post almost didn't come out. Well, this was an introduction to each of the internal modules of the Memory module. We will detail them later on, together with their operation.
Till next time!!!
Present Day, Present Time: 2021/04/09 - Layer 13
Hello, people!!!
Today I'm inspired!!! I have a playlist going back 10~20 years, and every time I hear it I get full of energy. On the rare occasions that I find new music I like, it's common for me to listen to it on loop all day long. Hehe.
Today's internal module is very simple, so this post will take only a couple of minutes to make, including creating the cosplay of our little yellow squared friend (don't ask me why I do this; even I don't know).
In addition to the information coming from the environment outside the Agent itself, such as the presence of enemies nearby, it is vitally important to also obtain the internal states of the Agent itself, which may vary depending on the game, such as current health status, hunger level, or even pain. It is through the Internal States module that the internal input is generated, which, in turn, will complement the flow of data captured by the external sensors to be sent to the Emotion and Memory modules.
Depending on the game and the applied logic in question, the types of internal input can vary widely, from the physical condition of the Emotional Agent, its spatial positioning, and even indicators of pain and pleasure. This module is essential in learning because, in the face of new stimuli involving other Agents, Objects, and the Emotional Agent itself, it is the Internal States module that will define whether these stimuli are good or bad.
Basically, it will associate, with the external Emotion module, an emotional state to the stimulus registered in memory. For example, the Emotional Agent finds in the environment two unknown Agents identical to each other (a), but when attacked by one of them (b), it will start to associate the Emotion of Fear both with the Agent that attacked it and with the Agent that remained inert (c).
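A sketch of this generalization: a bad internal stimulus associates Fear both with the attacker and with identical bystanders. The flat fear value and the dictionary representation of memory are assumptions for the example.

```python
def register_attack(memory, attacker, bystanders_of_same_type, fear=0.5):
    """After being attacked, associate Fear with the attacker and with
    every identical (same-type) Agent present, as in the example (a)-(c)."""
    memory[attacker] = memory.get(attacker, 0.0) + fear
    for agent in bystanders_of_same_type:
        memory[agent] = memory.get(agent, 0.0) + fear
    return memory
```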
Next, we will see the second most important module of this architecture, the Attention module.
See you later.
Present Day, Present Time: 2021/04/10 - Layer 14
Hello again!!!
Today I spent some time making a structure to lift and support the robot "bones" prototype. I started mapping the connection points for tendons too. Tomorrow I will finish the supporting structure.
Present Day, Present Time: 2021/04/11 - Layer 15
Hello again, again !!!
Today I'm trying to make 2 posts on the same day!!! [2021/04/10] (ps: I failed) Let's listen to our best custom 10~20 year old playlist (I will post it here, someday) and start to write. Let's go!
Today's module is very important for the Agent's existence. It is with it that the Agent can form an internal representation of the external world: a Scene.
Attention-related variables serve to limit the amount of information to be passed on to subsequent modules by separating the really relevant information from mere external stimuli. In this module, the conceptual architecture of Surprise / Attention is represented as a set of variables that serve as a filter to define which Agents, Objects, and Events are relevant and will generate interaction on the part of the Emotional Agent. Its representation is given by numerical variables with positive values and which cover all emotions addressed. With the increase in the focus of Attention, some of the emotional states pass through the filter, colored circle, in the Attention module.
The Attention module itself is responsible for assessing the importance that certain Agents and Objects have for the Emotional Agent and for generating the ability to direct the focus of the interaction. Importance is assessed by the presence of an explicit objective in memory, by the number of records of a particular element in memory, or by the behavior of an element in the environment that differs from what is recorded as the solution to a given problem. The mapping of the environment through the positioning of the Agents and Objects and the different interactions between them also defines the level of importance, thus calculating an additional emotional value, related to the complexity of the environment, for each element of the scenario.
The process of mapping the environment starts in the internal module Scene, where the global assessment of the set of Agents, Objects, and Events sent by the Memory, Emotion, and Internal States modules is made. In other words, a general picture of the current environment in which the Emotional Agent is inserted is assembled, where all the elements mutually influence the emotional values attached to each element, forming a second layer of mental states called the Emotional State Scene.
This new layer, Emotional State Scene, is linked to the emotional states that already exist in the elements of the Scene so that, in addition to adding value when passing through each filter in the Attention module and in the position in the Priority Queue module, they will be essential for decision making in the Logic module.
The dynamics of the mutual influence of the Emotional values linked to each element of the Scene, including the Emotional Agent itself, is calculated through the distance of these elements from each other and the Emotional value linked to each one of them, so as to add influences of the same type of Emotion, as shown in the figure beside. In this case, we have two different scenes with different emotional values linked to each element, exemplifying the emotional reinforcement provided by the spatial positioning of the elements in the scene. (a) shows the small influence of a single Bee on the Cake in the Emotional State Scene; however, in (b), when several bees join, this influence increases considerably.
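The bee-and-cake reinforcement can be sketched with an inverse-distance weighting. The 1/d weighting is my own assumption for how distance enters the calculation, and positions are one-dimensional here just for simplicity.

```python
def scene_influence(target_pos, sources):
    """Sum the Emotion each source element projects onto the target,
    weighted by the inverse of the distance between them; influences
    of the same Emotion type simply add up."""
    total = 0.0
    for pos, value in sources:
        distance = abs(pos - target_pos)
        if distance > 0:
            total += value / distance
    return total

# (a) a single Bee near the Cake vs (b) several Bees: the influence
# on the Cake (at position 0.0) grows considerably. Numbers made up.
one_bee = [(1.0, 0.3)]
many_bees = [(1.0, 0.3), (1.5, 0.3), (2.0, 0.3)]
```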
The construction of the Scene is also responsible for detecting problems involving the Emotional Agent and the important elements of the environment that are necessarily sent to the external Priority Queue module and later Logic. Thus, when a physical barrier or possibility of danger from another element of the Scene is verified, a mark will be added to them indicating the existence of one or more problems.
The assessment of which Agents, Objects, and Events must follow in the flow between the modules, and which will be retained in the Scene, is made through different types of filters with values defined according to the level of Attention of the Emotional Agent. This level is subtracted from the value associated with the emotions of the elements involved in the Scene: a negative result means that the Agent / Object / Event did not pass through the filter, while a positive result indicates, in addition to passage to the subsequent module, the value of "strength" of the associated emotional state.
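The filter rule itself is a one-liner; this sketch simply encodes the subtraction described above, with the (assumed) convention that a filtered-out element returns None.

```python
def apply_filter(emotional_value, attention_level):
    """Subtract the Attention level from the element's emotional value.
    A positive remainder passes and carries the 'strength' of the
    associated emotional state; otherwise the element stays in the Scene."""
    strength = emotional_value - attention_level
    if strength > 0:
        return strength   # passes to the subsequent module
    return None           # retained in the Scene
```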
The Attention module has five different types of filters stored each in an internal module, three of which are basic use filters called Memory Filter, Prevision and Probability Filter, and Relevance Filter and two specialized filters, which are the Goal Filter and Problem Filter. Each element of the Scene that passes through each one of them will receive a determined value or tag according to the importance criterion adopted in each filter.
Basic use filters:
The Memory Filter basically works as a type of "curiosity" where the Emotional Agent is faced with an unknown Object, Agent, or Event and tends to keep the focus of attention directed at them. However, as the presence of these new elements becomes common, the tendency is to treat them as a routine and to divert attention more easily. For each Agent / Object / Event in the Scene, when evaluating who goes through this filter or not, it is verified in the Memory module if the Emotional Agent has little contact with each one or not. It is basically a variable whose value will be obtained from the inverse of the contact quantity, be it punctual or a sum of time intervals.
In (a) the value of the attention given by the Emotional Agent to a Bee is exemplified when he first observes it, however, as this contact becomes common, the attention directed to Bee tends to fall.
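The Memory Filter's "curiosity" value, the inverse of the amount of contact, can be sketched like this; the +1 in the denominator only avoids division by zero and is an assumption of mine.

```python
def curiosity(contact_count):
    """Attention value for an element as the inverse of the amount of
    contact already recorded: unknown elements attract full attention,
    routine ones progressively less."""
    return 1.0 / (1.0 + contact_count)
```

So the first Bee sighting gets full attention, and repeated contact makes the attention directed at Bees fall off, exactly as in example (a).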
The Prevision and Probability Filter embodies the state of surprise where the Emotional Agent keeps his attention focused on elements that come out of everyday life, that appear in Scene unexpectedly, that escape expectations or that contradict a probability. This is the case of Agents, Objects, and Events that may have behavior or aspects that differ from those expected in the internal module Problem / Solution as well as Common Sense Memory, both from the external module Memory.
As an example, we have the case of (b) where two bees are talking, an unusual action for a Bee and not expected by the Emotional Agent.
The Relevance Filter takes into account the issue of emotional impact linked to the Agent / Object / Event that is the target of attention in the Memory and Scene module, that is, the tendency is to keep the focus of attention on the elements that produce higher values of Emotional involvement, whether this is a good emotion, a bad one or both.
In (c) the emotional value attributed by the Emotional Agent is exemplified for various elements of the scene where the element with the highest emotional value attributed will be the one that will attract the most attention.
Specialized filters:
The Goal Filter allows directing the focus of the Emotional Agent's attention by receiving, directly from the Memory module, the Agents, Objects, and Events that have the goal tag, adding a predetermined value equivalent to the importance of the objective and sending them, regardless of their value, to the Priority Queue module.
Finally, the Problem Filter, like the Goal Filter, is different from the others because it does not evaluate the emotional potential of the elements in the scene, but whether there are impediments, dangers, or problems to be marked, whether immediate or probable, involving the Agent / Object / Event focus of attention, which is, consequently, the target of the external Logic module.
Now, we can demonstrate how the previous modules work together as follows:
In the construction of the Scene, in the Attention module, all the elements of the environment and the emotional states associated with them are considered (a). Then, the spatial emotional influence provoked and received by each element is evaluated (b). Once measured, an emotional value (c) is affixed by the Agent to each element of the Scene, so it can be checked whether or not they have enough value to pass through the filters of the Attention module. Finally, the set of Scene elements that managed to pass through the filters is ordered according to the importance value assigned by the Emotional Agent (d).
In the next post, if there is no "Little detour [?]", we will see the Priority queue module. A must-have module.
Till next time, my friends.
Present Day, Present Time: 2021/04/14 - Layer 16
Hello, my dear friends readers.
These days, time is scarce and energy levels are low. Very low. But, let's continue. I do this for love, either way. How I dream of working doing research all day long. It would be paradise. Now, going back to earth, let's continue.
Today we will see the Priority Queue module. It's basically a module to direct attention and aid cognition.
In order for the Emotional Agent to react appropriately to different types of external stimuli, it needs to decide which element of the Scene the Logic module should focus attention on in a given period of time, one at a time, reacting consistently with that stimulus.
This ability to direct the focus of attention is possible thanks to the work of the Attention module: after evaluating the elements of the Scene through each of its filters, it provides the Priority Queue module with the ability to organize the elements by categorizing them.
In this module, the Agents and Objects are organized in order of priority so that only the first in the queue, that is, the one with the highest priority, will be sent to the external module Logic. Likewise, the emotional state associated with it, acquired during the construction of the Scene in the external module Attention, will be sent to the external module Internal States to subsidize the Body's emotional reflex, in Emotional Body Manifestation, regarding the focus element of attention. Thus, the Emotional Agent will always seek to focus on what is most important and relevant to it, however, without ignoring other Agents, Objects, and Events of relatively minor importance.
After the first element is removed from the list, due to a change in relevance or an achieved objective, the element that assumes the most relevant place is sent to the external Logic module, and so on.
In the case of the detection of a problem in the Scene involving the first element of the queue (already sent to the Logic module), after checking all the elements of the Scene in the Attention module, the Agents, Objects, and Events causing some type of impediment receive a marker indicating a problem. This allows them, even if they do not pass the other filters, to be selected through the Problem Filter and sent to the Priority Queue module. Upon reaching the queue, they inherit the priority value of the first element in the list involved in the problem, which was sent to the Logic module, and add their own priority value to it. It is worth mentioning that, in the case of multiple problems related to the single Agent, Object, or Event that is first in the list, the problem with the highest priority is the one selected for sending.
Once a problem is detected and it receives the value of the first in line plus its own value, if a more relevant Agent / Object / Event subsequently appears, the structure of the former first place, together with the problems related to it, is moved entirely to the pertinent position in the queue, giving space to the new first place. In this way, a dynamic and adaptive structure is created where the conditions of the environment, in addition to the individual conditions of each element of the Scene, determine which element should be the focus of attention.
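The queue behavior described above, including a problem element inheriting the focus element's priority and adding its own, can be sketched as a toy Python class. The class name, the element names, and all the numbers are illustrative assumptions, not part of the thesis:

```python
import heapq

class EmotionalPriorityQueue:
    """Toy sketch of the Priority Queue module: highest importance first.
    A 'problem' element inherits the priority of the focus element it blocks
    and adds its own value, as described in the text. Illustrative only."""

    def __init__(self):
        self._heap = []   # entries: (-priority, insertion order, name, priority)
        self._count = 0   # insertion counter breaks ties deterministically

    def push(self, name, priority):
        heapq.heappush(self._heap, (-priority, self._count, name, priority))
        self._count += 1

    def push_problem(self, name, own_priority, focus_priority):
        # The blocking element jumps the queue: focus value + its own value.
        self.push(name, focus_priority + own_priority)

    def pop_focus(self):
        """Remove and return the current focus of attention."""
        _, _, name, priority = heapq.heappop(self._heap)
        return name, priority

q = EmotionalPriorityQueue()
q.push("Cake", 0.9)
q.push("Flower", 0.2)
# The Bonfire blocks the Cake (the focus), so it gets 0.9 + 0.5 and jumps ahead.
q.push_problem("Bonfire", own_priority=0.5, focus_priority=0.9)
```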
In (a), the situation observed in the previous figure continues: the elements of the Scene are distributed in the Priority Queue module according to the importance value assigned to them by the Emotional Agent, who will react to the presence of the Cake because it occupies the first position in the queue. As a consequence of the subsequent reassessment of the Scene by the Logic module, the Bonfire is marked as an obstacle (b) between the Emotional Agent and the Cake, making this object, now marked as a problem, have the importance value of the Cake added to its own and become the first in line.
And that's all for today. Until next time folks!!!
Present Day, Present Time: 2021/04/17 - Layer 17
Ohayo!!!
Good to see you here, dear readers. I hope you are managing to put up with my eccentricities and stick around. I hope you don't run away even if I say: "save the cheerleader, save the world". Hehe.
This module is the true brain of the Agent. The brain of the brain. Let's continue?
The Logic module is what orchestrates the decision-making process, where the Scene Emotional State and the AG / OB / EV Emotional State end up informing the choice of a specific logic, or even the calculation of the cost of a given action in terms of the Emotions involved and the spatial positioning of the elements of the Scene.
The choice of the best cost is basically the result of adding positive Emotional values, like Happiness, and subtracting negative ones, like Fear. In this sense, Anger may be polarized as positive or negative, depending on whether the target of the cost calculation is an enemy or an object of desire. Thus, this module is responsible for the specific action that the Emotional Agent will perform in relation to a single, determined Agent / Object / Event received from the Priority Queue module.
As an example, an Emotional Agent may like a piece of Cake (a); however, if the shortest path to this object is full of Bees (b), the cost of the longer path (c) becomes more favorable once Fear is taken into account.
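The Bee example can be turned into a tiny cost function: positive Emotions add value, negative ones subtract, and distance carries a small penalty. All weights and numbers below are made-up assumptions for illustration, not values from the thesis:

```python
def path_cost(happiness, fear, length):
    """Toy emotional cost of a path: add positive Emotions, subtract
    negative ones, and penalize distance a little. Weights are assumptions."""
    return happiness - fear - 0.1 * length

# Shortest path to the Cake, but full of Bees: high Fear.
short_path = path_cost(happiness=1.0, fear=0.8, length=3)
# Longer, bee-free detour: no Fear, only more distance.
long_path = path_cost(happiness=1.0, fear=0.0, length=7)
# With Fear considered, the longer path scores higher.
```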
The different specialized internal modules are Standard Logic, Goal Logic, and Problem Logic where the determining factor of which module will be responsible for the reaction of the Emotional Agent is whether or not the focus of attention passed through the Goal Filter or Problem Filter of the external module Attention.
The internal Standard Logic module is responsible for the reactive and exploratory behavior of the Emotional Agent and is related to simpler, instinctive behaviors, such as curiosity, since it receives the elements that passed through the internal modules Memory Filter, Prevision and Probability Filter, and Relevance Filter of the external Attention module (a).
Upon receiving the Agent / Object / Event from the external Priority Queue module, the AG / OB / EV and Scene emotional states are checked in order to direct the type of behavior that should be selected from the internal Problem / Solution Memory module of the external Memory module. At the same time, the internal Scene module of the external Attention module is checked to see if, next to this same Agent / Object / Event, there is any other element that represents, or may come to represent, a problem or even an impediment to the execution of the solution found in memory. If so, the problem status is marked on these elements, allowing them to pass through the Problem Filter of the external Attention module and then trigger the internal Problem Logic module in the Logic module.
The internal Goal Logic module receives the elements that passed through the internal Goal Filter module (b), from the external Attention module, and through the Internal Goals Memory module from the external Memory module. It should be remembered that the elements, in addition to the value attached to the output of the Goal Filter, can add new values to the variables related to their emotional states if they also pass through the Memory Filter, Prevision and probability Filter, or Relevance Filter.
Unlike the Standard Logic and Problem Logic internal modules, the Goal Logic, in addition to having access to the internal Problem / Solution module of the external Memory module, has access to the internal Goals Memory module. This way, in addition to receiving the Agent, Object, or Event from the Priority Queue module, it can directly receive which action to perform without necessarily having to access the internal Problem / Solution module in search of a compatible action.
Thus, through the use of objectives, the behavior of the Emotional Agent can be guided towards less intuitive and more specific actions, unlike the other internal logic modules, where a more standardized action is selected according to the list of problems and solutions found in memory.
Exactly as in the other logics, the internal Scene module of the external Attention module is checked to see if, along with the Agent, Object, or Event, there is any other element that represents, or may come to represent, a problem or even an impediment to the execution of the solution found in memory or imposed by the internal Goals module. If so, the problem status is marked on these elements, allowing them to pass through the Problem Filter (c) of the external Attention module and later be sent to the Priority Queue. Then, the internal Problem Logic module takes over.
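Putting the three logics together, the choice of which internal module handles the focus element depends on which Attention filters it passed. A minimal dispatch sketch follows; the precedence order (problems before goals) is my reading of the text, not something the thesis states explicitly:

```python
def select_logic(passed_goal_filter, passed_problem_filter):
    """Toy dispatch: pick the internal Logic module for the focus element
    based on which Attention filters it passed. The precedence (Problem
    before Goal, Standard as fallback) is an assumption for illustration."""
    if passed_problem_filter:
        return "Problem Logic"
    if passed_goal_filter:
        return "Goal Logic"
    return "Standard Logic"

# An element that passed no special filter falls back to reactive behavior.
default = select_logic(passed_goal_filter=False, passed_problem_filter=False)
```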
The internal module Problem Logic is related to more complex behaviors because, unlike the internal modules Standard Logic and Goal Logic, the Problem Logic serves as an aid to solve intermediate problems detected between the Agent / Object / Event that is the focus of attention and an element that has been preventing the implementation of an Action.
We have as an example a Bee preventing the Emotional Agent from reaching a piece of Cake, causing him to take the hammer to solve this problem, according to a solution located in memory.
For this reason, the Problem Logic will generally be executed as a support after the use of the other logics to serve as an intermediate step and solve the obstacles until the natural flow between the modules can be resumed.
Even while it is already running, it remains to check again whether there is any other element, related to the Agent / Object / Event and the previously detected problem, that represents or may represent a new problem or impediment to the execution of the solution found in memory. If so, these new elements are marked as new problems and, when they pass through the Problem Filter, the internal Problem Logic module is executed again.
Thus, the Emotional Agent can break the resolution of complex problems into interdependent sub-problems and easily find a solution in memory.
In addition to specialization, as to the type of logic involved in the behavior of the Emotional Agent, the resulting actions are also divided into specific groups of possibilities that are located in the internal modules Confront, Avoid and Circumvent.
The Confront internal module is linked to the Standard Logic, Goal Logic, and Problem Logic modules since it involves problem-solving behavior and interaction with Agents, Objects, and Events. A typical action of the Confront module is the Emotional Agent approaching an enemy to attack. Despite the name, confrontation does not necessarily indicate conflict, because every action that fulfills its purpose, whether it is consuming food or opening a door, is a confrontational action aimed at reaching a solution.
The internal module Avoid is also connected to the modules Standard Logic, Goal Logic, and Problem Logic and is responsible for the precautionary and escape behavior in the face of imminent danger.
The example demonstrates a typical action in the Avoid module where the emotional Agent moves away from an enemy to escape or avoid contact.
When triggered through the internal Goal Logic module, it means either that the objective itself is to avoid / escape from a certain Agent / Object / Event, or that the objective could not be reached, thus generating a new type of problem for which a solution must be sought in memory.
When triggered through Standard Logic, the Emotional Agent will only have to avoid or evade elements that pose danger, since this logic is used when the Agent has no explicit objectives.
In more complex situations, it may happen that additional elements in the Scene are marked as problems if the Emotion tied to the initial danger is greater than the Emotion tied to the additional elements.
When triggered through Problem Logic, avoidance usually means looking for other problems that involve reaching the goal since, for some reason, the current problem must be abandoned.
The internal module Circumvent, unlike the previous ones, is connected only to the internal module Problem Logic, since one of the solutions to overcome a certain obstacle would be to circumvent it avoiding a confrontation and, at the same time, an escape.
The example demonstrates a typical action of the Circumvent module where the Emotional Agent bypasses an enemy marked as a problem in order to reach the objective, that is, the piece of Cake.
In these cases, before trying to solve the existing problem, the Emotional Agent seeks solutions that take fewer actions and that are less dangerous or costly in resources, depending on the game. For instance, when finding a locked door, it will try to go around it along the wall before trying to open it.
There is, in a way, a hierarchy between the different groups of Actions depending on which logic is being used. In the case of Problem Logic, the tendency is always to first try to circumvent a given problem; if that proves impossible to execute, the confrontation of the source of the problem begins; and when that, too, is impossible, the problem is finally avoided.
As for Goal Logic and Standard Logic, the main difference is that the latter gives preference to avoiding the element in question whenever the cost does not justify the gains in terms of emotional states. In the use of the different logics and groups of actions, the notion of cost must be taken into account, so that a positive value means that the confrontation is advantageous, as opposed to a negative value, which implies avoiding it.
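The hierarchy between Circumvent, Confront, and Avoid can be sketched as a small decision function. The exact rules below (and the function name) are my illustrative reading of the text, not a definitive specification:

```python
def choose_action_group(logic, cost, can_circumvent=True, can_confront=True):
    """Toy sketch of the action-group hierarchy described above:
    Problem Logic prefers Circumvent, then Confront, then Avoid;
    the other logics Confront when the emotional cost is positive,
    otherwise Avoid. Rules are assumptions for illustration."""
    if logic == "Problem Logic":
        if can_circumvent:
            return "Circumvent"
        if can_confront:
            return "Confront"
        return "Avoid"
    # Goal Logic and Standard Logic: positive cost favors confrontation.
    return "Confront" if cost > 0 else "Avoid"
```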
Again, as an example of how the mentioned modules work: when the Emotional Agent detects a Bonfire (b) blocking the path to its objective, the Cake (a), it marks the Bonfire as a problem (c) and tries to locate the solution indicated in memory (d), Water. However, a new problem arises (e) because the solution found needs another element as a necessary condition to be put into practice (f), that is, the Bucket. Moreover, near the Bucket there is an Enemy Bee (g) serving as an obstacle for the Emotional Agent who, by marking the Bee with the status of a problem (h), can then bypass it. Thus, the Agent is able to dodge the Bee, pick up the Bucket, collect the Water, and only then put out the Bonfire to finally get the Cake.
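The Bonfire chain above can be sketched as a depth-first walk over a problem/solution memory. The dictionary below is a made-up stand-in for the Problem / Solution Memory module, just to show how interdependent problems unwind into an ordered plan:

```python
# Minimal sketch of Problem Logic chaining: each problem's solution may be
# blocked by another element, which may itself be blocked, and the chain is
# resolved depth-first until a directly executable action is found.
# SOLUTIONS is an illustrative stand-in for the Problem / Solution Memory.

SOLUTIONS = {            # problem -> (solution action, blocking element or None)
    "Bonfire": ("collect Water", "Bucket"),
    "Bucket":  ("pick up Bucket", "Bee"),
    "Bee":     ("dodge Bee", None),
}

def resolve(problem, plan=None):
    """Recursively solve intermediate problems first, then the original one."""
    plan = plan if plan is not None else []
    solution, blocker = SOLUTIONS[problem]
    if blocker is not None:
        resolve(blocker, plan)   # solve the intermediate problem first
    plan.append(solution)
    return plan

# resolve("Bonfire") -> ["dodge Bee", "pick up Bucket", "collect Water"]
```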
Sorry, guys. Today's post was too tiring to translate/write. It took me 2 days to finish. Arrrg.
In the next post, we will see the last module (Scripts Action / Reaction) and then, finally, be able to start to post about concepts and graph.
See you later!!!
Present Day, Present Time: 2021/04/17 - Layer 18
Hello!!!
This is the smallest post ever posted here. I'm happy about this little detail :-p
These days I got a little DOGE and it multiplied by 8!!! Sadly, I only put in something like the price of a cheap Steam game, to begin with. Now I can buy 4 pricey games on Steam. Hehe.
Back to business, let's go?
The Scripts Action / Reaction module is nothing more than the translation of the logic selected in the external Logic module into the execution of the selected steps and actions, through the direct execution of functions, scripts, and other elements particular to the Emotional Agent project in question. As with other types of Agent models, there are groups of functions and scripts responsible for the actions of the Emotional Agent in the environment through the Actuators.
The functions and scripts are divided into several codes with specific functions, such as how the Agent will move, perform certain actions, and how the animation of the character will be done, whether through drawings in sprites or 3D objects.
This is the most open module and depends exclusively on choices such as the platform on which the game or robot is being developed, the programming language used in the development, and the game style selected.
This is it. With this last post, we have had an introduction to all the modules of the architecture and can now, in the next post, start to talk about concepts.
Bye!!!
Present Day, Present Time: 2021/04/19 - Layer 19
Hello, everybody!!!
Today, we will speak about concepts and why it's so important for the architecture.
Let the fun start!!!
Let's start with a little introduction. Shall we?
What is a concept?
It's all about sensorial input and information: the concepts that can be created from a particular input and stored in memory.
It's like, in order to understand an image, we use many subsystems (like V1, V2, V3, V4, MT, PO, TEO, TE, etc.) to break the information into granular concepts to create a network/graph of concepts with this input information, such as shape, specific angles, color, pattern, motion, etc.
To identify letters, we use the same elements we use to identify day-to-day shapes, such as a mountain or the tip of a branch. This is what "seeing" an image is: knowing the concepts about it. Knowing that a letter H is a group of shape concepts linked together, and that a building in a city can share those same shape concepts. That the red of a building is the same concept as the red of a car, and being able to link these objects to a single concept.
For the auditory system, a network of sounds is constructed as the different structures within the cochlea are activated. Almost a dictionary of sounds.
Once again, in parallel, a network of muscle stimulation is constructed, from which the exploratory movements are mapped and the network of movements and muscle stimulation is created.
When a baby babbles and produces an "AAAA", "BUUUU", "MAAAA", he is building a network/graph of new muscular concepts and, at the same time, feeding and building the network of auditory concepts.
The two networks end up connected due to the temporal proximity of their construction and activation.
These networks can be constructed even before they are fully interconnected. For example, a network/graph of concepts about letterforms can be constructed even before the muscular network of concepts of the vocal organs is connected to the network of concepts of specific sounds.
Example: A person may be able to read "Arnold Schwarzenegger" without knowing how to "pronounce" "Arnold Schwarzenegger" but knowing that he is an actor.
The opposite is also true where a network/graph of specific sound concepts can be constructed and linked to the network of in-memory object concepts even before it is linked to the network of concepts about letter shapes.
Example: A person may be able to pronounce "Neko" (Cat in Japanese) without knowing that the "letters" "猫" pronounce that sound, but knowing that Neko is a feline.
You need to have in your memory the vast network/graph of concepts that are associated with letterforms, the network of the concepts of the junction of these letters (word), and the network of concepts of meaning associated with these letter junctions. If you do not have these networks of concepts built or linked to networks that form words or even letters and try to read a book in an unknown language, you will only see the forms that you can associate with the concepts you already have linked to these shapes in memory.
In short, the structure of a language does not differ much from the act of seeing, hearing, moving, and freely making associations among these different types of concept networks.
It seems that language is only an extension of a more general network/graph of concepts used to represent physical elements of the world, using elements like shape, color, form, etc.
In the same way, the network/graph of concepts of the muscular motor stimuli used to produce a sound with the mouth is built and linked to the corresponding network/graph of concepts built from ear input. It all starts with a "seed", a primary structure that will serve as the ground for other concept networks/graphs. It's like a baby saying "Woof-woof" while pointing at a dog: the motor and sound conceptual networks are already mapped and linked thanks to hours of "baby speak" training. He can hear the "Woof" sound and know which muscle groups, and in which sequence, can be activated to produce a similar sound.
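The "Woof-woof" linking above can be sketched as a toy concept graph, where concepts activated together get edges between them. The class and the concept labels below are illustrative inventions, not part of the thesis:

```python
from collections import defaultdict

class ConceptGraph:
    """Toy network/graph of concepts: sensory features are nodes and
    co-occurrence (temporal proximity) creates the links. Illustrative only."""

    def __init__(self):
        self.edges = defaultdict(set)

    def observe(self, concepts):
        """Concepts activated together get linked, like the sound 'Woof'
        and the muscle pattern that produces it."""
        for a in concepts:
            for b in concepts:
                if a != b:
                    self.edges[a].add(b)

    def associated(self, concept):
        return self.edges[concept]

g = ConceptGraph()
# The baby hears 'Woof', produces 'Woof', and sees the dog at the same time.
g.observe({"sound:woof", "muscles:woof", "object:dog"})
# A separate observation links the letter H to its shape concepts.
g.observe({"shape:two-verticals", "shape:crossbar", "letter:H"})
# 'sound:woof' links to the dog and the muscle pattern, not to the letter.
```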
In the case of a camera, you can't just store the pixels of the image in order to see; you have to generate a network/graph of concepts about the image. Shape, color, everything has to have its own conceptual representation, and then the image can be reconstructed from that representation.
Even words themselves have their own network/graph of concepts.
Now, back to our architecture, let's start with a top-down approach:
Maybe you remember that we broke the Bee down into several pieces and every piece had its own emotional value. But how were we able to do this?
It all starts on the Memory module, where the input from the external world is processed to extract the features on it. This input can be an image, sound, touch, or even a gut feedback loop.
These features are classified, cataloged, linked, and on and on.
The image is passed through several subsystems, each designed to extract one or more characteristics (concepts) in order to form a conceptual representation of the Bee, in the form of a network.
With this conceptual representation of a Bee, it is easy to observe the minor concepts that come together to build it.
The same is done in parallel for every Agent or Object in the Scene.
With these separate concepts, it becomes easy to search for similar concepts in the common sense memory (Orange squares) or to create a new entry, in case it is the first contact with a new concept.
Even common sense memory forms its own network of connections, functioning as an index for searches and aggregation of representations of emotional states of a certain class or type of Agent or Object.
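The search-or-create behavior of the common sense memory can be sketched as a simple index over feature sets. The entries and feature names below are made up for illustration; a real implementation would be far richer:

```python
# Toy sketch of the common sense memory as an index over concept sets:
# extracted features are matched against stored entries, and an unseen
# combination creates a new entry. All names are illustrative.

common_sense = {
    "Bee":  {"stripes", "wings", "stinger", "buzzing"},
    "Cake": {"frosting", "sweet smell", "round"},
}

def lookup_or_create(features):
    """Return the best-matching entry, or create one on first contact."""
    best, best_overlap = None, 0
    for name, stored in common_sense.items():
        overlap = len(features & stored)
        if overlap > best_overlap:
            best, best_overlap = name, overlap
    if best is None:                      # first contact with a new concept
        best = f"concept-{len(common_sense)}"
        common_sense[best] = set(features)
    return best
```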
Ok. Time to stop. Today was atypical. Covid took a cousin of my wife today. He was 49 years old. Rest in peace.
We will continue talking about concepts next time.
Stay safe my friends and until the next post.
Present Day, Present Time: 2021/05/09 - Layer 20
Hello everybody!!!! It seems like an eternity since the last time I posted here. These days I was exhausted, physically and mentally. BUT, we need to continue forward to achieve our beloved soft robot pet.
Let's continue?
Last time we saw a little about the common sense network/graph, and now we will see that every Agent, Object, and Event in memory has its own individual version of "common sense" in its own memory site.
Let's start with an example. In this case, our little friend sees a Bee behind a bush. What could be the problem, you ask?
Normally we represent the full network/graph of concepts of an Agent or Object, but in the real world it's almost impossible to extract all the features in a single time frame of interaction.
All we get, in the best of scenarios, is a partial feature extraction of an Agent or Object. So we are stuck with only a partial conceptual representation of the Scene elements.
With this partial representation, how will our friend know that what he sees is actually a Bee?
It all happens here, in its own specialized local memory network/graph, outside the Common Sense memory.
The partial representation will link with only a few of the features that compose the whole of a Bee, or even of other similar insects. In this case, those features were sufficient to identify a Bee.
Then it becomes easy to flow back the missing information about the Bee behind the bush.
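This "flowing back" of missing features can be sketched in a few lines: if enough of a stored entry's features are observed, the rest are assumed. The feature set, the function, and the 30% threshold are all illustrative assumptions:

```python
# Toy sketch of completing a partial observation from local memory:
# a partial view is matched against the stored entry, and if the overlap
# is large enough, the missing features flow back. Names are made up.

bee_memory = {"stripes", "wings", "stinger", "buzzing", "six legs"}

def complete(observed, memory_entry, threshold=0.3):
    """If enough of the stored features are seen, assume the rest;
    otherwise keep only the partial view."""
    if len(observed & memory_entry) / len(memory_entry) >= threshold:
        return set(memory_entry)   # identification succeeds: full concept
    return set(observed)           # not enough evidence: keep the partial view

# Behind the bush, only stripes and buzzing are visible: 2 of 5 stored
# features (40%), which is enough here to infer the whole Bee.
seen_behind_bush = {"stripes", "buzzing"}
```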
Of course, we are free to add more detail to our representation and to the density of features. It all depends on the sensory input.
That's it, folks. I will try to be more present here next weekend. Ah, I will also post an update about the Soft robot pet.
And I will also have to set aside time for readings in neuroscience, for reverse engineering purposes. It gets complicated to do R&D with just over 40 minutes of free time per day. Urrrg!!!
Stay tuned!!!