Human-Machine Communication
Perception of Humanness
Theories of Communication
Human-robot communication, or more broadly human-machine communication, is becoming a larger part of the workplace and will soon be part of day-to-day life. Robots are increasingly important in the service sector: airports, restaurants, and many other industries that provide services for people are incorporating robots and artificially intelligent technology, and these sectors are expected to invest rapidly in it. These systems can carry out a broad range of physical and computational tasks that were once done by humans, and they often interact and communicate with humans directly. Psychological research has been conducted to study how humans respond to and experience different forms of AI robotic technology (Blut, 2021). This research not only predicts future AI-related phenomena but also sheds light on human psychology.
Some of this research overlaps with the study of the customer-object relation. In industries that appeal to the senses, such as marketing or food service, studies have investigated how the senses contribute to a consumer's perception of an object. Consumer-object perception has also been studied on an emotional level: how people form attachments to objects and place value on material possessions influences how marketing is conducted. The aspect of the customer-object relation most applicable to AI robots is anthropomorphism. People often perceive inanimate objects as having human characteristics, and this characterization lends the object a sense of humanness that changes the individual's perception of it. Anthropomorphism is highly applicable to AI robotic technology because the technology is often designed to imitate human characteristics such as speech, ability, and appearance. Robots that look like humans provide a human presence that mechanical-looking robots lack, and in some cases AI's ability to imitate human writing has allowed virtual programs to pass as human. Some research has shown that when customers perceive a service robot as more human, they like the company more. Some research supports the view that the human qualities of the robot facilitate social interaction, while other researchers are skeptical of anthropomorphized robots. The skeptics propose that humans may feel discomfort when interacting with a highly human-like robot: the uncanny valley effect, in which a robot that is close to human yet not quite human strikes people as cold, eerie, or unsettling (Blut, 2021). Many depictions of AI in movies and media use this effect to produce a sense of unease; regardless, robots are often designed to be increasingly human-like.
A meta-analysis found that, overall, the anthropomorphism of a service robot increased customers' intentions to use it. The human-like characteristics of robots seem to allow humans to apply the rules of social interaction; human-like robots meet the customer's expectations, making interaction easier. This supports anthropomorphism theory. The characteristics that appeared to moderate this positive effect were likability, animacy, intelligence, social presence, and safety. The functional characteristics of usefulness and ease of use were found to lead to customer acceptance. Anthropomorphism was related to customer satisfaction and positive affect but not negative affect. Anthropomorphism had a stronger effect for female robots than for male robots, and a stronger effect for information-processing tasks. The customer's need for interaction and age were found to influence the extent to which participants anthropomorphized the robots (Blut, 2021). These findings suggest that the more human-like a robot is, the easier it is to anthropomorphize, which aids communication. They also indicate that while humans have a general tendency to anthropomorphize, this tendency varies between people.
The study recommended that AI robots be used in critical services and information-processing roles such as banking, where anthropomorphized robots may have additional benefits. It also suggested that female names, voices, and appearances may magnify the benefit of anthropomorphization. Finally, the research suggests that businesses may be able to profile and target customers based on customers' qualities and how they are likely to interact with robotic service providers (Blut, 2021). The human tendency to pick up cues of humanness and respond accordingly seems to be conducive to any type of communication involving humans, and research in this area is important for the companies that deploy these robots.
As human-machine communication (HMC) proliferates, it is important to study how humans interact with machines. How humans perceive AI technology, and how communication relationships arise, is central to understanding this interaction. I-It and I-Thou are two different relationships found in the study of communication. I-It is the relationship between a person and an object, while I-Thou is the relationship between two people, in which the conversational partner is treated as a person and engaged on a deeper level. I-Thou relationships often include authentic dialogue, a type of dialogue that is focused on the persons within the dialogue as opposed to some outside task or problem. As AI grows in strength, ability, and humanness, I-Thou relationships become more possible. A large part of the I-Thou relationship is the perceived awareness of the partner. While new AI technology often includes social affordances and cues that promote social interaction, humans seem to find ways to connect regardless. There appears to be a widespread human ability to transfer human-to-human social skills into different contexts, including inanimate computers (Westerman et al., 2020). I-Thou relationships are difficult to achieve, but the combination of the human tendency to apply social skills and increasingly socially intelligent technology may make them possible.
For decades, researchers and engineers have tried to develop forms of automated intelligence, often designed to mimic human intelligence. AI systems have been embodied and augmented to simulate human-like cognition and abilities, and some modern systems incorporate emotional intelligence: they can register emotion and produce emotional expressions of their own. The front line of this development requires knowledge of human behaviors and perceptions, and there is a push to make AI more "human-centric" so that humans can retain control of AI in the future. The Turing test was constructed to provide a goal for human-like AI: it tests whether an AI system can pass as human. As systems approach or achieve this level, the question remains of how humans respond differently when interacting with robots as opposed to humans. Some of these perceptions may be explained using the Computers as Social Actors (CASA) theory. CASA theory holds that humans interact with machines as if they were humans. This is considered a mindless process because humans rely on overlearned behaviors, social categories, and premature cognitive commitments. If someone recognizes human attributes in the technology, he or she will often apply the same scripts of language as with another human. Once cues are recognized, humans satisfice, applying scripts without reevaluating as further cues arrive from the technology (Westerman et al., 2020). In effect, the theory proposes that humans act as if the robot were human.
This phenomenon may also be explained by Spinozan information processing. This type of processing conflates comprehension and belief: it is a heuristic that can be efficient when talking to another person, but it predisposes people to believe by default that their interaction partner is human. The heuristic can be harmful because it makes beliefs difficult to doubt (Westerman et al., 2020). It often seems easier to accept that the object of communication is a human than to acknowledge that social interaction is directed toward a non-living machine.
While people predict that interactions with robots will involve greater uncertainty and less liking, research has not borne this out; instead, humans find robots to be fairly equivalent to other humans as interaction partners. Because these interactions emulate human-to-human interactions, theories of interpersonal communication (IPC) and computer-mediated communication (CMC) can be applied to human-robot interactions (Westerman et al., 2020). These theories were developed for communication among humans, but the similarities in behavior allow them to be extended.
Relational Dialectics Theory (RDT) supposes that human communication involves a push and pull between opposing tensions. This applies to HMC because novelty and predictability work oppositionally in a human-robot conversation: if the machine spontaneously pushes against expectations, it may seem more interesting, but when the robot's responsibilities matter, consistency is important. Another oppositional dynamic is the balance between the benefits of disclosure and the protection of privacy. Some systems listen continuously in order to capture and respond to verbal information, which raises privacy concerns (Westerman et al., 2020). These dynamics show up in human-to-human communication, but the unique capabilities of technology alter them.
Another theory of communication is Grice's four maxims, which outline expectations that communicators hold: quality, quantity, relation, and manner. Quality is the truth value of statements, quantity is the balance between too much and too little information, relation is the information's relevance, and manner is the information's clarity. These guidelines can be valuable for creating effective AI communication. The norms can be strategically violated, but violations can be upsetting; for example, if a person lies (violating quality), the other person in the dialogue may feel manipulated. Since it is important to be able to trust AI, some AI systems have been built around Grice's four maxims (Westerman et al., 2020). These maxims, along with many common phrases and mannerisms, are often followed subconsciously, but it is important for AI systems to follow them so that dialogue flows.
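To make the maxims concrete, here is a minimal, hypothetical sketch of how a chatbot pipeline might screen its replies against three of them. All function names and thresholds are illustrative assumptions, not from any real system, and quality (truthfulness) is omitted because it cannot be judged from the text alone.

```python
# Crude lexical heuristics for three of Grice's maxims.
# Thresholds are illustrative assumptions, not empirically derived.

def check_quantity(reply: str, min_words: int = 3, max_words: int = 80) -> bool:
    """Quantity: the reply should carry neither too little nor too much information."""
    n = len(reply.split())
    return min_words <= n <= max_words

def check_relation(prompt: str, reply: str) -> bool:
    """Relation: the reply should share at least one content word with the prompt."""
    stop = {"the", "a", "an", "is", "are", "of", "to", "and", "in", "what", "how"}
    p = {w.lower().strip("?.,!") for w in prompt.split()} - stop
    r = {w.lower().strip("?.,!") for w in reply.split()} - stop
    return bool(p & r)

def check_manner(reply: str, max_sentence_words: int = 25) -> bool:
    """Manner: prefer short, clear sentences over long, tangled ones."""
    sentences = [s for s in reply.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    return all(len(s.split()) <= max_sentence_words for s in sentences)

prompt = "What time does the bank open?"
reply = "The bank opens at nine in the morning on weekdays."
print(check_quantity(reply), check_relation(prompt, reply), check_manner(reply))
```

A real system would need semantic rather than lexical checks, but even this sketch shows why the maxims lend themselves to being encoded as design constraints for AI dialogue.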
Other idiosyncrasies of human language include the fact that not all common statements make logical sense. Implicatures are statements whose intended meaning goes beyond what is literally said. They are often temporally and culturally specific, and to optimize AI communication it is important that AI can incorporate this kind of common-sense knowledge (Westerman et al., 2020). Because implicatures vary from place to place, change over time, and are not self-explanatory, they pose an obstacle for AI. These theories focus on the structure and content of communication, but communication can also be studied on a more relational level.
Three models of computer-mediated communication applicable to AI-human interaction are the Social Identity/Deindividuation Effect (SIDE) Model, Social Information Processing Theory (SIPT), and the Hyperpersonal Model. The SIDE model shows how the strength of social identity can cause people to interact positively or negatively with each other depending on the level of group membership and identification; when members are highly aware of group membership, individualized characteristics recede. This extends to human-robot communication: in some studies, telling participants that they would interact with a robot lowered their expectations of social presence and liking, but when the robot was more human-like, these effects disappeared (Westerman et al., 2020). The robot's appearance and the statements identifying it seem to have served as cues to whether the robot was part of the group. When the robot was seen as part of the group (human-like), it was liked more than when its outsider status was made salient.
Social Information Processing Theory (SIPT) may also apply to human-computer communication. The theory posits that when communicators share goals and motivations, they can find ways to adapt through various forms of communication; for example, many people have found ways to verbalize nonverbal cues when using text-only communication. This theory can be applied to AI chatbots, where the primary affordances of communication are text-based. While communication between human and computer may take longer to develop, relationships and communication can adapt to the limitations and affordances of the machine (Westerman et al., 2020). This may mean that the difficulties in communication between AI and humans diminish as humans get better at prompting AI. Some of the most popular text-based chatbots are those designed by OpenAI; the chatbot ChatGPT has recently become very popular due to its impressive predictive language model.
Finally, the hyperpersonal model is based on the premise that communicators can select the information they share with others. This selection allows communicators to deliberately control how they present themselves, creating a relationship that is hyperpersonal because it is shaped to fit the listener and the communicator's intentions. This applies to AI, which can store a human's interaction history and construct responses based on that history, allowing a deep level of personalization. AI could potentially incorporate other types of information, such as biological data from the human, to optimize personalization (Westerman et al., 2020). Like SIPT, the hyperpersonal model proposes that relationships and communication can improve over time, but this theory is more about the AI tailoring itself to the individual. For example, if the AI had access to biological data, it might be able to adapt to the human's emotional and physical state.
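The history-based personalization described above can be sketched in a few lines. This is a hypothetical toy, assuming only that an agent keeps a per-user record of topics and shapes its greeting to the most frequent one; the class and method names are invented for illustration.

```python
from collections import Counter

class PersonalizedAgent:
    """Toy sketch of hyperpersonal-style adaptation: the agent stores each
    user's interaction history and tailors its opening to the topics that
    user raises most often. Illustrative only, not a real API."""

    def __init__(self):
        self.history: dict[str, Counter] = {}  # user -> topic frequencies

    def record(self, user: str, topic: str) -> None:
        """Log one topic the user brought up."""
        self.history.setdefault(user, Counter())[topic] += 1

    def greet(self, user: str) -> str:
        """Generic greeting for strangers; tailored greeting for known users."""
        topics = self.history.get(user)
        if not topics:
            return "Hello! What would you like to talk about?"
        favorite, _ = topics.most_common(1)[0]
        return f"Welcome back! Shall we pick up where we left off on {favorite}?"

agent = PersonalizedAgent()
agent.record("dana", "gardening")
agent.record("dana", "gardening")
agent.record("dana", "cooking")
print(agent.greet("dana"))  # tailored to the user's most frequent topic
print(agent.greet("sam"))   # no history yet, so the greeting stays generic
```

A production system would of course draw on far richer history, but the design choice is the same: the relationship improves over time because the machine's side of the dialogue is shaped by what it has learned about the individual.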
Research in human-machine communication may also lead to a better understanding of humanity. Understanding the processes humans use to objectify or humanize machines has large ethical and practical value for how humans treat one another, since many humans objectify and depersonalize other people, causing great harm. HMC may also inform the study of human communication. While the mechanical, scripted language that robots use is often contrasted with the interpersonal communication of humans, this line is not always clear: in many scenarios, such as meeting new people, human scripts are highly rigid and robotic, and humans often apply black-and-white stereotypes in first impressions (Westerman et al., 2020). Furthering the study of communication between humans and machines, and applying common theories of communication to it, will yield insights into the trajectory of humanity's relationship with technology. It will help guide the emergence of AI, and it may provide a new perspective on human behavior. Finally, supporting theories such as CASA allow human-human communication and HMC to be mutually informing areas of research.