When you first met your best friend, you probably told them a lot about yourself. Your name, where you're from, what your parents do for a living, if you have any siblings or pets, and so on. This kind of self-disclosure has many emotional, psychological, and relational benefits (Ho et al., 2018).
Disclosing personal information to another person can ease the discloser's emotional experience and reduce the stress that comes from bad experiences (Ho et al., 2018). Telling someone about a negative experience makes it feel easier to cope with. This effect is amplified if the discloser is met with support and validation, and it can make the people involved feel closer and more intimate. Lastly, disclosure improves "psychological outcomes deeply rooted in individuals' self-image" (Ho et al., 2018).
Perceived Identity: This term refers to who or what someone believes their conversation partner is. In this context, it means whether someone believes their conversation partner is a human or a computer.
Perceived Understanding Framework: This model states that feeling like your conversation partner really understands you on a deep, intimate level is important for conversations involving disclosure. When you feel like your partner really understands you as a person, this creates a stronger sense of social belonging. It even "activates areas of the brain associated with connection and reward" (Ho et al., 2018). This model essentially says that the benefits of self-disclosure are dependent upon the discloser feeling understood.
Perceived Understanding Hypothesis: States that because computers are inherently incapable of truly "getting" you, the benefits you receive when disclosing information will be greater if you speak to a person than if you speak to a computer.
Disclosure Processing Framework: This framework states that sharing information that was previously kept secret can help decrease negative thoughts and feelings, and encourages the discloser to look at the information in a new, more positive light (Pennebaker, 1993). Because a computer cannot make moral judgments about someone, people are more likely to disclose more information, and that information is more likely to be intimate or personal. This model "emphasizes the advantages that non-human partners may provide compared to human partners" (Ho et al., 2018).
Disclosure Processing Hypothesis: Because a computer cannot form negative emotions or impressions about you, the benefits you receive when disclosing information will be greater if you speak to a computer than if you speak to a person.
Computers As Social Actors (CASA) Framework: According to the CASA framework, people tend to interact with computers in very similar ways as they interact with humans. This includes the deployment of social norms, language, politeness, and speech patterns. This happens without us actively trying; we don't have to think about being polite to the chatbot, we just are, because that's what we behave like with humans. This occurs regardless of whether or not the conversation partner is actually a computer, meaning that even if we just think the thing we're talking to is a computer, we talk to it like we talk to anyone else.
Equivalence Hypothesis: The benefits you receive when disclosing information will be the same whether you speak to a computer OR a human.
What Are The Effects Of Self-Disclosure?
In order to examine the effects of disclosing personal information to an AI, we must first know the effects of self-disclosure to another person. According to Ho et al. (2018), telling others information about yourself has many psychological benefits, like reducing the stress that comes from bad experiences (Martins et al., 2013). Self-disclosure may increase negative mood briefly, but this leads to improvement later (Kelley, Lumley, & Leisen, 1997). When you tell someone something and are met with support and kindness, this improves your relationship with them as well (Altman & Taylor, 1973; Sprecher, Treger, & Wondra, 2013).
Watch this short video for more information about self-disclosure and its benefits.
How Does AI Affect This?
For starters, the perceived identity (defined above) of the person or thing you're speaking to does matter. According to Ho et al. (2018), merely believing your conversation partner is a computer has "profound effects, even when actual identity does not." In other words, even if you are actually talking to a human, the belief that you are speaking to an AI strongly shapes how you disclose.
People will often avoid disclosing information if they believe it will be met with judgment, rejection, or other negative consequences. Because an AI cannot make judgments on its own, people might be more likely to disclose personal information to a chatbot. However, people also tend to believe that chatbots are not as good at emotional tasks as humans are, so this may lead to people not disclosing personal information to a chatbot (Ho et al., 2018).
You may be wondering, "That's cool and all, but can this be tested?" The answer is yes! It can and has been tested. Let's discuss what was found.
Annabell Ho, Jeff Hancock, & Adam S. Miner's Experiment (2018)
Researchers Ho, Hancock, and Miner (2018) (the same Miner cited on the Mental Healthcare tab) from Stanford University in California conducted an experimental study on the effects of self-disclosure to chatbots vs. humans. More specifically, they studied "the effects of partner identity (chatbot vs. human) on disclosure outcomes to see which of the three hypotheses is best supported" (Ho et al., 2018). To refresh your memory on the three hypotheses, refer to the "important terms & definitions" section above.
In this study, Ho, Hancock, and Miner (2018) deployed a method called the "Wizard of Oz" (WoZ) method. Some of the participants were told they would be talking to a chatbot, and the others were told they would be talking to a human. In reality, all participants actually talked to a human; there was no chatbot. Half of the participants (49) were instructed to have an emotional conversation with their partner, and the other half were told to have a factual, objective conversation. In other words, some participants had emotional conversations where they believed their partner to be human, some had emotional conversations where they believed their partner to be a chatbot, some had factual conversations where they believed their partner to be human, and some had factual conversations where they believed their partner to be a chatbot. This is where the notion of "perceived identity" comes in.
How Did They Do It?
In order to test the effects of self-disclosure to an AI, the three researchers set up an experiment. The experimental design is important because it allows the results to show causality, or that one thing definitely caused another. They randomly assigned the participants to either the "chatbot" or "person" condition. Keep in mind that all participants actually spoke to a human; the only difference was the perceived identity. Participants were then randomly assigned to have either an emotional conversation or a factual conversation. After that, they spoke to their conversation partner for 25 minutes online. The person they spoke to did not know whether the participant believed they were speaking to a human or a bot.
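To picture the design, here is a minimal Python sketch of how participants could be randomly split across the four combinations of perceived partner and conversation type. This is purely illustrative; the function names, the balancing scheme, and the numbers are assumptions made up for this example, not the researchers' actual procedure or code.

```python
import random

# Purely illustrative sketch of the 2x2 random assignment described above
# (perceived partner x conversation type). The names and balancing scheme
# are assumptions for this example, not the study's actual procedure.
CELLS = [
    ("chatbot", "emotional"),
    ("chatbot", "factual"),
    ("human", "emotional"),
    ("human", "factual"),
]

def assign_conditions(participant_ids, seed=0):
    """Assign participants to the four cells in roughly equal numbers."""
    participant_ids = list(participant_ids)
    rng = random.Random(seed)
    # Repeat the four cells until there are enough slots, then shuffle.
    slots = (CELLS * (len(participant_ids) // len(CELLS) + 1))[: len(participant_ids)]
    rng.shuffle(slots)
    return dict(zip(participant_ids, slots))

# Example: eight made-up participant IDs
print(assign_conditions(range(1, 9)))
```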
During the emotional conversations, the confederate (the person the participants talked to) responded with supportive, validating words and prompted the participant to tell them more. During the factual conversations, the confederate responded with positive interest and asked follow-up questions ("Great! What did you do after that?"). After the participants talked to the confederate, they were given two questionnaires that they were told were unrelated to each other. These questionnaires contained the self-report items the researchers were interested in (Ho et al., 2018).
What Did They Measure?
Ho, Hancock, and Miner were interested in three outcome measures: relational measures, emotional measures, and psychological measures.
Relational Measures: Participants rated how warm and friendly their conversation partner was on a scale of 1-5, with 5 being "extremely" and 1 being "not at all". They also rated how much they liked their partner on a scale of 1-7, with 7 being "like a great deal" and 1 being "dislike a great deal". Lastly, they rated how much they enjoyed the conversation on a scale of 1-5, with 5 being "a great deal" and 1 being "not at all" (Ho et al., 2018).
Emotional Measures: Participants reported if they felt better after their conversation and if they felt more optimistic after the conversation. These were combined into an index the authors refer to as the "feeling better" index. Participants also reported if they felt "distressed, upset, guilty, and afraid" (Ho et al., 2018) after the conversation. These responses were also combined into an index the authors call the "negative mood" index.
Psychological Measures: The researchers also measured self-affirmation using a rating of "defensiveness towards health risk information" (Ho et al., 2018). They did this by presenting the participants (who were college students) with an article explaining how sleeping less than nine hours a day can cause cognitive impairments (lapses in memory, brain fog, etc.), especially in people their age. Higher levels of self-affirmation tend to relate to lower levels of defensiveness against threatening information. After reading the article, participants were asked how important they think it is for college students to get nine or more hours of sleep a night, how important they think it is for themselves, and how likely they are to change their sleep habits to reach that amount. These responses were combined into a "defensiveness" index.
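As a rough illustration of how self-report items can be combined into indices like the ones above (the "feeling better," "negative mood," and "defensiveness" indices), here is a small Python sketch. The item names, the 1-5 scale, and the simple averaging rule are assumptions for the example, not the scoring procedure reported in the paper.

```python
from statistics import mean

# Illustrative sketch of combining self-report items into an index by
# averaging. Item names and the 1-5 scale are made up for this example.
def build_index(responses, items, reverse_scored=(), scale_max=5):
    """Average a participant's ratings on the listed items into one score."""
    scores = []
    for item in items:
        rating = responses[item]
        if item in reverse_scored:
            # Flip items where a high rating means less of the construct.
            rating = (scale_max + 1) - rating
        scores.append(rating)
    return mean(scores)

# Made-up ratings on a 1-5 scale
responses = {"felt_better": 4, "more_optimistic": 5, "distressed": 2, "upset": 1}
feeling_better = build_index(responses, ["felt_better", "more_optimistic"])
negative_mood = build_index(responses, ["distressed", "upset"])
print(feeling_better, negative_mood)  # 4.5 1.5
```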
Ho, Hancock, and Miner also tested three processes believed to take place during disclosure and cause the benefits of self-disclosure.
Perceived Understanding: In order to reap the benefits of disclosing information, it is hypothesized that you must believe your partner understands you. The researchers used two questions from the Working Alliance Inventory (Horvath & Greenberg, 1989) to reflect this. The responses from these reports were combined into an index that we'll call the "perceived understanding index."
Disclosure Intimacy: Refers to how personal the information you're disclosing is. This was coded into 3 levels:
Low- This would be an objective fact, like "I walked to school today." There are no feelings or emotions attached to this.
Medium- Includes thoughts, feelings, beliefs, or attitudes. An example could be something like, "I feel like I have no control."
High- Statements made up entirely of thoughts and feelings, such as "I am angry."
Cognitive Reappraisal: This is the name for the process of thinking about a negative event or time in a more positive way. To measure this, the researchers used LIWC, or Linguistic Inquiry and Word Count, a text-analysis tool that has been used and validated in many studies. Using LIWC, the researchers counted each time a word fell into one of three categories: causal words (explanations about why something may have happened), insight-related words (ideas about how to handle something), and positive emotion words (seeing the silver lining).
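LIWC itself is proprietary, professionally validated software, so the sketch below is only a toy illustration of the general idea behind this kind of measure: count how many words in a response fall into each category. The tiny word lists are invented for the example and are not LIWC's actual dictionaries.

```python
# Toy illustration of dictionary-based word counting in the spirit of LIWC.
# These word lists are made up for the example; LIWC's real dictionaries
# are far larger and validated.
CATEGORIES = {
    "causal": {"because", "since", "reason", "why"},
    "insight": {"realize", "understand", "think", "know"},
    "positive_emotion": {"glad", "happy", "relieved", "hopeful"},
}

def count_categories(text):
    """Count how many words in the text fall into each category."""
    counts = {name: 0 for name in CATEGORIES}
    for raw_word in text.lower().split():
        word = raw_word.strip(".,!?\"'")
        for name, vocabulary in CATEGORIES.items():
            if word in vocabulary:
                counts[name] += 1
    return counts

print(count_categories(
    "I was upset, but I realize it happened because I was tired, and now I feel relieved."
))
```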
What Did They Find?
In the emotional conditions, participants reported revealing more emotion and disclosing more intimate information than in the factual conditions (for both the human and chatbot conditions). Participants also tended to use simpler and clearer language when they believed they were speaking to a chatbot than when they believed they were speaking to a human.
Participants who talked to a "chatbot" experienced just as many relational, emotional, and psychological gains as those who spoke to a human. These gains came mostly from the emotional conversations, which is consistent with previous research. In other words, there was a "significant effect of type of disclosure, but no effect of partner" (Ho et al., 2018)! This held across all three outcome measures (relational, emotional, and psychological) and all three process measures (perceived understanding, disclosure intimacy, and cognitive reappraisal).
No matter what type of conversation partner, participants reported "improved, immediate emotional experiences after emotional disclosure compared to factual disclosure" (Ho et al., 2018) and that they felt much better after emotional conversations as compared to factual ones. There was no effect of perceived partner, meaning these results support the equivalence hypothesis!
In both the human and chatbot conditions, ratings of relationship quality with the conversation partner improved after having an emotional conversation versus a factual one. Participants also reported that their partner seemed warmer and friendlier, and that they enjoyed the conversation more after having an emotional conversation. These results also support the equivalence hypothesis.
Participants' defensiveness towards threatening health information did not differ between the two partner conditions. However, participants in the factual condition were more defensive about changing their sleep habits than those in the emotional condition, meaning they experienced less self-affirmation. Perceived partner type made no difference, also supporting the equivalence hypothesis.
All participants experienced similar benefits of self-disclosure regardless of perceived partner or whether the conversation was emotional or factual, which provides even more support for the equivalence hypothesis. Across the whole sample, the researchers found that it was the disclosure processes that were most strongly correlated with disclosure benefits. They write, "the more participants engaged in these processes, the more beneficial their outcomes were" (Ho et al., 2018).
Participants reported feeling "more understood" (Ho et al., 2018) when they disclosed emotions rather than facts, but there was no noteworthy impact of the partner. On the same note, when it came to intimate disclosure and using words that reflected cognitive reappraisal, the participants were more inclined to do so after emotional disclosure rather than factual disclosure, and again, there was no influence of the perceived partner or any interaction.
Lastly, the researchers found that the "correlations between the processes and the outcomes were similar, with no significant differences in strength, between partner conditions" (Ho et al., 2018). In other words, people engaged in the processes that lead to beneficial outcomes just as much with a partner they believed to be a chatbot as with a partner they believed to be a human. This supports the equivalence hypothesis.
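To make that last comparison concrete, here is a hypothetical sketch of computing the correlation between one process measure and one outcome measure separately for each partner condition. The numbers are invented purely for illustration; the study's real data and statistics are in the paper.

```python
from statistics import correlation  # available in Python 3.10+

# Invented numbers, just to show the shape of the comparison: correlate a
# process measure with an outcome measure separately in each partner condition.
perceived_understanding = {
    "chatbot": [3.5, 4.0, 2.5, 4.5, 3.0],
    "human":   [3.0, 4.5, 2.0, 4.0, 3.5],
}
feeling_better = {
    "chatbot": [3.0, 4.0, 2.0, 5.0, 3.0],
    "human":   [3.0, 5.0, 2.0, 4.0, 4.0],
}

for condition in ("chatbot", "human"):
    r = correlation(perceived_understanding[condition], feeling_better[condition])
    print(f"{condition}: r = {r:.2f}")
```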
What Does All This Mean?
What this means is that the field of psychology is changing rapidly with the times we live in. With all the technology we have available to us, and the fact that that tech is changing and improving by the day, psychology will have to do the same. Obviously, we can't change an entire science in a day, but we don't necessarily need to. Psychology is a flexible field. If we can use AI to help people, and get results that are incredibly similar to what we get with human clinicians now, we can expand psychology.
This new crossover of technology and psychology is called "cyber psychology" or "psyber psychology." Since so much of our lives is lived online, knowing how the Internet changes the way we think and behave is crucial. As AI improves and we find more and more uses for it, we take one more step into the future. Maybe one day we won't even need to go to the doctor; we could just call up an AI and it could tell us what's wrong and how to fix it. That day is far, far ahead of us, but perhaps it is not as impossible as we once thought.
Click the button below to try out a chatbot for yourself!
Now you (hopefully) know more about AI and its many uses than you did before. AI in the workplace is taking off rapidly, and it may be in your workplace before you know it. The good news is that it probably can't do your job for you, so you don't need to worry about being replaced by a robot; instead, AI can actually make your life easier at work. It can assist you in making decisions, provide a good outlet for self-disclosure and venting, make your work feel more meaningful, and, if you work or plan on working in the mental health care field, it can cut down on burnout and compassion fatigue. AI may seem scary, but really, it isn't; it's another tool that we can use to improve our lives and the lives of those around us.