The Intelligent Friend - The newsletter about human-AI relationships, based only on scientific papers.
Intro
Sharing our most sensitive facts with an AI can feel strange, even scary. Yet there are cases where we prefer a chatbot to a human. Although research on people's willingness to share information with AI has produced conflicting results, today's study provides substantial evidence on what leads us to discuss more "sensitive" information with this technology.
The paper in a nutshell
Title: Do You Mind if I Ask You a Personal Question? How AI Service Agents Alter Consumer Self-Disclosure. Authors: Kim et al. Year: 2022. Journal: Journal of Service Research.
Main result: the common belief that artificial intelligence (AI) has limited capacity for social judgment leads consumers to disclose sensitive personal information more openly to AI than to humans - but this effect is limited, and sometimes reversed, in several situations.
In a service context - in this case a medical service - the study tries to understand which conditions influence users' disclosure to AI. And the motivation is much more "human" than one might imagine.
AI and self-disclosure
First, we need to agree on what we mean by self-disclosure. In this study, it refers to the act of revealing personal information, sharing facets of one's self with others [1].
Although there is no total agreement among scholars, the authors report the main factors that, according to previous research, can influence our disclosure to AI versus humans or, even, to anthropomorphized versus machinelike AI agents. These factors include privacy concerns [2] and reciprocity [3], among others, as well as various moderators.
We don't want to be misjudged
How many times have you felt truly embarrassed to share certain information with your doctor? Perhaps about something you forgot to do or are not proud of. The authors of this study use a slightly different lens to investigate disclosure to AI, focusing on social judgment.
In many service contexts (medical visits and so on), sharing sensitive information is in fact very often necessary. However, we are sometimes afraid of being judged for what we say, a worry associated with negative emotions such as shame or guilt. With that clear, let's take it one step further.
Why are we afraid of the judgment of the people we interact with when we share sensitive information? Because, by nature, people judge. Machines do not.
Or at least, that is the widespread belief. Not being worried about social judgment, people should therefore be more willing to share sensitive - and sometimes embarrassing - information with AI [4][5].
At the same time, if social judgment is really what concerns us when sharing sensitive information, then for non-sensitive information the difference between disclosing to humans and to AI should be small.
This is precisely what the authors demonstrate. The study reveals that sharing of sensitive personal information is higher when individuals interact with an AI agent rather than a human agent, a pattern not observed with non-sensitive information. This supports the claim that fear of negative social judgment is indeed what motivates people to share more sensitive information with AI.
Emotions and disclosure
There are situations in which we must share sensitive information and fear the judgment of others, but there are others in which it is also important to feel supported - after a loss, during a difficult phase of life, and so on. And although a machine can answer many of our questions, we will hardly feel truly supported by AI.
The authors then examined how we disclose in moments tinged with sadness, an emotion for which the fear of social judgment weighs less than it does for guilt. In those instances, disclosing to an AI was no different from opening up to a human.
However, compared with guilt-laden information, people leaned more towards humans when sharing sadness-laden information. This divergence stems from the contrast between the two emotions: guilt makes us clam up for fear of being judged, while sadness seeks the solace of empathy, nudging us towards the warmth of human connection.
What happens when AI and humans look more and more alike?
Some participants received instructions about an interaction with the famous Replika chatbot (we already talked about it in a previous issue; go have a look later if you want!). The results were very interesting. While we normally do not attribute to AI enough judgmental ability to feel subject to social pressure in front of it, when AI and humans become more similar the situation changes. That is, the characteristics of the context - such as an AI that is more empathetic and physically similar to a human - strongly affect the result.
Furthermore, in the final study the authors show that when it is critical that inappropriate information be filtered out rather than disseminated, behavior changes: more information is shared with humans than with AI.
Where are we with empathetic AI?
In the paper, the authors state:
Previous AI research in the service literature proposed that “empathetic” AI (Huang and Rust 2018) or “emotional-social” AI (Wirtz et al. 2018) could emerge in the near future. However, what current consumers believe about AI (i.e., the lay beliefs) may or may not be consistent with current AI technologies.
The study was first published in August 2022. It must be said that, although consumers probably still do not perceive empathy as AI's defining characteristic (it would be interesting to research this), giant leaps have been made since then.
Currently, chatbots like Pi, developed by Inflection AI, are among the leading examples of GenAI that build their competitiveness on being empathetic and adapting to the user's state. And Pi knows it. While writing this issue, I asked Pi about its main differences from other chatbots, and this was the answer:
[Screenshot: Pi's answer]
This specific focus on emotional and empathetic interaction capabilities - and on AI and GenAI more generally - is certainly receiving plenty of attention and investment and is developing quickly. For this reason, it will be all the more interesting to bring numbers on the interactions and relationships between us and AI/GenAI.
P.S. If you would like to read an issue in which I take stock of the differences in emotional capabilities among the main chatbots, respond to this survey and express your opinion! I'd like to know what you think!
Take-aways
Enhanced Personal Disclosure to AI: consumers are more inclined to share sensitive personal information with AI than with humans when they fear social judgment, driven by the perception that AI lacks the capacity to judge.
Boundaries of Disclosure: the willingness to disclose to AI over humans is not universal. Disclosure shifts in favor of humans in scenarios where social support is sought or when there is a risk of sensitive information being socially disseminated.
Influence of Humanlike Features on AI: Adding humanlike characteristics to AI can paradoxically increase fears of social judgment, thereby reducing the willingness to disclose in situations perceived as socially risky. Yet, these features can also enhance the perceived capacity of AI for empathy, increasing disclosure where social support is needed.
Further research directions
Examine how individuals' sharing of personal information with AI versus human agents varies across real-world settings and a spectrum of service domains.
Explore a range of strategies to enhance or minimize AI's display of emotional understanding, including the role of visual elements, names, personas, and other anthropomorphic features.
Investigate a broader array of AI strengths, particularly those that outperform human capabilities, and their impact on consumers' willingness to share personal information.
Thank you for reading this issue of The Intelligent Friend and/or for subscribing. The relationship between humans and AI is a crucial topic, and I am very happy to be able to talk about it with you as a reader.
Did a friend send you this newsletter, or are you not subscribed yet? You can subscribe here.
Surprise someone who deserves a gift or who you think would be interested in this newsletter. Share this post with a friend or colleague.
P.S. If you haven't already done so, in this questionnaire you can tell me a little about yourself and the wonderful things you do!
[1] Cozby, Paul C. (1973), "Self-Disclosure: A Literature Review," Psychological Bulletin, 79 (2), 73-91.
[2] Lutz, Christoph, and Tamó-Larrieux, Aurelia (2020), "The Robot Privacy Paradox: Understanding How Privacy Concerns Shape Intentions to Use Social Robots," Human-Machine Communication, 1 (1), 87-111.
[3] Nass, Clifford, and Moon, Youngme (2000), "Machines and Mindlessness: Social Responses to Computers," Journal of Social Issues, 56 (1), 81-103.
[4] Kim, Tae Woo, and Duhachek, Adam (2020), "Artificial Intelligence and Persuasion: A Construal-Level Account," Psychological Science, 31 (4), 363-380.
[5] Longoni, Chiara, Bonezzi, Andrea, and Morewedge, Carey K. (2019), "Resistance to Medical Artificial Intelligence," Journal of Consumer Research, 46 (4), 629-650.
Cover credits: New York Times