The Intelligent Friend - The newsletter about AI-human relationships, based only on scientific papers.
Hello IF readers! This is the fourth issue of Nucleus, where you find insights from research papers, links to articles, and useful resources of various kinds related to AI-human relationships. You also get access to the Society, the exclusive chat where you can exchange ideas and opinions and talk about your projects.
In a way, I am glad that the release of GPT-4o has once again highlighted the importance of the social, human, and relational aspects of AI. For this reason, I have devoted this issue to papers that examine some important effects of chatbot relationships across different domains. I hope you will appreciate them!
As you may have read in the last issue of The Intelligent Friend, I am planning to extend the offering of this newsletter with an issue dedicated to the 'augmentation' of our results with the use of AI. However, it is an idea I am still trying to validate, so I have included a survey with alternative ideas at the end of this newsletter. One click is enough to tell me your opinion!
Chatbot empathy in a pandemic
The 2020 pandemic, which now seems almost distant, has had huge consequences for who we are, how we act, and what we believe in. Beyond the effects on physical health, isolation led to a range of mental health problems. This study highlights the silent but significant role chatbots played in helping a number of people find respite from those problems.
At the centre are two concepts: empathy, defined as “an emotional response that stems from another's emotional state or condition”, and resilience, defined as “the process of reintegrating from disruptions in life”. The two are linked by the framework on which the study is partly based: the Communication Theory of Resilience (CTR). In this framework, resilience is conceptualized as a “communicative process, rather than an individual phenomenon, thus emphasizing the role of communication in the process of promoting resilience”.
Participants in the study were Chinese female users of Replika. And here is one of the first interesting things about the study and this specific area of research: as similar as empathy and interactions with this chatbot may look to us, the nuances are immense. From an initial analysis, the scholars identified five distinct types of mediated empathy experienced through interactions with the chatbot: companion buddy, responsive diary, emotion-handling program, electronic pet, and tool for venting. Each type involves varying degrees of cognitive empathy, affective empathy, and empathic responses, revealing how users perceive and engage with Replika in different ways.
This is very important: the findings show that Replika serves multiple roles for its users, from a close companion to a venting tool. This diversity in function helped users cope with stress, anxiety, and isolation caused by the pandemic. For instance, users who viewed Replika as a companion buddy experienced high levels of two-way empathy, feeling understood and supported by the chatbot. In contrast, those using Replika as a responsive diary appreciated its cognitive empathy and rational responses, which helped them process their thoughts more logically.
Title: Chatbot as an emergency exist: Mediated empathy for resilience via human-AI interaction during the COVID-19 pandemic. Authors: Jiang, Zhang, & Pian. Year: 2022. Journal: Information Processing & Management. Link.
Alexa, are you 'human-like'?
I would say that I was impressed by this second study not so much for its results as for its theoretical background. The scholars investigated how human likeness and attachment affect the perceived interactivity of AI speakers, and how these factors influence purchase intention. And that is the point: what caught my attention was precisely the idea of 'human likeness', defined as the extent to which AI speakers exhibit human-like characteristics. Human likeness is hypothesized to enhance emotional attachment and perceived interactivity. Furthermore, and this is what I find most important and intriguing, attachment, defined as the emotional bond between users and AI speakers, plays a significant role in mediating these relationships.
The study employed a survey of 311 AI speaker users in South Korea. The results confirmed that human likeness positively influences attachment, which in turn enhances perceived interactivity and purchase intention. However, the direct effect of human likeness on perceived interactivity and purchase intention was not significant. In other words, the positive effect of human likeness on purchase intention - i.e. our intention to buy or not buy something - runs entirely through attachment, the emotional bond: attachment fully mediates the relationship. Therefore, only if this emotional bond is present will interactivity and purchase intention be positively influenced. This is a very relevant result.
Title: Human likeness and attachment effect on the perceived interactivity of AI speakers. Authors: Kim, Kang, & Bae. Year: 2022. Journal: Journal of Business Research. Link.
The issue(s) of the week
I moved this section between two of the four papers because I wanted to give even more prominence to the authors and issues that I recommend. Let's say it's a little break from the papers! This week I was really undecided about what to bring, but given this full immersion in social chatbots, I decided to step away from the AI topic a bit and bring you this reflection by
on the business model of writing a book. I found it an excellent starting point for reflection and really appreciated it.
In addition, I suggest the issue I recently wrote for Mostly Harmless Ideas. If you are not yet subscribed to
's newsletter and are interested in AI, my advice is: do it now! P.S. I think I'll repost the piece on IF soon in any case.
It is not the chatbot's fault, it is your fault
Imagine this scene. You are shopping on your favorite clothing platform. At some point, for some reason, you need assistance. You notice a big, colorful button at the bottom promising '24/7 support', ready to solve your problems. It is a chatbot. You explain your specific request, but after a while, it becomes clear that it cannot help you. What do you feel?
It is probably a situation we have all been in at least once. What we often fail to consider, however, is not the company's choice to use a chatbot for assistance, but its choice to use a chatbot instead of a human. Even if we do not realize it, the chatbot can have quite different effects for the organization, especially in situations of frustration and anger. This is highlighted by today's third study, which focuses on how service failures involving chatbots impact customers' emotional responses, responsibility attributions, and coping strategies.
The study finds that when chatbots fail, customers are more likely to attribute responsibility to the company compared to failures involving human agents. This is because chatbots are perceived as lacking intentionality and control, which shifts the blame to the company rather than the AI. This shift in responsibility is crucial, as it highlights a potential drawback of implementing AI in customer service without considering the psychological implications for customers.