The Intelligent Friend - the newsletter about human-AI relationships, based only on scientific papers.
I wrote today's post as a guest post for a fellow author's newsletter. I'll be honest: it was an honor to write this piece for his newsletter. He has always been an inspiration to me with his work, and I am thrilled to have contributed to his publication. Thanks again!

In today's issue, we talk about a very interesting paper which, in my opinion, may well have inspired several subsequent works, as it categorizes the different ways in which we perceive our relationship with AI. Enjoy the read!
Alexa, will you marry me?
If you have Alexa, consider the last time you 'talked to her'. Don't just think about what you asked for, but above all how you asked it. Were you kind? Were you harsh? Or did you pay her no attention at all? Even if we don't realize it, when we interact with devices like Alexa or Google Home, and increasingly with chatbots like ChatGPT-4o, Gemini, or Claude, we think of them in particular ways and relate to them differently than we relate to other people.
To give you a concrete idea, Amazon reported that half a million people have told Alexa they love her[1], and a good portion of those even said they would marry her[2]. Alexa, Google Home, and similar devices fall into the category of so-called 'voice-controlled smart assistants' (VCSAs), i.e. smart devices that enable humans to interact and request tasks or services through verbal communication. It is precisely on these devices, and on how we interact with them, that the researchers of the paper I want to tell you about today focus. In their study, they tried to understand how people - therefore consumers - perceive their relationships with these 'housemates', in a study that opened many subsequent avenues of exploration for scholars.
In this regard, before delving into the study, two important things must be specified:
Although the study focuses specifically on VCSAs, I believe these insights are also very relevant to interactions with other forms of AI and GenAI, especially given the recent developments of ChatGPT-4o and voice-based interactions. It is no coincidence that several later studies have applied similar taxonomies, for example, to relationships with robots;
The second point I would like to underline is that, although it is interesting in itself to know how we relate to these tools, the results have concrete impacts on us as consumers, and are therefore valuable for anyone who works with AI in general or wants to implement AI-based solutions in their company or organization.
With that said, let's dive a little deeper into the study.
The paper in a nutshell
Title: Servant, friend or master? The relationships users build with voice-controlled smart devices. Authors: Schweitzer et al. Year: 2019. Journal: Journal of Marketing Management.
‘You're not just a machine’
We have learned that we form relationships with different AI-based devices and tools. The scholars, as anticipated, tried to answer a question: what are these relationships like? However, there is a missing piece. If it is true that by interacting we form relationships of different types, the first real question to focus on is: why do we form relationships at all? What mechanisms underlie this dynamic?
There are different approaches and perspectives from which to answer this question. The scholars behind today's paper, however, focus on the broad concept of anthropomorphism: essentially, the human tendency to perceive humanlike agents in nonhuman entities and events, such as seeing faces in clouds or attributing emotions to pets.
Anthropomorphism extends to many consumer products: for example, individuals often attribute human characteristics to gadgets, like wall clocks. This dynamic has already produced several important results in terms of effects on consumers: evidence shows that anthropomorphized products can enhance consumer preference, make products appear more vivid[3], and increase their perceived value[4]. However, when we talk about this topic there is always one concept that comes back and must be taken into account: the Uncanny Valley, which describes how people react, positively or negatively, to how closely an AI resembles a human.
To illustrate this, let's consider a practical example. Imagine interacting with a self-checkout machine at the supermarket to pay for your groceries. Now imagine it has a square shape and a small screen. Nothing special, right? Next, imagine that it increasingly takes on the form of a human, like that of an interacting cashier. However similar it is to a cashier, you still perceive that it is a machine. So, despite the 'human touch', your perception turns negative, because you sense something human but not 'really' human.

This feeling of 'awareness' grows until it brings you to a point of discomfort. That point (put very briefly) is the so-called 'Uncanny Valley', a concept coined in 1970[5][6] to describe the effects of an increasing degree of anthropomorphism in a technology on user perceptions. Now think of other similar technologies: your smartphone, your watch, your 'smart' car. Our perceptions of their functionality and efficacy are influenced by how similar these technologies and products are to humans while maintaining a technological core. Summing up, the Uncanny Valley clearly shows how different degrees of anthropomorphism can change our feelings and attitudes toward technologies and AI assistants.
Anthropomorphism is therefore one of the first dynamics that comes into play when we talk about building a relationship.
An extension of yourself
A second fundamental paradigm that the scholars use as a basis for this study is the so-called 'self-extension theory', which is very important in marketing. Basically (and simplifying), the more influence a particularly valuable product has on you, the more you come to consider it an extension of yourself.
Belk (1988)[7] posits precisely that consumers perceive close others, as well as certain tangible and intangible possessions, as extensions of their self. In particular, the research tells us that self-extension is especially likely when consumers have control over objects with relatively little autonomy. If this is true, it follows that consumers may relate to anthropomorphized products either as others or as extensions of their self[8].
This passage is very important because it underlines once again how relational perspectives - in this case, the basic mechanisms through which relationships are formed, according to the study's authors - influence our behavior as consumers and our interactions with technologies and brands. However, if you think about some of the products you have purchased in the last year, you will notice that not all of them are equally important to you. Maybe you are a Disney fan, so the Mickey Mouse t-shirt matters to you, while the checked sweater you received from an aunt is perhaps not one of the 'indispensable' possessions you would carefully preserve. From this everyday intuition we derive that products contribute to the extended self to varying degrees, with some being crucial and others less so.
Therefore, in the authors' perspective, the role our anthropomorphized VCSAs play as extensions of ourselves has a very strong impact on the formation of relationships with these devices. Now that we have established this, we can go deeper into the question: what type of relationships do we perceive with Alexa, Google Home, and the like (to which, I would suggest, we could add ChatGPT or Gemini)?
Partner, Servant, or Master?
In the paper we are exploring, participants interacted with different VCSAs (including Google Home and Siri), and their experiences were documented over three weeks. They were also interviewed extensively by the researchers to derive different insights. Furthermore, and I would like to stress this, the scholars report that 'during a post-experience interview, the participants were asked to describe the bond between themselves and their VCSA': the students (the participants) therefore also reported emotional aspects of their interactions with the VCSAs.
From the results of the study, three different types of perceived relationship emerge, which I find really interesting:
Servant
To illustrate them, let's think once again about a concrete case. Imagine all the times during the day you have asked Siri to add a reminder, set an alarm, or send a message or an email. Think carefully about that interaction. How did you feel? You likely said 'thank you' to Siri in an almost friendly way, and smiled slightly at the completed task, as if you had a real personal assistant at your side (a bit like in the film 'The Devil Wears Prada'). This is exactly the first type of relationship that emerges from the study: in particular, young consumers frequently envisioned their VCSA as 'a servant that helps consumers realize their tasks'.
From a more emotional point of view, they anthropomorphized the VCSA as a friendly, helpful, and reliable figure, akin to a professional secretary: polite, somewhat submissive, and always in the background (a bit like Anne Hathaway in the film I just mentioned). This perspective stemmed from the VCSA's nature of responding to commands without initiating actions, making it seem dependent on the user's instructions. However, what I found interesting, beyond the representation of this AI as a real personal assistant, is how participants perceived the way these interactions took place.
In fact, the authors report that these users found interacting with the VCSA straightforward and beneficial, appreciating its role in enhancing their capabilities and information searches. They viewed the VCSA as an empowering tool, extending their abilities and acting as a digital extension of themselves (and here comes the theme of self-extension).
Master
However, if a subset of participants saw Siri, Alexa, or others as a subordinate assistant, some perceived exactly the opposite. This once again underlines the importance of digging deeply into consumers' perceptions: understanding what they fear, what they value, and what pushes them away from an interaction, as well as taking into account the different traits and characteristics of the people you are observing or studying. A contrasting perspective was in fact held by those who viewed the VCSA as a master, with themselves as servants bound by its rules. These users described the relationship as a reversal of roles, with the VCSA being unpredictable and untrustworthy. In short, in this case we are the AI's secretaries: even though we often initiate the interactions, we submit to it and stand ready to help it in every possible way.
Above all, and I think this deserves particular attention, these users struggled to anthropomorphize the VCSA positively, sometimes comparing it to a 'mentally impaired old man'. Their interactions were often negative, marked by impatience and frustration, as they felt stuck in unproductive exchanges and failed to achieve their goals. According to the authors, these sensations can be traced back to the work of Hoffman and Novak (2018) and to the broader concern about losing control to automated systems, suggesting a need for user-friendly overrides to prevent such disempowering dynamics.
If you have ever imagined robots or autonomous machines taking control and acting independently of commands, and felt afraid of that, you now have a first clear scientific reference for this phenomenon.
Partner
Finally, there is the group that I think is the most interesting of all: the perception of the relationship with the VCSA as a real partner. The participants did not feel subordinate or superior to the AI assistant, but, rather, exactly on the same level. This anthropomorphization attributed a distinct personality to the VCSA, making it an appealing and congenial entity with its own 'life'.
These users invested time in nurturing the VCSA, finding amusement and affection in its occasional missteps. They valued the relationship-building aspect, striving to develop a positive rapport. However - and this is important to specify - initial excitement often turned to disappointment when the VCSA’s repetitive and unsuitable responses frustrated their efforts, leading to emotional reactions and a sense of wasted time.
Practical insights
This truly engaging study showed us that we build relationships with AI-based devices, entities, and tools through repeated interactions; that these relationships can be of different types depending on the individual; and that they can have a concrete effect on our attitudes, our actions and, ultimately, our choices as users and consumers.

It will therefore come as no surprise that the 'Managerial implications' section of this paper is particularly rich. Even from a first reading of what I have described, the insights are many and readily applicable:
Encourage partner-like interactions: Use speech acts and algorithms that promote the perception of VCSAs as partners. For instance, integrating proactive messages like "Hi there, you haven't talked to me for a while" can help VCSAs stay relevant and engage users more effectively (see the sketch after this list);
Balance imperfection for relaxed interactions: Design VCSAs to exhibit some level of imperfection. Users tend to be more relaxed and forgiving when VCSAs make occasional mistakes, and will keep using the device for other tasks even if it fails at specific ones;
Provide control and feedback mechanisms: Ensure that VCSAs include features that give users a sense of control and the ability to communicate successfully with their devices. This could involve implementing verbal equivalents of "close door" buttons to reinforce user control and confidence;
Tailor the user experience with emotional intelligence: Develop VCSAs not only to deliver accurate responses but also to create a better, individually tailored user experience, incorporating emotional intelligence to enhance user engagement and satisfaction.
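To make the first and third recommendations a bit more concrete, here is a minimal sketch in Python of how such behaviors could be wired up: a proactive re-engagement message after a period of inactivity, and a verbal 'stop' override that immediately hands control back to the user. The paper describes no implementation, so every name, phrase, and threshold below is a hypothetical assumption of mine.

```python
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical sketch of two recommendations from the paper: a proactive
# re-engagement message and a verbal "close door" override. All names and
# thresholds here are illustrative assumptions, not from the paper.
REENGAGE_AFTER = timedelta(days=7)               # assumed inactivity threshold
STOP_PHRASES = {"stop", "cancel", "never mind"}  # assumed override phrases


class AssistantSession:
    def __init__(self) -> None:
        self.last_interaction = datetime.now()

    def maybe_reengage(self) -> Optional[str]:
        """Return a proactive message if the user has been silent for a while."""
        if datetime.now() - self.last_interaction > REENGAGE_AFTER:
            return "Hi there, you haven't talked to me for a while."
        return None

    def handle_utterance(self, text: str) -> str:
        """Handle user input, honoring an immediate verbal override."""
        self.last_interaction = datetime.now()
        if text.strip().lower() in STOP_PHRASES:
            # The verbal equivalent of a 'close door' button: the user
            # always has a fast, reliable way to regain control.
            return "Okay, stopping now."
        return f"Working on it: {text}"


session = AssistantSession()
print(session.handle_utterance("set a timer for 10 minutes"))
print(session.handle_utterance("stop"))
```

The exact threshold and phrases matter far less than the design point: both the re-engagement trigger and the override are small, explicit pieces of state that a designer can tune and test.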
P.S. In my recommendations, I decided to include a small description of why you should subscribe to all the newsletters I recommend. If you like, take a look and discover more!
- by : a reliable newsletter, full of insights that will make you passionate about education and the application of AI;
- by : if you are a curious person, you will love this newsletter;
- by : culture and society and much more on a scientific basis;
- by : many interesting papers on education that will surprise you.
Thank you as always,
Riccardo
Thank you for reading this issue of The Intelligent Friend and/or for subscribing. The relationships between humans and AI are a crucial topic, and I am glad to be able to talk about it with you as a reader.
Has a friend of yours sent you this newsletter or are you not subscribed yet? You can subscribe here.
Surprise someone who deserves a gift or who you think would be interested in this newsletter. Share this post with your friend or colleague.
P.S. If you haven't already done so, in this questionnaire you can tell me a little about yourself and the wonderful things you do!
Risley, J. (2015, November 17). One year after Amazon introduced Echo, half a million people have told Alexa, ‘I love you’. GeekWire. https://www.geekwire.com/2015/one-year-after-amazon-introduced-echo-half-a-million-people-have-told-alexa-i-love-you/
Murdoch, C. (2016, October 28). Want to marry Amazon’s Alexa? You’re not alone. Vocativ. https://www.vocativ.com/371706/amazon-alexa-propose-marriage/index.html
Noble, C. H., Bing, M. N., & Bogoviyeva, E. (2013). The effects of brand metaphors as design innovation: A test of congruency hypotheses. Journal of Product Innovation Management, 30(S1), 126–141.
Hart, P. P. M., Jones, S. R. S., & Royne, M. B. M. (2013). The human lens: How anthropomorphic reasoning varies by product complexity and enhances personal value. Journal of Marketing Management, 29(1–2), 105–121.
Mori, M. (1970). Bukimi no tani [The uncanny valley]. Energy, 7, 33.
Mori, M., MacDorman, K. F., & Kageki, N. (2012). The uncanny valley [from the field]. IEEE Robotics & Automation Magazine, 19(2), 98–100.
Belk, R. W. (1988). Possessions and the extended self. Journal of Consumer Research, 15(2), 139–168.
Belk, R. W. (2014). Digital consumption and the extended self. Journal of Marketing Management, 30(11-12), 1101–1118.