The Intelligent Friend

Personality traits in LLMs

And the other findings of the week.

Riccardo Vocca
Jul 03, 2024 ∙ Paid

The Intelligent Friend - The newsletter about the psychological, social and relational aspects of AI, based only on scientific papers.


Hello IF readers! This is the ninth issue of Nucleus, where you find insights from research papers, links to articles and useful resources of various kinds related to the social, psychological and relational aspects of AI. Enjoy this issue!


Trust me, I’m a bot

One of my favorite topics: how revealing the nonhuman identity of chatbots affects customers. In this paper, the researchers ran two experiments examining the impact of chatbot disclosure under different levels of service criticality and across varying service outcomes.

As you know, I don't often go into the methodology, mostly for reasons of length or simplicity, but this time I found it really intriguing: participants imagined moving to a new apartment and contacting their energy provider via an online chat to re-register their electricity contract. Half of the participants were then informed at the end of the chat that their conversational partner was a chatbot.

The findings indicated that chatbot disclosure generally reduces trust, especially in high-criticality service scenarios. However, in situations where the chatbot failed to resolve the customer's issue, disclosure had a positive effect on retention, as it allowed customers to better cope with the failure by attributing it to the chatbot. This nuanced understanding challenges the predominantly negative view on chatbot disclosure, suggesting that its impact varies significantly based on the service context and outcome.

Title: Trust me, I'm a bot – repercussions of chatbot disclosure in different service frontline settings. Author(s): Mozafari, Weiger, Hammerschmidt. Year: 2022. Journal: Journal of Service Management. Link.


The personality of LLMs

There is a line of research that is increasingly trying to use LLMs to simulate or analyze different personality traits. One of the studies I read recently along these lines intrigued me, and I decided it was absolutely worth bringing here on Nucleus. It is called "Personality Traits in Large Language Models", and the authors investigated whether large language models (LLMs) can simulate human personality traits reliably and validly, and whether these traits can be intentionally shaped. Using structured prompting, the researchers administered personality tests, including the IPIP-NEO and the Big Five Inventory (BFI), to various LLMs to measure traits such as extraversion, agreeableness, conscientiousness, neuroticism and openness.
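
To make the prompting setup a bit more concrete, here is a minimal sketch of how one might administer questionnaire-style items to an LLM and score the answers. The model name, the API call, the persona prompt and the two example items are my own illustrative assumptions, not the authors' exact materials or code.

```python
# Illustrative sketch (assumptions, not the paper's code): give an LLM
# Likert-scale items and read back numeric ratings per trait.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Two invented BFI-style items; the real instruments (IPIP-NEO, BFI) have many more.
ITEMS = [
    ("extraversion", "I see myself as someone who is talkative."),
    ("neuroticism", "I see myself as someone who worries a lot."),
]

# A trait-shaping prompt, in the spirit of steering personality along a dimension.
PERSONA = "You are a person who is extremely outgoing and sociable."

def rate_item(statement: str) -> int:
    """Ask the model to rate one item on a 1-5 scale and return the number."""
    prompt = (
        f"{PERSONA}\n"
        f"Rate how much you agree with the statement below.\n"
        f"Statement: {statement}\n"
        "Answer with a single number from 1 (disagree strongly) to 5 (agree strongly)."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model would do here
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return int(response.choices[0].message.content.strip()[0])

if __name__ == "__main__":
    # Averaging ratings per trait would give a crude per-trait score, as a questionnaire does.
    for trait, statement in ITEMS:
        print(trait, rate_item(statement))
```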


The results showed that larger, instruction fine-tuned models produced stronger evidence of reliable and valid personality trait measurements, meaning they can reflect human personality traits in their outputs with reasonable accuracy. Furthermore, the study found that personality traits in LLMs could be shaped along desired dimensions, influencing their behavior in subsequent tasks such as generating social media posts.

Title: Personality Traits in Large Language Models. Author(s): Serapio-Garcia et al. Year: 2024. Journal: / (preprint). Link.


The issue(s) of the week

A heartfelt issue from Marie Vandoorne about the adventures of anyone trying to create something online. To read.

The Bored Millennial: "The truth about building an online business: it's f*cking hard"

I always read Pranath's newsletter, The FuturAI, and enjoy listening to his podcasts. In this recent issue I had the honor of being the interviewee! We talked about many things, from using AI to find ideas for your newsletter to how to overcome initial skepticism about these tools.

The FuturAI: "Dispatch: Riccardo Vocca Podcast, AI Transforming Education for Kids"

Lastly, I cannot help but recommend what is, in my opinion, the most fun Intellibox simulation I have created: you can become President of the United States, implement policies, and then even receive evaluations on different aspects of political life, such as health, education, the economy and much more.

Intellibox #4: you are the US President (June 21, 2024)

A ‘cute’ AI assistant

This study explores the role of trust and service-related context factors in the impact of chatbot disclosure on customer retention. The research is based on two experimental studies designed to examine the effects of disclosing a chatbot's nonhuman identity in different service contexts. The first study looks at how service criticality influences the effect of chatbot disclosure, while the second study focuses on varying service outcomes. The experiments employ analysis of covariance and mediation analysis to test the hypotheses.

The methodology involved recruiting participants who interacted with chatbots under different conditions. The chatbot's identity was either disclosed or not, and the service provided varied in terms of criticality and outcome. The studies measured the participants' trust in the chatbot and their subsequent retention intentions.

The results indicate that chatbot disclosure has a negative indirect effect on customer retention through reduced trust when the service is of high criticality. However, in situations where the chatbot fails to resolve the service issue, disclosing its nonhuman identity can have a positive impact on retention. This suggests that transparency about the chatbot's nature can mitigate some negative reactions, depending on the context.
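
For readers curious about what a mediation check of this kind looks like in practice, here is a minimal sketch on synthetic data, assuming a disclosure -> trust -> retention chain. The data, variable names and effect sizes are invented for illustration; the authors ran analysis of covariance and formal mediation tests on their actual experimental data.

```python
# Illustrative sketch (synthetic data, assumed variable names): a simple
# regression-based mediation check of disclosure -> trust -> retention.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400

# Simulate an experiment in which disclosure lowers trust and trust raises retention.
disclosure = rng.integers(0, 2, n)                    # 0 = not disclosed, 1 = disclosed
trust = 4.0 - 0.6 * disclosure + rng.normal(0, 1, n)  # mediator
retention = 2.0 + 0.8 * trust + rng.normal(0, 1, n)   # outcome

df = pd.DataFrame({"disclosure": disclosure, "trust": trust, "retention": retention})

# Path a: effect of disclosure on the mediator (trust).
a = smf.ols("trust ~ disclosure", data=df).fit().params["disclosure"]

# Path b and direct effect c': outcome regressed on mediator and treatment together.
model = smf.ols("retention ~ trust + disclosure", data=df).fit()
b, c_prime = model.params["trust"], model.params["disclosure"]

# The indirect effect a*b captures how much of disclosure's impact runs through trust.
print(f"indirect effect (a*b): {a * b:.2f}, direct effect (c'): {c_prime:.2f}")
```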
