The Intelligent Friend - The newsletter about AI-human relationships, based only on scientific papers.
Hello IF readers! This is the seventh issue of Nucleus, where you find insights from research papers, links to articles, and useful resources of various kinds related to AI-human relationships. I recently launched the new Intellibox format, where every week I create an immersive challenge that you can do on any chatbot, such as creating a Marvel superhero, finding an animal in the jungle, or creating a map for your next book. It is an original, interactive format, and the first two issues are open to everyone (so you can try it!).
Human and AI Creativity
We often talk about how AI can 'augment' our creativity and how we can use it for different tasks. However, what sometimes gets lost is an effective, precise comparison between the creativity of LLMs and human creativity. This study is interesting precisely because of that effort.
The study involved 100,000 human participants and several LLMs, including GPT-3, GPT-4, GPT-4-turbo, Claude3, GeminiPro, Pythia, StableLM, RedPajama, and Vicuna. Participants were assessed using the Divergent Association Task (DAT): they were asked to generate 10 words that were as different from each other as possible. The LLMs received similar instructions adapted to their input format.
The results showed not only that GPT-4 outperformed human participants on the DAT, but also that in creative writing tasks LLMs like GPT-4 excel at generating creative content. However, in my opinion, the most interesting consideration from the scholars is this: despite the strong performance of LLMs, human-generated texts maintained a significant edge in certain aspects of creativity, such as thematic depth and adherence to traditional formats like haikus.
Title: Divergent Creativity in Humans and Large Language Models. Author(s): Bellemare et al. Year: 2024. Journal: / (working paper). Link.
Psychological barriers to AI adoption
One of the topics that fascinates me most regarding AI, especially from a marketing perspective and more specifically Consumer Behavior, is the study of the factors that prevent or reduce the adoption of these technologies, with particular attention to psychological ones. In this paper - named precisely "Psychological Barriers to AI" - the authors focus on the psychological biases that influence people's perceptions and acceptance of AI, examining how these biases can create barriers to the effective implementation of AI in various contexts.
The results really fascinated me:
Firstly, the uncanny valley effect (which we have often discussed: in short, the sense of strangeness we feel when an AI begins to resemble a human more and more, yet we still perceive its 'non-humanity') significantly influenced participants' comfort levels with AI: more human-like AI systems were often met with discomfort and distrust;
Secondly, algorithm aversion was prominent, with participants showing a preference for human judgment over AI, even when the AI's accuracy was demonstrably higher. This aversion was partly mitigated when participants were given control over some aspects of the AI's decision-making process, highlighting the importance of perceived agency in AI acceptance. This is by far the most interesting result in my opinion, and it is really insightful for many companies. This idea, in my opinion, helps explain why people strongly prefer and appreciate lists of recommendations - even in newsletters - that are created and curated by humans, even when lists generated with technology might offer an equal level of accuracy or interest in the content.
Finally, the study found that transparent and understandable explanations of AI processes significantly increased trust and acceptance, emphasizing the need for explainable AI systems.
Title: Psychological factors underlying attitudes toward AI tools. Author(s): De Freitas et al. Year: 2023. Journal: Nature Human Behaviour. Link.
The issue(s) of the week
I can't begin except by recommending the collaborative issue between two of my favorite AI writers here on Substack. If you don't follow them yet, sign up for their newsletters. If you haven't read this collaborative issue yet and are interested in AI, take 5 minutes of your time tonight to read it!

Among the dozens of issues I read this week, I sincerely recommend this reflection regarding training, ChatGPT and much more.

Finally, an issue worth reading if you are interested in social media, trends, culture, or want an example of a well-written piece.

More human? Better service.
In the latest Sunday issue of The Intelligent Friend we talked about how interacting with a robot in embarrassing situations can reduce the discomfort we feel. This study is partly related, but it is based on the so-called Humanness-Value-Loyalty (HVL) theoretical framework, which posits that higher perceived humanness of a chatbot increases its expected service value and thus enhances customer loyalty and willingness to disclose personal information. In this paper, the researchers focus on how chatbots' anthropomorphic features and gaze direction affect consumer behavior.