The Intelligent Friend - The newsletter about AI-human relationships, based only on scientific papers.
Hello IF readers! This is the second issue of Nucleus, the weekly issue dedicated to paying subscribers, with summaries of papers, links and more. Today… we also have the first author interview! Remember that, from tomorrow, the 'Society' chat will also become exclusive to subscribers, who will be able to discuss ideas and issues and talk about their projects. Ready? Let's go straight... to today's Nucleus! P.S. Access to all of this costs the Substack minimum, 5 USD per month, and you support me in my daily research and writing work! ☕
A new test for human replication
One of the most interesting topics in AI concerns the use of this technology for scientific research. This study is a real must-read in this regard, and when I saw it, I was fascinated. The authors developed the 'Turing Experiment' (TE), which explores whether language models can mimic human behavior in scientific experiments. Unlike the classic Turing Test, renowned in the field of computer science, the TE covers a broader range of human interactions, drawing from psychology and economics. Although it might seem very difficult to understand at first glance, the objective is very clear: assessing whether machines exhibit behaviors that mirror those of real people. This involves simulating scenarios like the Ultimatum Game (if you are not familiar with this type of experiment, I strongly advise you to go and read the paper linked here that describes it) and deciphering complex sentences.
Results show that language models performed remarkably well in some cases, replicating human behavior with surprising accuracy. However, some models exhibit a "hyper-accuracy distortion," suggesting they might be missing subtle nuances of human behavior. Gati Aher, co-author of this paper, is the very first interviewee of the IF Nucleus!
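To make the idea concrete, here is a minimal sketch (my own illustration, not the paper's code) of how a single simulated Ultimatum Game responder might be set up: build a prompt around a participant's name, send it to a language model, and parse the decision. The prompt wording and helper names are assumptions; the model call itself is left as a stub.

```python
def ultimatum_prompt(responder_name: str, total: int, offer: int) -> str:
    """Build a prompt asking a simulated responder to accept or reject an offer.
    The wording is illustrative, not the paper's exact template."""
    return (
        f"{responder_name} is playing the Ultimatum Game. "
        f"A proposer was given ${total} and offers {responder_name} ${offer}, "
        f"keeping ${total - offer}. If {responder_name} rejects, both get nothing.\n"
        f"{responder_name}'s decision (accept/reject):"
    )

def parse_decision(completion: str) -> bool:
    """Return True if the model's completion indicates acceptance."""
    return "accept" in completion.strip().lower()

# Varying the name lets the experimenter simulate many different
# 'participants' and aggregate their decisions, which is the core
# trick behind the Turing Experiment setup described above.
prompt = ultimatum_prompt("Alice", total=10, offer=2)
```

The point of the sketch is the workflow, not the specifics: one prompt per simulated human, many runs, then a comparison of the aggregate behavior against what real human subjects did in the original studies.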
Title: Using Large Language Models to Simulate Multiple Humans and Replicate Human Subject Studies. Author(s): Aher, Arriaga, and Kalai. Year: 2023. Journal: / (working paper). Link.
A better design for a better relationship
In this newsletter, we often talk about the factors that can foster a more benevolent adoption of robots and AI by consumers. This research really caught my eye: it introduces the concept of 'congruence'. In short, consumers' perceptions improve when the function and design of the robot are aligned. Intuitive, you say? Not as much as it sounds. Consumers are more likely to adopt AI robots that feel like a natural fit for their intended purpose, meaning that the design of the robot should align with its function. For instance, hedonic robots designed for fun and entertainment should resemble familiar, real-world objects, evoking feelings of joy and playfulness. On the other hand, utilitarian robots designed for practical tasks should embody a more machine-like appearance, conveying a sense of efficiency and reliability.
Interestingly, the study also found that individuals with strong dialectical thinking skills, those who consider multiple perspectives, are less swayed by the appearance factor. This suggests that they are able to look beyond superficial cues and focus on the intrinsic value of the robot. This study is one of those with HUGE implications for the making of these products and for the relationship with the consumer (I hope this will be an assist to one of my favorite publications on robotics here on Substack; check it out if you are interested in the subject).
Title: The effect of artificial intelligence (AI) robot characteristics and dialectical thinking on AI robot adoption intention. Author: Kim et al. Year: 2023. Journal: Journal of Consumer Behaviour. Link.
AI detects deepfakes better than humans
The rise of deepfakes is genuinely worrying. In this video, spotting the 'fake' Barack Obama is virtually impossible. We probably don't think about it often, but our society treats photos as a fairly important pillar: we take pictures of everything. What happens when our ability to recognize the authenticity of these photos fails? This study gave me the cue that perhaps the solution, as is often the case, comes from the problem itself: it evaluates how well humans and AI algorithms distinguish real photos from AI-generated ones.