The Intelligent Friend

If the task is more difficult, you will use an algorithm

And the other findings of the week.

Riccardo Vocca
Jul 24, 2024

The Intelligent Friend - The newsletter about the psychological, social, and relational aspects of AI, based only on scientific papers.


I have slightly changed the style of Nucleus: you will now find it more captivating graphically and more direct and 'juicy' in its content, with many exclusive, straight-to-the-point articles. Enjoy the read!


More difficult? I prefer the algorithm

Think about choosing a birthday gift for your boss versus one for a close friend. These tasks differ, above all, in how difficult we perceive them to be. In this sense, this study is really interesting: it found that humans rely increasingly on algorithms over social advice as tasks become harder. Across three online experiments, participants guessed the number of people in photos while receiving advice from either an algorithm or a group of peers; methodologically, the experiments varied both the task difficulty and the advice source.

The results were striking: as tasks grew more challenging, participants placed greater trust in algorithmic advice, with reliance at 11% for easy tasks rising by an additional 3.6% for difficult ones. Participants were also more likely to disregard inaccurate advice when it was labeled as algorithmic, demonstrating a nuanced trust in the perceived precision of algorithms.

Title: Humans rely more on algorithms than social influence as a task becomes more difficult. Author(s): Bogert et al. Year: 2021. Journal: Scientific Reports. Link.


Can I Ask You a Personal Question?

We have often talked about the topic of disclosure, i.e. how AI can influence how much, and what type of, information (especially private information) we tend to share. This study brought important evidence in support of a preference for disclosing to AI: results showed that participants disclosed more sensitive information to AI than to humans, driven by lower concerns about social judgment from AI. However, when emotional support was needed, participants preferred humans over AI, as they perceived humans to be more empathetic. Furthermore, when AI was seen as having human-like traits, the fear of judgment increased, reducing disclosure.

Title: Do You Mind if I Ask You a Personal Question? How AI Service Agents Alter Consumer Self-Disclosure. Author(s): Kim et al. Year: 2021. Journal: Journal of Service Research. Link.


Great Issues: The Age of Unprecedented Rivalry

'Great Issues' is the section where I recommend issues that have impressed me from other Substack authors.

We always hear about competition in the market and, having a marketing background, this is a topic I have always been close to. This is why I really liked this issue, with an original perspective on it, by J. Michael Wahlen of Byte-Sized History.


Great Issues: 5 Learnings from the Biggest Band in the World

I am absolutely a Beatles fan, and have been since I was 16. Therefore, as soon as Giacomo Falcone announced the release of this issue, I rushed to read it, and I couldn't help but be enthusiastic.


Make the conversation smoother!

We have often talked about how anthropomorphism, i.e. perceiving non-human things (such as chatbots) as human, can greatly influence our behavior. But have you ever wondered what actually makes us perceive something non-human as human? This study attempted to shed new light on the topic.
