The Intelligent Friend - The newsletter about AI's psychological, social, and relational aspects, based only on scientific papers.
Intro
One of the first popular uses of Artificial Intelligence was for creative purposes: writing, image design, and reworking existing content. Judging by comments and posts, opinions seem to have become quite polarized: people have divided into enthusiasts of AI-supported creation and those who lash out against it. In today's paper, we draw some scientific insights - surprising ones - on how people judge art and content created with AI's support (so, not entirely by AI). I think you will like it.
The paper in a nutshell 🔦
Title: Who Made This? Algorithms and Authorship Credit. Authors: Jago & Carroll. Year: 2023. Journal: Personality and Social Psychology Bulletin.
Main result: producers who are helped by algorithms tend to receive more credit compared to those assisted by humans.
It is difficult not to know the art of Michelangelo. He is considered one of the greatest artists in history, and his works, icons of the Renaissance and of human genius, are among the most important and influential for the centuries that followed. We often imagine that Michelangelo worked alone, that he spent long hours locked in his studio in Tuscany or in the Sistine Chapel. The truth, of course, is that Michelangelo was assisted by a significant number of people. Real 'assistants'. Assistants that, naturally, we do not consider in our judgment, which always and only focuses on the 'main' artist [1].
Are you, really, the author?
This, of course, has been a widespread practice throughout the history of art. The same problem occurs, for example, with Rembrandt [2], for whom art historians have identified possible attribution problems in many works. Jeff Koons, a contemporary artist, ignited public debate when he disclosed that he employs around 150 people and never personally touches a paintbrush. His assistants follow a system he developed, using stencils to paint according to his directions.
This topic of the correct assignment of credit to the author of a work is precisely defined as the 'authorship credit problem'.
While we immediately think of the Mona Lisa, Picasso, or Dalí, it also concerns other work outcomes, such as decision-making, product design, or idea generation. Even your Substack! With the advent of ChatGPT and the use of AI for creative purposes of various kinds, this problem has 'evolved' to also include how we perceive and judge the attribution of something created with the assistance of AI (so, not exclusively by it). And that's what today's issue is about. The scholars' question, in this regard, is very direct: "When algorithms assist or augment producers, does this change individuals' willingness to assign credit to those producers?". It's a fascinating question for me. As the scholars point out, people's responses to what is not created by a single person can vary widely.
Often, people evaluate in a binary way, reasoning about whether the painting was created by the artist or whether it is a fake. But as we can easily imagine, the evaluations to be made are more nuanced than that. Artists could create the outline of a painting and have an assistant who is very good at painting eyes paint that specific part, or they could do the entire painting and leave the crucial finishing touches for the coloring to their assistant. Or, to give a different example, you could have a friend of yours who is particularly good at summarizing and writing captivating things write the introduction to your book.
The question to ask yourself, however, is: what if that friend is not John, Tony, Francis, or Riccardo, but ChatGPT, Midjourney, or Gemini? As we have also reported in other issues, a substantial line of inquiry explores how individuals react to decisions taken by algorithmic systems [3, 4]. Alongside this perspective, there is a focus on people's reactions to the complete creation of creative outcomes by algorithms.
A picture 'painted' by Midjourney or DALL-E, a chapter 'written' by Claude, a business decision 'suggested' by Gemini. But as we said earlier, the evaluations to be made are more nuanced. As the 'weight' of AI's contribution varies, the consistency of our evaluations may vary too.
To reflect on this, consider some examples: imagine writing an essay for your newsletter. You write it entirely, then use ChatGPT (or, say, Grammarly) to correct the English, and you tell your readers publicly. How will they judge you? Probably their evaluation of the issue will not change much. Now suppose you say that the chatbot wrote the entire conclusion. How will they feel? And finally, if the chatbot has written a substantial part, which you have only edited, what will they perceive? The study of these dynamics is not only crucial in this historical period, but also has important managerial implications for the creative industries and beyond.
However, the intuition of the authors is surprising.
My assistant is ChatGPT
According to Jago & Carroll, although AI may not receive benevolent judgments, its assistance with a creative outcome may be received less negatively than assistance from other humans. It is important to specify once again that we are talking about assistance, not total creation. As counterintuitive as it may be - and how fascinating! - this hypothesis is understandable.
Let's return to our newsletter example. Imagine that the person who writes it has a team behind them that they never mention, and that you inadvertently discover contributes to every single issue. Now compare this with another writer who used ChatGPT to rephrase some sentences. Which of the two will you appreciate more, given the same content? To find out, let's dig deeper into the experiments conducted by the authors.
The researchers conducted four studies to test their hypotheses. As usual, we will not delve too deeply into the methodology, focusing on the results. However, I think it is very useful and interesting to talk to you in a little more detail - but still briefly - about the first study, aimed precisely at verifying whether individuals give more credit to producers who use 'algorithmic assistance' instead of human assistance. The other studies have slightly different objectives, being aimed above all at understanding where the effects come from and what can change their intensity and direction.
Furthermore, I think that since this is a newsletter based on papers, it is also useful and engaging to sporadically look at more 'technical' aspects (apparently!) related to the research, to also get more into the minds of the authors.
In the first study, participants were introduced to an artist who posted a drawing on social media and included it in their catalog.
They viewed a color drawing of a building at night generated using the Deep Dream Generator algorithm. Each participant was randomly assigned one of five artist names but viewed the same artwork;
After viewing, they rated the artist's authorship. Next, participants were randomly assigned to one of two conditions: "person" or "algorithm";
They read that either another person or an algorithm helped with the drawing's design and coloring. This manipulation kept the agent's contribution constant. Following this, participants completed additional measures and scales.
What do you think the result was? In this situation, on a scale from 1 to 7, how would you have judged the authorship credit of our artist? Let's find out.
You hid your assistants from us!
Counterintuitively, producers who are helped by algorithms tend to receive more credit than those assisted by humans. The authors observe this pattern across various work domains, levels of contribution, and economic contexts. But what is the reason?
People often assume that producers need to oversee algorithms more closely than human assistants, leading to the perception that the producer is more involved in the work. This mediating factor (i.e., a factor that explains why an effect occurs, unlike a moderator, which influences the strength or direction of the relationship between two factors) is called 'oversight'.
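For readers curious about what "mediation" means in practice: a minimal sketch of the logic, in Python, using simulated data with made-up effect sizes (not the paper's data or method). The idea is that the condition (algorithm vs. human assistant) raises perceived oversight, and oversight in turn drives credit, so the condition's direct effect shrinks once oversight is statistically held fixed.

```python
import random

random.seed(42)
n = 5000

# Hypothetical simulated data (illustrative only, not from the paper):
# X: 0 = human assistant, 1 = algorithm
# M: perceived oversight, boosted when the assistant is an algorithm
# Y: authorship credit, driven mostly by perceived oversight
X = [random.randint(0, 1) for _ in range(n)]
M = [0.8 * x + random.gauss(0, 1) for x in X]
Y = [0.9 * m + 0.1 * x + random.gauss(0, 1) for x, m in zip(X, M)]

def ols(y, cols):
    """Least-squares coefficients via the normal equations (with intercept)."""
    rows = [list(r) + [1.0] for r in zip(*cols)]
    k = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    for i in range(k):                       # forward elimination
        for j in range(i + 1, k):
            f = A[j][i] / A[i][i]
            A[j] = [aj - f * ai for aj, ai in zip(A[j], A[i])]
            b[j] -= f * b[i]
    coef = [0.0] * k                         # back substitution
    for i in reversed(range(k)):
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, k))) / A[i][i]
    return coef

total = ols(Y, [X])[0]       # Y ~ X     : total effect of the condition
direct = ols(Y, [X, M])[0]   # Y ~ X + M : direct effect, oversight held fixed

print(f"total effect:  {total:.2f}")
print(f"direct effect: {direct:.2f}")  # shrinks -> consistent with mediation
```

When the direct effect is much smaller than the total effect, the mediator (here, oversight) accounts for most of the relationship, which is the pattern the authors report.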
The results of this work, however, as one can imagine, have a possibly very broad scope. As algorithms become more involved in producing art and other creative works, traditional definitions of what constitutes "real" work are being challenged. For instance, these shifts necessitate a reevaluation of theories of authenticity, as they must now account for the role of algorithms in collaborative work.
Before moving on to the research questions, I remind you that you can subscribe to Nucleus, the exclusive weekly section in which I send 4 paper summaries, links to resources and interesting readings, and interview the authors. It comes out every Wednesday.
In this issue (open to everyone), for example, we talked about the political opinions of ChatGPT, a new environment for human-robot interactions, and interesting links, such as the similarity between “the child you” and “the current you”.
Takeaways 📮
Algorithm or human in art. Judgments about the producers of a creative (or other) outcome can change depending on whether it is stated that the result was obtained with the help of an algorithm or of a human.
I prefer the chatbot. Today's study shows that assistance (not complete creation) by an algorithm leads to a less negative judgment than human assistance.
The oversight role. This effect is driven by 'oversight': people believe the author had more control over what the algorithm did than over what a human assistant did.
Further research directions
Future research should explore how the relationship between the producer and the assistant (e.g., whether the assistant is an employee, a student, or an algorithm programmed by the producer) affects perceptions of credit.
There is a need to investigate how flexible social attributions of credit are in cases of augmented work. Understanding this elasticity can reveal deeper insights into how credit is assigned and perceived in collaborative efforts involving algorithms.
Studies on how people perceive the minds of human and algorithmic assistants can provide insights into why algorithms might be deprived of authorship credit despite their contributions.
Thank you for reading this issue of The Intelligent Friend and/or for subscribing. The relationship between humans and AI is a crucial topic, and I am glad to be able to talk about it with you as a reader.
Has a friend of yours sent you this newsletter or are you not subscribed yet? You can subscribe here.
Surprise someone who deserves a gift or who you think would be interested in this newsletter. Share this post with a friend or colleague.
P.S. If you haven't already done so, in this questionnaire you can tell me a little about yourself and the wonderful things you do!
1. Jauregui, R. (1996). Rembrandt portraits: Economic negligence in art attribution. UCLA Law Review, 44, 1947–2011.
2. Wheelock, A. K., Jr. (2014). Issues of attribution in the Rembrandt workshop. Dutch Paintings of the Seventeenth Century, NGA Online Editions.
3. Castelo, N., Bos, M. W., & Lehmann, D. R. (2019). Task-dependent algorithm aversion. Journal of Marketing Research, 56, 809–825.
4. Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144, 114–126.