The Intelligent Friend - The newsletter that explores how AI changes the way we think and behave, only through scientific papers.
Welcome back IF readers! How are you? We are approaching the end of the year. And like all year-ends, it's time for year-end lists.
As I admitted recently, I'm a big fan of them. I read all kinds of them. From the Times' book list to the New Yorker's movie list to the amazing author roundups on Substack.
As you know - or maybe not - this was my first year writing continuously on Substack. The Intelligent Friend has given me enormous satisfaction.
I have met and interacted with great people. I have learned so much. I have read a great number of papers and interesting things. So at the end of this 2024, I also wanted to give a gift to the wonderful community of readers that I am blessed to have.
This year, I’ve read numerous papers on artificial intelligence. As a Marketing Research Assistant, I also began training for my work and discovering in depth the "behind the scenes" of research. Regularly reading research papers is, after all, a fundamental part of any academic journey, and an activity I deeply love. Unsurprisingly, I found myself deeply captivated by it.
Whenever possible, I also make an effort to explore papers outside the field of AI. This curiosity was one of the reasons I started The Intelligent Friend, not only a newsletter, but in my vision, a platform where I can share what I learn about academic research on AI.
🎁 Therefore, as my Christmas gift and the first list of The Intelligent Friend, I couldn't resist giving you a highly curated and thoroughly personal list of papers.
They have captured, enlightened, and fascinated me. They cover many aspects of AI, mainly social and psychological, but not only those. As you will see, many come from consumer research (as I am a marketing assistant), but their reflections extend to individuals in general. I also tried to select them for the broad scope they may have. It is also a way of virtually thanking all these researchers whose work inspires me in my job, my daily learning, and my future.
Before diving into the list, a note. I know that for many people a paper might not be at first glance the most exciting thing to read.
But I say, go beyond the "first obstacle". 🏇
These papers are deeply fascinating, arouse curiosity, and captivate you as you read. Be inspired by the titles, read the abstracts, and if something catches your eye, move on to the introduction.
You will find this process much more engaging than you expect. The list is in no particular order, so you can discover it as you go along and be surprised.
I hope it is an appreciated gift, and that it repays, at least in small part, the enthusiasm and appreciation I have for all of you readers.
P.S. For some papers I will include a little "Why read" note to make you want to read them even more!
🖍️ The List
Choice engines and paternalistic AI.
Sunstein, C. R. (2024). Choice engines and paternalistic AI. Humanities and Social Sciences Communications, 11(1), 1-4.
Unveiling the Mind of the Machine.
Clegg, M., Hofstetter, R., de Bellis, E., & Schmitt, B. H. (2024). Unveiling the Mind of the Machine. Journal of Consumer Research, 51(2), 342-361.
We often try to understand how consumers respond to humans vs. algorithms. But a really intriguing question is: how do we respond to different algorithms?
When combinations of humans and AI are useful: A systematic review and meta-analysis.
Vaccaro, M., Almaatouq, A., & Malone, T. (2024). When combinations of humans and AI are useful: A systematic review and meta-analysis. Nature Human Behaviour, 1-11.
Studying and improving reasoning in humans and machines.
Yax, N., Anlló, H., & Palminteri, S. (2024). Studying and improving reasoning in humans and machines. Communications Psychology, 2(1), 51.
Deploying artificial intelligence in services to AID vulnerable consumers.
Hermann, E., Williams, G. Y., & Puntoni, S. (2023). Deploying artificial intelligence in services to AID vulnerable consumers. Journal of the Academy of Marketing Science, 1-21.
AI could have an impact in helping vulnerable consumers: people facing many different kinds of problems. In this paper, the authors illustrate the ways in which this can happen.
The consequences of generative AI for online knowledge communities.
Burtch, G., Lee, D., & Chen, Z. (2024). The consequences of generative AI for online knowledge communities. Scientific Reports, 14(1), 10413.
The potential of generative AI for personalized persuasion at scale.
What happens as more and more AI-generated messages start spreading? Can they persuade us? How?
Matz, S. C., Teeny, J. D., Vaid, S. S., Peters, H., Harari, G. M., & Cerf, M. (2024). The potential of generative AI for personalized persuasion at scale. Scientific Reports, 14(1), 4692.
Algorithmic management diminishes status: An unintended consequence of using machines to perform social roles.
Jago, A. S., Raveendhran, R., Fast, N., & Gratch, J. (2024). Algorithmic management diminishes status: An unintended consequence of using machines to perform social roles. Journal of Experimental Social Psychology, 110, 104553.
AI-induced dehumanization.
Kim, H. Y., & McGill, A. L. (2024). AI‐induced dehumanization. Journal of Consumer Psychology.
AI Companions Reduce Loneliness.
De Freitas, J., Uğuralp, A. K., Uğuralp, Z., & Puntoni, S. (2024). AI companions reduce loneliness. (working paper)
I talked about this study in a dedicated issue! You can find it here.
Psychology of AI: How AI impacts the way people feel, think, and behave.
Williams, G. Y., & Lim, S. (2024). Psychology of AI: How AI impacts the way people feel, think, and behave. Current Opinion in Psychology, 101835.
Generative AI enhances individual creativity but reduces the collective diversity of novel content.
Doshi, A. R., & Hauser, O. P. (2024). Generative AI enhances individual creativity but reduces the collective diversity of novel content. Science Advances, 10(28), eadn5290.
I talked about this study in a dedicated issue! You can find it here.
Climate-invariant machine learning.
Beucler, T., Gentine, P., Yuval, J., Gupta, A., Peng, L., Lin, J., ... & Pritchard, M. (2024). Climate-invariant machine learning. Science Advances, 10(6), eadj7250.
Applying AI to Rebuild Middle Class Jobs.
Autor, D. (2024). Applying AI to rebuild middle class jobs (No. w32140). National Bureau of Economic Research. (working paper).
Imperfectly Human: The Humanizing Potential of (Corrected) Errors in Text-Based Communication.
Bluvstein, S., Zhao, X., Barasch, A., & Schroeder, J. (2024). Imperfectly Human: The Humanizing Potential of (Corrected) Errors in Text-Based Communication. Journal of the Association for Consumer Research, 9(3), 000-000.
Artificial Intelligence in Marketing: From Computer Science to Social Science.
Puntoni, S. (2024). Artificial Intelligence in Marketing: From Computer Science to Social Science. Journal of Macromarketing, 44(4), 883-885.
AI is Changing the World: For Better or for Worse?
Grewal, D., Guha, A., & Becker, M. (2024). AI is Changing the World: For Better or for Worse? Journal of Macromarketing, 02761467241254450.
What are the grand challenges that AI poses? This paper will enlighten you with a broad perspective (and it has been followed by several interesting commentaries).
The health risks of generative AI-based wellness apps.
De Freitas, J., & Cohen, I. G. (2024). The health risks of generative AI-based wellness apps. Nature Medicine, 1-7.
Chatbots and mental health: Insights into the safety of generative AI.
De Freitas, J., Uğuralp, A. K., Oğuz‐Uğuralp, Z., & Puntoni, S. (2024). Chatbots and mental health: Insights into the safety of generative AI. Journal of Consumer Psychology, 34(3), 481-491.
Frontiers: Can Large Language Models Capture Human Preferences?
Goli, A., & Singh, A. (2024). Frontiers: Can Large Language Models Capture Human Preferences? Marketing Science.
Large language models can infer psychological dispositions of social media users.
Peters, H., & Matz, S. C. (2024). Large language models can infer psychological dispositions of social media users. PNAS nexus, 3(6), pgae231.
Does thinking about God increase acceptance of artificial intelligence in decision-making?
Moore, D. A., Schroeder, J., Bailey, E. R., Gershon, R., Moore, J. E., & Simmons, J. P. (2024). Does thinking about God increase acceptance of artificial intelligence in decision-making? Proceedings of the National Academy of Sciences, 121(31), e2402315121.
How does religiosity influence our acceptance of AI?
Theorizing with Large Language Models.
Tranchero, M., Brenninkmeijer, C. F., Murugan, A., & Nagaraj, A. (2024). Theorizing with large language models (No. w33033). National Bureau of Economic Research. (working paper)
For people who really enjoy - like me - digging deep into theories, this paper will be a delight.
Human vs. Generative AI in Content Creation Competition: Symbiosis or Conflict?
Yao, F., Li, C., Nekipelov, D., Wang, H., & Xu, H. (2024). Human vs. Generative AI in Content Creation Competition: Symbiosis or Conflict? arXiv preprint arXiv:2402.15467.
Durably reducing conspiracy beliefs through dialogues with AI.
Costello, T. H., Pennycook, G., & Rand, D. G. (2024). Durably reducing conspiracy beliefs through dialogues with AI. Science, 385(6714), eadq1814.
How developments in natural language processing help us in understanding human behaviour.
Mihalcea, R., Biester, L., Boyd, R. L., Jin, Z., Perez-Rosas, V., Wilson, S., & Pennebaker, J. W. (2024). How developments in natural language processing help us in understanding human behaviour. Nature Human Behaviour, 8(10), 1877-1889.
AI can help humans find common ground in democratic deliberation.
Tessler, M. H., Bakker, M. A., Jarrett, D., Sheahan, H., Chadwick, M. J., Koster, R., ... & Summerfield, C. (2024). AI can help humans find common ground in democratic deliberation. Science, 386(6719), eadq2852.
Promises and challenges of generative artificial intelligence for human learning.
Yan, L., Greiff, S., Teuber, Z., & Gašević, D. (2024). Promises and challenges of generative artificial intelligence for human learning. Nature Human Behaviour, 8(10), 1839-1850.
Perils and opportunities in using large language models in psychological research.
Abdurahman, S., Atari, M., Karimi-Malekabadi, F., Xue, M. J., Trager, J., Park, P. S., ... & Dehghani, M. (2024). Perils and opportunities in using large language models in psychological research. PNAS nexus, 3(7), pgae245.
Can AI really help psychological research effectively and responsibly? If so, how?
The Caring Machine: Feeling AI for Customer Care.
Huang, M. H., & Rust, R. T. (2024). The caring machine: Feeling AI for customer care. Journal of Marketing, 00222429231224748.
An enlightening holistic view on the relationship between consumers and chatbots with increasingly advanced response capabilities.
Avoiding embarrassment online: Response to and inferences about chatbots when purchases activate self‐presentation concerns.
Jin, J., Walker, J., & Reczek, R. W. (2024). Avoiding embarrassment online: Response to and inferences about chatbots when purchases activate self‐presentation concerns. Journal of Consumer Psychology.
Bright and dark imagining: How creators navigate moral consequences of developing ideas for artificial intelligence.
Hagtvedt, L. P., Harvey, S., Demir-Caliskan, O., & Hagtvedt, H. (2024). Bright and dark imagining: How creators navigate moral consequences of developing ideas for artificial intelligence. Academy of Management Journal (in press).
Do creators integrate their feelings about AI into what they actually make?
A new sociology of humans and machines.
Tsvetkova, M., Yasseri, T., Pescetelli, N., & Werner, T. (2024). A new sociology of humans and machines. Nature Human Behaviour, 8(10), 1864-1876.
Quantifying the use and potential benefits of artificial intelligence in scientific research.
Gao, J., & Wang, D. (2024). Quantifying the use and potential benefits of artificial intelligence in scientific research. Nature Human Behaviour, 1-12.
We need to understand the effect of narratives about generative AI.
Gilardi, F., Kasirzadeh, A., Bernstein, A., Staab, S., & Gohdes, A. (2024). We need to understand the effect of narratives about generative AI. Nature Human Behaviour, 1-2.
Generative AI in innovation and marketing processes: A roadmap of research opportunities.
Cillo, P., & Rubera, G. (2024). Generative AI in innovation and marketing processes: A roadmap of research opportunities. Journal of the Academy of Marketing Science, 1-18.
How Can Deep Neural Networks Inform Theory in Psychological Science?
McGrath, S. W., Russin, J., Pavlick, E., & Feiman, R. (2024). How Can Deep Neural Networks Inform Theory in Psychological Science? Current Directions in Psychological Science, 33(5), 325-333.
The inversion problem: Why algorithms should infer mental state and not just predict behavior
Kleinberg, J., Ludwig, J., Mullainathan, S., & Raghavan, M. (2024). The inversion problem: Why algorithms should infer mental state and not just predict behavior. Perspectives on Psychological Science, 19(5), 827-838.
Should your Netflix recommendations suggest the Christmas comedy similar to the last movie you watched, or that indie film that has been sitting on your watchlist, the one you can't quite bring yourself to start?
Being Human in the Age of AI.
Puntoni, S., & Wertenbroch, K. (2024). Being Human in the Age of AI. Journal of the Association for Consumer Research, 9(3), 000-000.
One of the perspectives I return to and reread periodically.
On the Future of Content in the Age of Artificial Intelligence: Some Implications and Directions.
Floridi, L. (2024). On the Future of Content in the Age of Artificial Intelligence: Some Implications and Directions. Philosophy & Technology, 37(3), 112.
What does it mean to reflect deeply about the content and ethics of AI?
People see more of their biases in algorithms.
Celiktutan, B., Cadario, R., & Morewedge, C. K. (2024). People see more of their biases in algorithms. Proceedings of the National Academy of Sciences, 121(16), e2317602121.
The Simple Macroeconomics of AI.
Acemoglu, D. (2024). The Simple Macroeconomics of AI (No. w32487). National Bureau of Economic Research. (working paper)
Considerations that open up a world of reflections.
Navigating the Future of Work: Perspectives on Automation, AI, and Economic Prosperity.
Brynjolfsson, E., Thierer, A., & Acemoglu, D. (2024). Navigating the Future of Work: Perspectives on Automation, AI, and Economic Prosperity.
Cyborgs, Centaurs and Self Automators: Human-GenAI Fused, Directed and Abdicated Knowledge Co-Creation Processes and Their Implications for Skilling.
Randazzo, S., Lifshitz-Assaf, H., Kellogg, K., Dell'Acqua, F., Mollick, E. R., & Lakhani, K. R. (2024). Cyborgs, Centaurs and Self Automators: Human-GenAI Fused, Directed and Abdicated Knowledge Co-Creation Processes and Their Implications for Skilling. (working paper, August 08, 2024)
When using ChatGPT, Gemini, Claude etc., are you a Cyborg or a Centaur (or something else)?
How artificial intelligence constrains the human experience.
Valenzuela, A., Puntoni, S., Hoffman, D., Castelo, N., De Freitas, J., Dietvorst, B., ... & Wertenbroch, K. (2024). How artificial intelligence constrains the human experience. Journal of the Association for Consumer Research, 9(3), 000-000.
One of the papers that inspired me the most this year.
Protecting scientific integrity in an age of generative AI.
Blau, W., Cerf, V. G., Enriquez, J., Francisco, J. S., Gasser, U., Gray, M. L., ... & Witherell, M. (2024). Protecting scientific integrity in an age of generative AI. Proceedings of the National Academy of Sciences, 121(22), e2407886121.
Artificial intelligence and illusions of understanding in scientific research.
Messeri, L., & Crockett, M. J. (2024). Artificial intelligence and illusions of understanding in scientific research. Nature, 627(8002), 49-58.
Who Made This? Algorithms and Authorship Credit
Jago, A. S., & Carroll, G. R. (2024). Who made this? Algorithms and authorship credit. Personality and Social Psychology Bulletin, 50(5), 793-806.
I talked about this study in a dedicated issue! You can find it here.
Giving AI a Human Touch: Highlighting Human Input Increases the Perceived Helpfulness of Advice from AI Coaches.
Zhang, Y., Tuk, M. A., & Klesse, A. K. (2024). Giving AI a Human Touch: Highlighting Human Input Increases the Perceived Helpfulness of Advice from AI Coaches. Journal of the Association for Consumer Research, 9(3), 000-000.
The impact of generative artificial intelligence on socioeconomic inequalities and policy making.
Capraro, V., Lentsch, A., Acemoglu, D., Akgun, S., Akhmedova, A., Bilancini, E., ... & Viale, R. (2024). The impact of generative artificial intelligence on socioeconomic inequalities and policy making. PNAS nexus, 3(6).
A broad perspective for truly understanding, across crucial areas, the potential impact of AI on social and economic inequality.
Thank you for reading this issue of The Intelligent Friend and/or for subscribing. The relationship between humans and AI is a crucial topic, and I am glad to be able to talk about it with you as a reader.
Did a friend send you this newsletter, or are you not subscribed yet? You can subscribe here.
Surprise someone who deserves a gift or who you think would be interested in this newsletter. Share this post with your friend or colleague.