7 Comments
imthinkingthethoughts

Control really is a central theme of the human experience. Having an internal locus of control is at the crux of mental health. And yet it is a great and mysterious irony that we fundamentally only have the perception of control: we didn't choose our genes or our environment, let alone any decisions that followed. No wonder hard determinism is so uncomfortable for many to consider, when it is the reality we all inhabit. AI is very much pulling up this deep-rooted reality, that control is a perception, which many prefer to ignore or deny. We don't control how AI develops internally, and as these systems gain more agency and ability we will likely control them less and less.

My preference is to hold everything I know lightly. Yes, I experience having control over my day-to-day actions (sometimes), but I don't really know this for sure. I try to do what I can, within my powers, to make decisions that help others and myself, and that is the internal locus of control that is so important, but there is a thread of humour that is always with me: a giggle I have with myself when, now and again, I realise that we never actually had any control to begin with.

Riccardo Vocca

Thank you for your comment, and sorry for the delay. I really appreciate it! I think that control is probably the most interesting factor among these, and that ways to elicit perceived control can be very important from both a theoretical and a practical point of view. Thanks again for the time dedicated to the issue and for this comment!

Philip Tschirhart

Very interesting post! A recent Pew report found that something like 63% of Gen Z respondents felt that “ethical AI” was an oxymoron.

I think the idea you’ve developed of algorithmic aversion is particularly concerning in the face of the pressing need for AI governance.

I’m working on a forthcoming piece about how instances of AI systems circumventing human-programmed rules signal a deep concern with the strength of the contemporary social contract.

Maybe there is a follow-up piece aligning these ideas!

Riccardo Vocca

Hi Philip (nice to meet you!), and sorry for the delay in responding. Thank you sincerely for this comment. I think that how AI can change our "internal rules" of conduct in certain situations is indeed an interesting topic! Let me know if your piece is already out; I'd be glad to read it. Also, if it's useful, there is an interesting paper by Gill (2020) on autonomous vehicles and moral judgement. Maybe it could add a twist to your article, let me know!

Philip Tschirhart

Also, just checked your page and you're at exactly 1K subs!

Congrats!!!

Philip Tschirhart

Hey Riccardo, thanks for following up! After some extended tinkering, the piece is now available :)

Would love your thoughts! 🤓

https://strategyandsignal.substack.com/p/when-ai-cheats-digital-deceptions?r=57niui

Riccardo Vocca

Great Philip!!
