AI mythologies
notes on technological misunderstandings, critical thinking, and the cult of Silicon Valley
summer is coming to an end. the AI bubble still has not burst.
things seem a bit shakier these days. OpenAI recently rolled out GPT-5, a new model of ChatGPT that received huge criticism from users and fans of the large language model, many of whom had come to rely on previous models they described as everything from confidante to therapist to best friend to romantic partner, and who found GPT-5 cold and impersonal. A reddit post entitled “I lost my only friend overnight” is one of many describing grief over the changes made to the AI. The head of AI development at Microsoft recently spoke about his concerns around the new phenomenon of ‘AI psychosis’. Meta is planning to downsize its artificial intelligence division and slow its AI development efforts. Even within the devoted cults of Silicon Valley, some are beginning to lose faith in the gods of artificial intelligence and infinite growth.
I’ve been having more and more conversations with colleagues about AI recently. Suddenly, I’m having to correct people’s AI-generated documents and hear advice about asking ChatGPT to answer questions instead of googling them, much less asking a real person in the office for help. When we do discuss AI, the conversations are strange, full of incorrect assumptions and misinformation about what AI even is.
After a recent, particularly unsettling conversation in which a colleague very confidently told me (only after I had disputed his assertion that AI was “the perfect new technology”) that AI was “invented in 2004” and that during World War II “a code breaker guy” (Alan Turing?) figured out how to integrate humans and computers for the first time (???), I was left in a state of shock and despair. My attempts to argue against these ‘facts’ were quickly talked over, but even so, where does one even begin with a mythology of AI so misinformed?
For my own sanity, and in the hope of combating some of the AI gospel currently spreading through my workplace, I started putting together a fact sheet of the most common false claims I hear made about AI. Especially with currently unspecified AI training days looming ahead of us, I think it will be a useful thing to keep on hand at work. I’m sharing some of it here, and I hope it might be of interest!
SOME COMMON MYTHS ABOUT AI
“AI knows everything!”
Technically, AI doesn’t ‘know’ anything. Large language models (LLMs) like ChatGPT generate text using mathematics and probability: the absolutely massive amounts of data that power them allow them to predict which word should come next in a sentence. With good data and training, an LLM can generate believable text and produce reasonable, even useful, responses. But because of the way that text is generated, AI is also prone to ‘hallucinating’ its responses: confidently providing incorrect information, false facts, and low-quality results. AI is a tool, and like any technology, it should be used thoughtfully.
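To make the ‘predicting the next word’ idea a little more concrete, here is a toy sketch in Python. To be clear, this is my own illustrative example, not how any real LLM is built (real models use neural networks trained on vast datasets), but the basic principle is the same: the output is a probability learned from data, not knowledge.

```python
# A toy next-word predictor: count which word follows which in a tiny
# 'training corpus', turn the counts into probabilities, and use those
# probabilities to suggest a likely continuation.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count bigrams: how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_probabilities(word):
    """Return the probability of each word that has followed `word`."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probabilities("the"))
# {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```

This toy model ‘predicts’ that “cat” follows “the” not because it knows anything about cats, but because that is what the counts in its data say. Scale the same principle up to billions of parameters and most of the internet as training data, and you get something that sounds knowledgeable without knowing anything.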
“AI is brand-new technology”
While the types of AI that are most popular today have only been available for a few years, the term ‘artificial intelligence’ was actually first used by computer scientists in the 1950s, and landmark research by figures like Alan Turing paved the way for later innovations in AI: large language models and chatbots, everyday technologies we no longer even think of as AI (spellcheck, speech-to-text, autopilot in planes, etc.), and even technologies that do not exist, like artificial general intelligence (AGI).
“AI is magic”
AI is not magical. It may be able to do impressive and surprising things, but it’s nothing more than a magic trick; the science and mathematics behind it can be explained. The results you see are not the product of anything magical or mysterious, but of the hard work of computer scientists, researchers, and people around the world working in data centres and behind the scenes to power AI.
Also: The AI you see in movies is not real and does not exist. AI is not sentient. AI is not evil. AI is not your friend. AI is not alive. AI is not a science fiction fantasy. It’s computers. Everything’s computer.
“AI is better at [……] than I am!”
AI may be able to generate an email or a report quickly, but the best person to do your job is you. You know your projects, the work you do, and the progress you have made. You have personal relationships with the people you work with. You have skills, experience, intuition, and abilities that are unique to you, and cannot be replaced.
“There’s no harm in using AI!”
Using AI has an ecological cost. AI relies on massive amounts of water and energy, and here in Ireland and elsewhere around the world, huge data centres have been constructed to power it, draining local resources and causing environmental damage that harms communities and wildlife.
There can also be personal and professional consequences to using AI. If you are writing in a professional environment, you are responsible for AI-generated mistakes or misinformation, and submitting AI-written work for work or school could be considered plagiarism, misconduct, or cheating. Ongoing research also suggests that relying on AI can lead to cognitive decline, memory problems, and impaired critical thinking skills, and can damage productivity and performance.1
“AI is the future, we all just need to get used to it.”
Why should big tech companies get to decide our future? The AI bubble we are in today is still new, and it will not last forever. It is up to every one of us to decide what our futures look like, and we start to shape the future of our work and our relationships with technology through the decisions we make today.
Thanks as always for reading cloudtopia! AI and its mythologies have been weighing heavily on my mind lately. From day-to-day conversations to the research work I’ve been doing, the AI question seems endlessly messy and more and more unavoidable. Meanwhile, I’m getting ready for the start of another school year and the next stages of my research work on AI, religion, and ethics. That old back-to-school feeling of excitement and apprehension is back. The summer is not yet over, but I can feel something beginning to shift.
This week, I’ve been reading Adam Becker’s book More Everything Forever, which I highly recommend to anyone interested in AI issues, in understanding what’s going on with big tech, or in making fun of Elon Musk. Becker breaks down the ideology of infinite growth and the ethics of effective altruism and longtermism, explores topics like artificial intelligence, transhumanism, and space colonisation, and critiques the cult-like world of Silicon Valley and the dangerous, destructive worldviews that emerge from it.
If you enjoyed this newsletter, why not subscribe, share it with a friend, or have a conversation about AI with your ChatGPT-obsessed coworker?
from across cyberspace,
isobel
Andrew R. Chow, “ChatGPT May Be Eroding Critical Thinking Skills, According to a New MIT Study”, TIME, June 2025. https://time.com/7295195/ai-chatgpt-google-learning-school/
Chris Westfall, “The Dark Side Of AI: Tracking The Decline Of Human Cognitive Skills”, Forbes, December 2024. https://www.forbes.com/sites/chriswestfall/2024/12/18/the-dark-side-of-ai-tracking-the-decline-of-human-cognitive-skills/