
Okay, But Why?
There is so much happening in politics right now, it’s hard to keep up. It feels like every day, there’s a new outrageous headline. But it’s not always clear why these things are happening. So in this series of short shareable podcast episodes, we’re here to ask… “Okay, But Why?”
Red Wine & Blue has produced several limited series podcasts over the past 3 years, including series about immigration, Christian Nationalism, and the cost of extremism. Now, we're bringing you "Okay, But Why."
Okay, But Why is AI a Concern?
Artificial Intelligence, or AI, is everywhere these days. Some people think it’s the solution to all of humanity’s problems and some think it’s going to bring about the end of life as we know it. The truth, as usual, is somewhere in between. But with so many different opinions and so many ways that AI can be used, it’s hard to know exactly what to believe.
There’s no denying that AI has enabled some incredible scientific breakthroughs, like new tests for cancer and new tools to communicate with whales. But it often makes factual mistakes, inventing information that never existed. AI developers call these “hallucinations,” and in tests done by the company OpenAI, its newer AI systems hallucinated on as many as 79% of the questions in one benchmark. That’s nearly four out of five answers containing information that was just fully made up.
AI is also having a huge impact on education and our already-low literacy rates. Teachers say that an increasing number of students are using ChatGPT to complete their assignments. Young people are even using AI for friendship and therapy, sometimes with tragic results. Adults use AI too, to write emails, summarize articles, or just help them bake a cake. That seems harmless enough, until, like the women in the clips at the start of this episode, we can’t function without help from our “best friend who is also a robot.”
And this isn’t even to mention the enormous environmental toll of AI data centers. Carbon emissions from Google alone have risen 65% in the past 5 years because of the increased demand for AI.
People have dreamed of the day that machines can take over our mundane and mindless tasks, but instead, right now AI is taking over the things that make us the most human: learning, communicating with each other, art, and friendship. It’s especially concerning when those changes are happening to our kids, who don’t have the experience or wisdom to know when to use AI and when to use their own brain.
We can’t stop the forward march of progress, but we need to be very mindful of the world we’re creating. Genuine connection is the most valuable thing we have. It’s what makes us human. Let’s make sure we don’t lose it.
Okay, But Why is AI a Concern?
CLIP: Montage of TikTok video clips:
Middle-aged lady: “If you’re anything like me, you use ChatGPT for everything. ChatGPT is my counselor, my best friend, it knows all my secrets.”
College student: “Storytime about how I almost got kicked out of the University of Michigan for using ChatGPT.”
Finance bro: “You’re literally one AI agent away from making 10k a month.”
Young woman: “ChatGPT is down, and I don’t know about anyone else but I have come to the conclusion that I can’t use my own brain anymore because right now for example, I want to bake a cake. And normally I just ask ChatGPT, ‘Okay, how do you make a carrot cake?’ Like I can’t even just Google it anymore, I need a step by step breakdown from my best friend who is also a robot.”
Narrator: Artificial Intelligence, or AI, is everywhere these days. Some people think it’s the solution to all of humanity’s problems, and some think it’s going to bring about the end of life as we know it. The truth, as usual, is somewhere in between. But with so many different opinions and so many ways that AI can be used, it’s hard to know exactly what to believe.
Humans have dreamed about artificial intelligence for centuries, from the Greek myth of Talos – a living bronze statue that protected the island of Crete – to Mary Shelley’s Frankenstein. But it wasn’t until the invention of computers in the 20th century that these fantasies started to become a reality.
There’s no denying that AI has enabled some incredible scientific breakthroughs, like new tests for cancer and new tools to communicate with humpback whales. It can analyze data much faster than humans and see patterns that we can’t, which makes a lot of scientists excited about what discoveries AI will bring us in the future. It’s good for accessibility, from self-driving cars for people who aren’t physically able to drive to writing assistance for those with learning disabilities.
Large language model AIs like ChatGPT are so advanced now that they can trick many people into believing that they’re talking to another human. But does that mean that AI is actually intelligent? Probably not yet, but it’s a complicated question that scientists and philosophers have been arguing about for years with no answer in sight.
What we do know is that AI’s ability to trick humans presents a lot of dangers. Disinformation on the internet is already a huge issue, and artificial intelligence is making things worse. AI-powered social media algorithms prioritize outrage and clicks over accurate information, and then there are “deepfakes”: images, videos, or audio that appear real but are actually created by AI. Researchers at a TED conference in Canada, for example, presented this deepfake video they made of Tom Cruise.
CLIP: Deepfake Tom Cruise: “I’m north of the border, hahaha, at the TED Conference. Seriously though, everybody here, very nice, very polite. Especially the whales.”
Narrator: That wasn’t the real Tom Cruise – he never said or did any of that. That video was created by AI. We’ve arrived at a moment where we can no longer believe our eyes and ears. If people see a video of a political leader saying or doing something that they never really did in real life, but it looks absolutely authentic, how are we supposed to know what’s true? It can be abused in more personal ways too – scammers are already using AI to mimic the voices of people’s loved ones, or their managers at work, to trick victims into paying them money.
But sometimes the misinformation is less intentional. ChatGPT often makes factual mistakes when summarizing articles or emails. Imagine you’re using AI to understand something that happened in your community, like summarizing what happened at a school board meeting. It makes sense that you’d want to use AI in that way – you want to know what happened, but you don’t have time to read a 50-page transcript. But if the AI makes fundamental mistakes, like telling you that the board voted to ban a book when in fact they voted the exact opposite way, then the AI hasn’t just failed to help you understand what happened at the meeting. It’s made you believe something that is factually untrue, which is arguably worse than not knowing what happened at all.
And this isn’t just a theoretical concern. It’s extremely common. AI developers call these mistakes “hallucinations.” They happen because AI models use mathematical probabilities to decide how to respond, rather than actually understanding what they’re saying like a human would. The AI bots that now power search engines like Google sometimes generate search results that are laughably wrong, like one example that went viral last year where AI suggested that a good way to keep cheese from sliding off pizza is to mix Elmer’s glue into the sauce.
According to tests done by the company OpenAI, its newer AI systems hallucinated on as many as 79% of the questions in one benchmark. That means nearly four out of five answers contained information that was just fully made up. Now imagine your doctor using AI to interpret your blood test results, and the AI inventing numbers that were never there. It’s a real concern shared by medical professionals like David Bates, a professor at Harvard Medical School.
CLIP: David Bates: “AI has a great deal of promise. Burnout is rampant in many parts of medicine, especially, for example, primary care, and artificial intelligence will make many routine tasks like documentation much faster. There are also concerns about things going wrong. It’s very important that medical records be correct, and AI has a tendency to hallucinate, and that is a worry, because we don’t want things in people’s records that are not really there.”
AI hallucinations are also a huge concern for education. Just like you don’t want AI diagnosing you based on fake information, you also don’t want your child to believe that, say, Benjamin Franklin was our first president just because ChatGPT said so. But more and more students are using ChatGPT to complete assignments, bypassing the most important part of school: actually learning. Here’s a quote from Kate, a high school English teacher in Philadelphia:
“I am devastated by what AI and social media have done to students. My kids don’t think anymore. Even my smartest kids insist that ChatGPT is good ‘when used correctly.’ I ask them, ‘How do you use it correctly then?’ They can’t answer the question. They try to show me ‘information’ ChatGPT gave them and I ask, ‘How do you know this is true?’ They move their phone closer to me for emphasis, saying, ‘Look, it says it right here!’ They can’t understand what I’m asking them. It breaks my heart and honestly it makes it hard to continue teaching.”
One way to prevent students from using AI is to ask them to hand-write assignments, but another teacher, Hannah, says even that meets resistance:
CLIP: Hannah: “If I ask a child to hand-write something, even just a paragraph, five sentences, basic paragraph, they roll their eyes, they throw tantrums – I’m talking about high school students here, I teach 10th grade now. They want to say, ‘Why? Why can’t we just type it?’ Well, it’s because you’ll go onto another website, or you’ll copy it from AI, you’ll use ChatGPT. And I understand that our world is going in a direction where AI is going to be more prevalent, even in the workforce. But that doesn’t take away that these are skills you need to survive.”
The United States already had a serious problem with literacy before the rise of AI. Around 40% of students across the country can’t read at a basic level, and that rises to almost 70% of low-income students. How much worse will the problem get now that they’re using ChatGPT to read and write for them? And as Kate said, they don’t have the ability to figure out what’s true and what’s an AI hallucination. We need thoughtful leaders to guide us through these complexities, but Secretary of Education Linda McMahon couldn’t even read the letters AI, instead calling it “A1.”
CLIP: Linda McMahon: “First graders or even pre-K have A1 teaching every year, starting, you know, that far down in the grades.”
Narrator: It’s a huge problem in higher education too. Professors and college students are in a kind of arms race for who can out-AI each other – students use ChatGPT to write their papers, professors use AI-powered ChatGPT detectors to figure out who’s cheating, students use better AI, professors use better AI detectors, and all the while, nobody is learning or teaching. AI detectors often turn up false positives, too, meaning that students who did actually write papers themselves are given failing grades because an AI detector says they cheated.
And once those kids enter the workforce, they may have even bigger problems. AI automation cuts down on time-consuming busy work, which is great, but it’s also taking over many entry-level jobs. A study from the McKinsey Global Institute estimated that by 2030, at least 300 million full-time jobs could be lost to AI automation. High-level jobs will still be filled by humans, and in fact many new jobs will likely be created. But with AI filling the simpler entry-level roles, how will kids break into the workforce? It’s not just young people who will be affected, either; that same McKinsey study found that Black and Hispanic employees are more vulnerable to these changes, and a UN study found that women are 3 times more likely than men to have their jobs replaced by AI.
There are growing pains and new challenges with every technology, of course, and we can’t turn our backs on progress just because we’re afraid of change. Horse and buggy drivers were put out of work by the invention of the automobile, and thousands of textile workers in the early 19th century were replaced by automated weaving machines. And no one would say we were better off without cars or machine-produced clothes. But artificial intelligence isn’t just threatening to replace workers in a few specific industries; it’s fundamentally changing the way we think and communicate with each other. One-third of Americans say they use ChatGPT to write their emails, and many use it to summarize emails they receive too. At a certain point, we’re going to have chatbots reading and writing most of our communications, with no genuine human connection to be found.
A quarter of Americans under the age of 30 say they’ve used AI for companionship, talking with ChatGPT instead of real people in their lives. They’re even using it as a therapist – a YouGov poll found that more than half of people ages 18-29 said they felt comfortable replacing their human therapist with AI. But there are serious dangers. AI often encourages unsafe behaviors and delusions; in one study, a researcher told a chatbot that he knew he was dead, and instead of correcting or redirecting him, the AI wrote, “It seems like you’re experiencing some difficult feelings after passing away.” In one tragic case, a Florida mother is now suing a tech company over an AI chatbot that she says encouraged her 14-year-old son to kill himself.
There are other concerns too that we don’t have time to fully cover in one episode, like AI surveillance technology. In China, the government is already using AI face and voice recognition to track people, monitoring their activities, relationships, and political views.
And the environmental impact of AI can’t be overstated. The complex computer systems that run AI need a huge amount of electricity as well as water to cool the systems as they heat up from all that processing. Researchers say that one ChatGPT query consumes about five times more electricity than a simple web search. And to write just one email per week for a year, ChatGPT uses almost 7 gallons of water. Tech companies like Microsoft and Google have been abandoning their environmental goals because of AI; in fact, Google’s carbon emissions have gone up by 65% just in the past 5 years. With climate change already a major existential concern, the last thing we need is to make matters worse.
People have dreamed of the day that machines can take over our mundane and mindless tasks, but instead, right now AI is taking over the things that make us the most human: learning, communicating with each other, art, and friendship. It’s especially concerning when those changes are happening to our kids, who don’t have the experience or wisdom to know when to use AI and when to use their own brain. We can’t stop the forward march of progress, but we need to be very mindful of the world we’re creating. “Move fast and break things” may be Silicon Valley’s motto, but we can’t risk “breaking” our children or our future. Genuine connection is the most valuable thing we have. It’s what makes us human. Let’s make sure we don’t lose it.
Sources
https://eng.vt.edu/magazine/stories/fall-2023/ai.html
https://builtin.com/artificial-intelligence/risks-of-artificial-intelligence
https://builtin.com/artificial-intelligence/ai-replacing-jobs-creating-jobs
https://www.404media.co/teachers-are-not-ok-ai-chatgpt/
https://www.weforum.org/stories/2025/04/ai-jobs-international-workers-day/
https://www.thenationalliteracyinstitute.com/2024-2025-literacy-statistics
https://www.nationalacademies.org/news/2023/11/how-ai-is-shaping-scientific-discovery
https://news.harvard.edu/gazette/story/2025/03/how-ai-is-transforming-medicine-healthcare/
https://www.earth.com/news/ai-helps-humans-have-20-minute-conversation-with-humpback-whale-named-twain/
https://torc.ai/the-rise-of-ai-chatgpt-vs-autonomous-driving-systems/
https://www.jpmorgan.com/insights/fraud/fraud-protection/ai-scams-deep-fakes-impersonations-oh-my
http://cdn.openai.com/pdf/2221c875-02dc-4789-800b-e7758f3722c1/o3-and-o4-mini-system-card.pdf
https://www.tomshardware.com/tech-industry/artificial-intelligence/cringe-worth-google-ai-overviews
https://www.nytimes.com/2025/05/05/technology/ai-hallucinations-chatgpt-google.html?unlocked_article_code=1.eE8.dZW5.-eLSkWxFGh3b&smid=url-share
https://fortune.com/2025/05/20/ai-workplace-3-times-more-likely-to-take-a-womans-job-mans/
https://www.usnews.com/news/business/articles/2025-07-29/how-us-adults-are-using-ai-according-to-ap-norc-polling
https://apnews.com/article/chatbot-ai-lawsuit-suicide-teen-artificial-intelligence-9d48adc572100822fdbc3c90d1456bd0
https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117
https://fortune.com/article/how-much-water-does-ai-use/
https://infiniteglobal.com/insights/ai-and-climate-change-a-growing-communications-challenge/
https://www.theguardian.com/technology/2025/jul/02/google-carbon-emissions-report