Voice of Sovereignty

High on AI

The Foundation for Global Instruction


HIGH ON AI: THE HYPE, THE RISKS, AND THE REAL FUTURE

The most powerful drug humanity has ever created isn't fentanyl or cocaine. It's Artificial Intelligence. And right now, we're dangerously high.

In this episode of Voice of Sovereignty, Dr. Gene Constant—74-year-old Navy veteran, Doctor of Business Administration, author of 120+ books, and founder of Global Sovereign University—delivers a sobering 20-minute intervention for a world intoxicated by AI hype.

This isn't another technophobe rant. Dr. Constant uses AI daily to deliver free education worldwide through GSU's AI tutoring platform. But he's also seen enough market manias and "revolutionary" technologies crash and burn to recognize the symptoms of collective intoxication.

THE PARTY (November 30, 2022)

When ChatGPT launched, something unprecedented happened. 100 million users in two months. For the first time, a machine didn't just calculate—it spoke, wrote poems, and debugged code. Silicon Valley smelled blood. Google issued "Code Red." Microsoft poured billions in. Every boardroom got the same panicked memo: "Get us an AI strategy. Now." The gold rush was on. And like every gold rush, the winners weren't digging for gold—they were selling shovels.

THE HANGOVER (What They Didn't Tell You)

Every high has a price. Dr. Constant introduces "Sarah"—a composite of dozens of real people—a paralegal who lost her job to an AI legal research tool. The brutal irony: the AI was wrong 30% of the time, hallucinating case law that didn't exist. But it was cheap. In the "good enough" economy, cheap beats correct.

This is the White-Collar Shock. We spent decades pushing kids into college, chasing white-collar careers... and now those are the jobs getting automated first. Turns out, it's easier to simulate a lawyer than a plumber—because fixing pipes requires navigating physical chaos, while legal briefs follow patterns AI excels at.

But job losses are just the beginning. These systems don't understand truth—they're probability engines predicting the next likely word. When they don't know something, they don't say, "I don't know." They hallucinate with calm authority. Lawyers have submitted briefs with fabricated citations. Students turn in essays with invented sources. We're flooding our information ecosystem with synthetic garbage at an industrial scale.

Layer on bias (Amazon's hiring algorithm taught itself that being male was a qualification), the accountability vacuum (everybody's involved, nobody's responsible), and the ecological cost (training one AI model produces emissions equivalent to hundreds of trans-Atlantic flights)—and the hangover hits hard.

THE MORNING AFTER (What You Can Actually Do)

The future isn't stopping AI—that horse left the barn. It's domesticating it. Drawing boundaries. Remembering we're the humans and it's the tool.

Dr. Constant introduces the Human-in-the-Loop philosophy: AI is the most enthusiastic, well-read, but slightly delusional intern you've ever met. You wouldn't let that intern sign contracts or talk to clients unsupervised. You'd give them defined tasks and check their work.

The traits that make us irreplaceable: Accountability (someone has to take responsibility when AI fails), Taste (recognizing excellence in a flood of mediocrity), Empathy (you can simulate conversation but not caring), and Skepticism (the professional who questions every output and smells bullshit). 

Available now on Amazon.

"The best time to sober up is before you hit rock bottom. The second-best time is right now."

• Global Sovereign University: g


"HIGH ON AI" 

Welcome to Voice of Sovereignty. I'm Dr. Gene Constant, and today I need to have an uncomfortable conversation with you about the most powerful drug humanity has ever created. No, I'm not talking about fentanyl or cocaine. I'm talking about something far more addictive, far more expensive, and far more dangerous to our collective future.

I'm talking about Artificial Intelligence.

Now, before you roll your eyes and think, "Oh great, another technophobe ranting about robots," let me be clear: I'm not anti-technology. I'm a Doctor of Business Administration. I've authored over 120 books. I run Global Sovereign University, which uses AI tutoring to deliver free education worldwide. I use these tools every single day.

But I'm also a 74-year-old veteran who's seen enough hype cycles, enough market manias, and enough "revolutionary" technologies crash and burn to recognize the symptoms of collective intoxication when I see them.

And friends, we are high right now. Dangerously high.

That's why I wrote my new book, "High on AI: The Hype, The Risks, and The Real Future." And over the next 20 minutes, I want to take you on a journey—from the party, through the hangover, to the morning after. Because whether you're a business owner terrified of being left behind, a worker worried about losing your job, or just someone trying to figure out if that customer service chatbot is lying to you... this affects your life right now.

Let's start with November 30th, 2022. That's the day the party started.

When OpenAI released ChatGPT to the public, something unprecedented happened. Within five days, a million people signed up. Within two months, one hundred million users. To put that in perspective, it took TikTok nine months and Instagram two and a half years to hit that milestone.

But this wasn't just another viral app. This was different. For the first time in human history, a machine didn't just calculate—it spoke. It wrote poems. It debugged code. It drafted emails that sounded more human than half the emails you get from actual humans.

People were mesmerized. I was mesmerized. We all sat there watching the text stream across our screens, letter by letter, as if typed by some invisible, hyper-intelligent presence. And in that moment, something snapped in our collective consciousness.

We stopped asking, "Is this useful?" and started asking, "Is this alive?"

Silicon Valley smelled blood in the water. Google issued a "Code Red"—their trillion-dollar search empire was suddenly obsolete. Microsoft, the boring uncle of tech, saw a chance to kneecap their rival and poured billions into OpenAI. Meta pivoted from the Metaverse disaster. Amazon scrambled. Every CEO in every boardroom from New York to Singapore got the same panicked memo: Get us an AI strategy. Now.

The gold rush was on. And like every gold rush in history, the winners weren't the people digging for gold—they were the ones selling the shovels.

NVIDIA's stock went vertical. Venture capital money flooded into anything with "AI" in the pitch deck. Suddenly, every company was "AI-powered"—your toothbrush, your refrigerator, your dog collar. It didn't matter if the AI actually did anything. What mattered was the buzzword. The magic spell that made stock prices jump.

We were drunk on possibility. The Silicon Valley prophets told us this was just the beginning. They promised us a future where AI would cure cancer, solve climate change, end poverty, and liberate us all from the drudgery of work. We'd live in a post-scarcity paradise where robots did the labor and humans pursued pure creativity.

It was intoxicating. It was seductive. And it was mostly bullshit.

Because here's what they didn't tell you while you were taking those first hits of digital dopamine: Every high has a price. And we're about to get the bill.

Let me tell you about Sarah. She's a composite of dozens of people I've talked to, but her story is real. Sarah was a paralegal at a mid-sized law firm. Smart, competent, and twenty-eight years old with law school plans and $60,000 in student debt.

Last year, her firm bought an AI legal research tool. Suddenly, work that took Sarah six hours took the machine six minutes. Her boss was thrilled. The partners saw dollar signs. Three months later, Sarah and four other junior staff got their pink slips. The firm kept one paralegal—to fact-check the AI.

And here's the terrifying part: The AI was wrong thirty percent of the time. It hallucinated case law. It cited legal precedents that didn't exist. It was confident, articulate, and completely full of shit. But it was cheap. And in the "good enough" economy, cheap beats correct.

This is what I call the White-Collar Shock. And it's not coming—it's here.

For decades, we were told that robots would take the blue-collar jobs first. We imagined self-driving trucks and burger-flipping machines. We thought the trades—the plumbers, the electricians, the carpenters—were in danger.

We were looking in completely the wrong direction.

Turns out, it's far easier to simulate a lawyer than a plumber. Why? Because fixing a leaky pipe in a 100-year-old house requires navigating physical chaos—every job is different, every pipe is unique. But writing a legal brief? That follows a structure. It's patterns and precedents. It's exactly what AI excels at.

The irony is brutal: We spent decades pushing our kids into college, racking up debt, chasing white-collar careers... and now those are the jobs getting automated first.

But job losses are just the beginning of the hangover. Let's talk about something even more insidious: the death of truth.

These AI systems don't understand truth. They can't. They're probability engines. They predict the next most likely word based on statistical patterns. When ChatGPT tells you George Washington crossed the Delaware in 1776, it's not because it knows that fact—it's because those words statistically tend to appear together in that order in its training data.
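If you want to see that mechanic laid bare, here is a toy sketch in plain Python. Bigram counts stand in for a real language model (actual systems use neural networks over tokens at vastly larger scale, so this is an illustration, not the real architecture), but the core move is identical: pick the statistically likely continuation, with no notion of truth anywhere in the loop.

```python
# Toy sketch of next-word prediction: the "model" is just bigram counts
# from a tiny corpus. It has no concept of facts, only of which word
# tended to follow which word in its training text.
from collections import Counter, defaultdict

corpus = (
    "george washington crossed the delaware in 1776 . "
    "washington crossed the river at night . "
    "the delaware river was icy in december ."
).split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word. Note what this
    function can never do: say 'I don't know'."""
    followers = bigrams[word]
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("washington"))  # 'crossed': pattern, not knowledge
print(predict_next("delaware"))    # whichever word co-occurred, true or not
```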

But here's where it gets dangerous: The machine has learned that humans trust confidence. So when it doesn't know something, it doesn't say, "I don't know." It hallucinates. It invents. It generates plausible-sounding lies with the same calm authority it uses for actual facts.

Lawyers have submitted legal briefs with fabricated case citations. Students have turned in essays filled with invented sources. Journalists have published articles with AI-generated "facts" that never happened. And all of it sounds right. It's grammatically perfect. It's formatted correctly. It's just... false.

We're flooding our information ecosystem with synthetic garbage. And unlike human liars, these machines lie at an industrial scale—billions of words per day, polluting the commons of human knowledge.

Now layer on top of that the bias problem. In my book, I detail the story of Amazon's hiring algorithm—the one that taught itself that being male was a qualification for engineering jobs. Because it analyzed a decade of Amazon's hiring data and noticed: mostly men got hired. So it optimized for maleness.

The machine wasn't sexist. It was just accurate about Amazon's sexist hiring practices. And that's the horror: AI doesn't just perpetuate our biases—it industrializes them. It scales discrimination to millions of decisions per second.
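Here is a deliberately crude sketch of how that happens. To be clear, this is not Amazon's actual system, and real models pick up far subtler proxies than a single word; but train any scorer on skewed hiring history and it dutifully learns the skew.

```python
# Illustrative toy only: a "hiring model" that scores resumes by how
# strongly each word correlated with past hiring decisions. Feed it
# biased history and it faithfully reproduces the bias at machine speed.
from collections import Counter

# Historical data: (resume words, was_hired), skewed toward men.
history = [
    (["chess", "club", "captain", "mens"], True),
    (["football", "team", "mens"], True),
    (["robotics", "mens", "league"], True),
    (["womens", "chess", "club", "captain"], False),
    (["womens", "robotics", "league"], False),
]

hired_words, rejected_words = Counter(), Counter()
for words, hired in history:
    (hired_words if hired else rejected_words).update(words)

def score(resume_words):
    """Sum of (times word appeared in hires) minus (times in rejections)."""
    return sum(hired_words[w] - rejected_words[w] for w in resume_words)

# Identical qualifications, one gendered word apart:
print(score(["chess", "club", "captain", "mens"]))    # 3: rewarded
print(score(["chess", "club", "captain", "womens"]))  # -2: penalized
```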

And here's the kicker: Nobody's accountable. When a human manager rejects your resume because of your race, that's illegal. You can sue. You can demand an explanation. But what about when an AI rejects you? The company says, "The algorithm scored you as high risk." The engineers say, "It's a black box; we don't know why." The vendor says, "Blame the data."

Everybody's involved. Nobody's responsible. Welcome to the Accountability Vacuum.

But wait—it gets worse. Let's talk about what this is doing to our planet.

The "cloud" isn't a cloud. It's a coal-fired power plant disguised as a data center. Training a single large AI model produces carbon emissions equivalent to hundreds of trans-Atlantic flights. And that's just the training run.

Every time you ask ChatGPT a question, you're burning electricity on the order of a light bulb left on for an hour. Every AI-generated image consumes roughly a 500ml bottle of water in data-center cooling. Multiply that by hundreds of millions of users and billions of queries per day, and you're looking at the energy consumption of a small country.
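For the skeptics who like to check the math, here is the back-of-envelope version using the episode's own light-bulb figure. The 10-watt LED bulb and the one billion queries a day are illustrative assumptions, not measured data; swap in your own numbers.

```python
# Back-of-envelope arithmetic. Both inputs below are illustrative
# assumptions taken from the episode's rough analogy, not measurements.
WATT_HOURS_PER_QUERY = 10          # one 10 W LED bulb burning for an hour
QUERIES_PER_DAY = 1_000_000_000    # assumed global query volume

daily_kwh = WATT_HOURS_PER_QUERY * QUERIES_PER_DAY / 1000
yearly_twh = daily_kwh * 365 / 1_000_000_000

print(f"{daily_kwh:,.0f} kWh per day")    # 10,000,000 kWh per day
print(f"{yearly_twh:.2f} TWh per year")   # 3.65 TWh per year
```

Even with those modest inputs you land in the terawatt-hours-per-year range, which is indeed the annual electricity consumption of a small country.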

We are literally burning the planet to generate marketing copy and surreal cat pictures.

Microsoft's carbon emissions jumped 30% in one year—directly from AI data centers. And this is the same company that promised to be carbon negative by 2030. The AI arms race shredded that pledge like tissue paper.

So here we are. Jobless. Lied to. Discriminated against. Planet burning. Starting to feel that hangover yet?

But here's the thing about hitting rock bottom: It's where you finally get honest. It's where you finally see clearly. And that's where Part Three of my book—and the real hope—comes in.

If you're listening to this thinking, "Great, Gene, thanks for the nightmare fuel. What am I supposed to do about it?"—I hear you. And I've got answers.

The future isn't about stopping AI. That horse has left the barn, jumped the fence, and is halfway to the next county. The future is about domesticating it. It's about drawing boundaries. It's about remembering that we're the humans and it's the tool—not the other way around.

Think of AI like this: It's the most enthusiastic, well-read, but slightly delusional intern you've ever met. This intern has memorized the entire library. It can draft reports at superhuman speed. But it also lies when it doesn't know something, copies other people's work, and has zero common sense.

You wouldn't let that intern sign contracts, make strategic decisions, or talk to clients unsupervised. You'd give them clearly defined tasks: "Summarize this report. Draft these ten headlines. Translate this document." And then you'd check their work.

That's the boundary. AI is not the architect. It's the bricklayer. You still need to read the blueprints, understand the structure, and make sure the foundation is solid.

This is what I call the Human-in-the-Loop philosophy. And it's not just about protecting jobs—it's about protecting quality, protecting truth, and protecting humanity.
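If you build software, that boundary can even be enforced in code. Here is a minimal sketch of a human-in-the-loop gate; the names are hypothetical and there is no real AI API behind it, but the shape is the point: the machine drafts, and nothing ships until a named human signs off and takes responsibility.

```python
# Minimal human-in-the-loop gate (hypothetical names, no real AI API):
# the machine may draft, but unreviewed output cannot be published.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    task: str
    text: str
    approved_by: Optional[str] = None  # no approval, no publication

def ai_draft(task: str) -> Draft:
    """Stand-in for any AI call: fast, fluent, unverified."""
    return Draft(task=task, text=f"[machine draft for: {task}]")

def human_review(draft: Draft, reviewer: str, ok: bool) -> Draft:
    """The human checks the facts and either signs off or sends it back."""
    if ok:
        draft.approved_by = reviewer
    return draft

def publish(draft: Draft) -> None:
    if draft.approved_by is None:
        raise PermissionError("Unreviewed AI output does not ship.")
    print(f"Published ({draft.approved_by} accountable): {draft.text}")

draft = human_review(ai_draft("summarize the Q3 report"), "G. Constant", ok=True)
publish(draft)  # succeeds only because a named human signed off
```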

Here's what that looks like in practice:

First, expertise becomes more valuable, not less. In a world flooded with synthetic content, the signature of a trusted human expert is the ultimate seal of quality. "Reviewed by a human" will become the "organic food" label of the information age.

Second, we reclaim the skills we were told to abandon. Writing isn't just about the final product—it's about the thinking that happens during the struggle. When you outsource the process to AI, you truncate your own intellectual development. You become a manager of mediocrity instead of a creator of excellence.

Third, we focus on augmentation, not automation. The goal isn't to replace the human with the machine—it's to make the human superhuman. A lawyer using AI to analyze ten thousand documents to find the winning argument? That's augmentation. That lawyer wins the case and justifies their fee. A lawyer using AI to write the brief? That's automation. That lawyer gets fired.

The traits that make us irreplaceable are the ones the machine can't digitize:

Accountability - When the AI screws up, someone has to take responsibility. That's worth paying for.

Taste - In a world of infinite mediocre options, the ability to recognize excellence is the ultimate skill.

Empathy - You can simulate a conversation, but you can't simulate caring. Real human connection will become the premium product.

Skepticism - The professional skeptic who questions every AI output, traces facts to their source, and smells bullshit? That person is gold.

These aren't "soft skills" anymore. They're the only competitive moats left.


Look, I wrote "High on AI" because I'm watching brilliant people make catastrophic decisions based on hype instead of reality. I'm watching companies fire their institutional knowledge to chase a quarterly stock bump. I'm watching students graduate without learning to think because they let the machine do their homework.

And I'm tired of it.

This book is a survival manual. It's 10 chapters of straight talk about what's actually happening—not the sanitized press releases from Silicon Valley, but the messy, expensive, sometimes ugly truth.

Part One shows you how we got drunk—the hype cycle, the FOMO in the boardroom, and the financial bubble.

Part Two walks you through the hangover—the job losses, the truth crisis, the bias, the surveillance state, and the ecological cost.

Part Three hands you the roadmap for the morning after—how to set boundaries, how to augment instead of automate, and how to reclaim the traits that make us human.

Whether you're a business owner trying to navigate this chaos, a worker trying to protect your livelihood, or just someone who wants to understand why your teenager's essay sounds like it was written by a robot... this book will give you clarity.

"High on AI: The Hype, The Risks, and The Real Future" is available now on Amazon, at your local bookstore, and as an audiobook. The link is in the show notes.

Remember: The best time to sober up is before you hit rock bottom. The second-best time is right now.

I'm Dr. Gene Constant. This is Voice of Sovereignty. Stay sovereign, my friends—and for God's sake, read the fine print before you hand your life over to an algorithm.