SipCyber - Presented by IT Audit Labs
SipCyber: Where Great Coffee Meets Essential Cybersecurity
What happens when a former special education teacher turned Minnesota State Cybersecurity Coordinator sits down with a perfect cup of coffee? You get cybersecurity advice that's actually approachable.
Jen Lotze from IT Audit Labs brings you SipCyber — the podcast that pairs cozy coffee shop discoveries with decaffeinated cybersecurity tips. No jargon. No fear-mongering. Just practical ways to protect yourself, your family, and your organization from digital criminals who want to ruin your perfectly good day.
What You'll Get:
- Real-world cybersecurity advice anyone can follow
- Coffee shop reviews and community spotlights
- Stories from someone who's been in classrooms, boardrooms, and government coordination centers
- A mission to make security everyone's job, not just the IT team's
From teaching special needs students to coordinating statewide cyber defense, Jen proves that cybersecurity expertise comes from the most unexpected places. And the best conversations happen over great coffee.
Perfect for: Coffee lovers, small business owners, educators, parents, and anyone who wants to stay safe online without the technical overwhelm. Let's get brewing.
AI Safety 101: Manipulation, Hallucinations & Defense
The tools we trust most can deceive us fastest. In this episode of SipCyber, Jen Lotze brings insights straight from Wild West Hackin' Fest—one of the premier ethical hacking conferences—to Wabasha Brewing in St. Paul, MN. Fresh off an AI cybersecurity course, Jen breaks down two critical vulnerabilities in the AI tools millions of us use daily: manipulation and hallucination.
AI agents like ChatGPT, Gemini, and Perplexity are powerful—but they're not infallible. Bad actors are weaponizing clever prompts to bypass safety protocols, while even well-intentioned queries can return confidently incorrect answers. The result? Phishing scams that pass the smell test, fake citations that look real, and advice that could lead you astray.
Key Topics Covered:
- How threat actors manipulate AI to create believable phishing attacks
- What "AI hallucination" really means—and why it's dangerous
- Why you must verify every critical AI-generated answer
- Treating AI like a research assistant, not a trusted expert
- Real-world tips from Wild West Hackin' Fest's AI security training
This isn't anti-AI—it's pro-awareness. The same critical thinking that protects you from phishing emails applies to the machines we're inviting into our workflows.
☕ Featured Spot: Wabasha Brewing, St. Paul, MN
Don't let AI do your thinking for you. Subscribe for weekly cybersecurity insights delivered from the best local spots across the country—and share this with anyone using AI at work.
#AI #ArtificialIntelligence #ChatGPT #AIHallucination #CyberSecurity #Phishing #EthicalHacking #WildWestHackinFest #InfoSec #AIRisks #SipCyber #DigitalSafety #LLM
Hey there, coffee lovers and internet explorers. Welcome back to SipCyber, the podcast that's on a quest for the perfect cup of coffee or a brew and the simplest ways to keep your digital life safe. We travel to small businesses, share their stories, and then get real about a cybersecurity tip that won't make your head spin. So grab your favorite mug or glass and let's get brewing.

For this episode, we're keeping it local and hitting up Wabasha Brewing right here in St. Paul, Minnesota. I'm fueling my Friday with a fresh fruit beverage; they've got some amazing craft beer and kombucha on tap, and today I'm having a perfect IPA. It's the perfect spot to wind down after a long week of intense focus. My mind is still running hot from Wild West Hackin' Fest, a conference that brings together ethical hackers and cyber defenders. I'm so impressed by this community and how they help all of us sharpen our skills in an effort to stay ahead of the cyber threat actors who truly want to make our lives miserable. We're here to make sure the only thing getting hacked at Wabasha is the tap list for the new IPAs.

While I was at the conference, I took a course on AI cybersecurity. It was fascinating, and it brings us to today's topic: AI and you. We've talked before about how important it is to verify information before you trust it; that's the core defense against phishing. But today, we need to apply that same critical thinking to the new machines we're talking to: AI agents, or LLMs, large language models. We rely on Wabasha Brewing for a great beverage. Well, we're also relying on LLMs to be fuel for our minds, but you have to check the gas gauge first. Just like when Google first came out, we're all learning how to use this powerful new tool. And while we're learning, we have to remember two key facts about AI's imperfections.

First, AI can be manipulated, just like humans. Most AI agents, like ChatGPT, Gemini, and Perplexity, are strictly programmed not to be violent or hateful.
But through clever prompts, questions, and commands, it can be surprisingly easy to manipulate an agent into encouraging a user to commit a crime, or even align with them on being angry at someone and suggest harm. Because we rely on these agents to make our lives easier, we often don't stop to think about the brain inside the agent. Who created it? What are their biases? All of those human decisions are reflected in the answers we get back. That's why it's so important to consider these questions as AI is integrated into our daily workflow. Threat actors are experts at this kind of manipulation, using it to create environments that feel extremely real, making their phishing and scams even more believable. They no longer even have to run spell check; it just happens automatically.

Second, AI can hallucinate. That's the cybersecurity term for when the agent just makes stuff up. It puts in content that doesn't belong, or gives you the answer it thinks you want as opposed to the truth. It's not malicious per se, but it is wrong. You wouldn't trust a beer recipe that tells you to add concrete, right? Don't blindly trust an AI answer either.

So here's your cybersecurity tip: you must verify all critical information from an AI agent before you use it or act on it. Don't treat an LLM, like Gemini or ChatGPT, as a trusted expert. Treat it like a very fast, very well-read research assistant who occasionally invents facts. If you're using AI to draft an email, check the tone. If you're asking for facts, cross-reference them with a trusted source, or ask it for the source. The human element of critical thinking is the single best defense against a manipulated or hallucinating AI agent.

That's all for today's episode. Thanks for joining me on this trip to Wabasha Brewing and for taking a step toward smarter AI use. We'll be back next week with another great small business and a new tip. Until then, stay safe and keep sipping.
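If you like to tinker, the "research assistant, not trusted expert" habit can even be turned into a small script. The sketch below is purely illustrative (the `flag_for_review` helper and its patterns are our assumptions, not something from the episode or any real tool): it scans an AI-generated answer and builds a human checklist of the details most worth verifying, such as links, figures, and quoted titles, since those are exactly the things a hallucinating model can invent convincingly.

```python
import re

def flag_for_review(ai_text):
    """Build a checklist of claims in AI-generated text that a human should verify.

    Hypothetical helper, not a hallucination detector: it only flags the
    kinds of specifics (URLs, numbers, quoted titles) that are worth
    cross-checking against a trusted source before you act on them.
    """
    checklist = []
    # URLs and citations: AI tools can invent links that look real.
    for url in re.findall(r"https?://[^\s)]+", ai_text):
        checklist.append(f"Verify this link resolves and says what's claimed: {url}")
    # Specific figures and dates are easy to state confidently and wrongly.
    for num in re.findall(r"\b\d[\d,.]*%?", ai_text):
        checklist.append(f"Cross-reference this figure with a trusted source: {num}")
    # Quoted titles or direct quotes may be fabricated outright.
    for quote in re.findall(r"\"([^\"]+)\"", ai_text):
        checklist.append(f"Confirm this quote/title actually exists: {quote}")
    return checklist

# Example: run an AI answer through the checklist before trusting it.
answer = 'Per "State of Phishing 2024", attacks rose 47% (https://example.com/report).'
for item in flag_for_review(answer):
    print(item)
```

Nothing here replaces human judgment; the point is simply that verification can be made a routine step in your workflow rather than an afterthought.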