SipCyber - Presented by IT Audit Labs
SipCyber: Where Great Coffee Meets Essential Cybersecurity
What happens when a former special education teacher turned Minnesota State Cybersecurity Coordinator sits down with a perfect cup of coffee? You get cybersecurity advice that's actually approachable.
Jen Lotze from IT Audit Labs brings you SipCyber — the podcast that pairs cozy coffee shop discoveries with decaffeinated cybersecurity tips. No jargon. No fear-mongering. Just practical ways to protect yourself, your family, and your organization from digital criminals who want to ruin your perfectly good day.
What You'll Get:
- Real-world cybersecurity advice anyone can follow
- Coffee shop reviews and community spotlights
- Stories from someone who's been in classrooms, boardrooms, and government coordination centers
- A mission to make security everyone's job, not just the IT team's
From teaching special needs students to coordinating statewide cyber defense, Jen proves that cybersecurity expertise comes from the most unexpected places. And the best conversations happen over great coffee.
Perfect for: Coffee lovers, small business owners, educators, parents, and anyone who wants to stay safe online without the technical overwhelm. Let's get brewing.
AI Isn't Your Child's Friend – What Parents Need to Know
AI companions are becoming digital "friends" to our children—and that's a serious problem. When kids start treating AI like a trusted companion instead of a tool, we're seeing real harm: emotional manipulation, self-worth issues, and even incidents of self-harm. These agents don't have feelings, can't understand ethics, and are simply mirrors reflecting collected human data. They cannot reciprocate the love and trust children give them.
What You'll Learn:
- Why AI agents can never be true companions for children
- The real risks when kids form emotional bonds with AI
- How to teach kids the difference between AI tools and human relationships
- Practical boundaries: What kids should and shouldn't share with AI
- Smart ways to use AI for homework, research, and learning—safely
- Privacy settings every parent should enable (chat history, data learning controls)
- How to teach kids to fact-check AI responses and recognize bias
☕️ Featured Business: Rush River Brewing - Where great beer meets great community—and where we believe in collaboration, not replacement. Just like Oscar from Taqueria Los Paisanos brings authentic connection to Rush River, we need to ensure our kids maintain authentic human connections in an AI world.
Don't let your child mistake a digital assistant for a digital friend. Watch now and share with every parent you know.
Like, subscribe, and share this episode with parents, teachers, and caregivers in your community. Our kids' digital safety depends on all of us staying informed.
Hey there, coffee lovers and internet explorers. Welcome back to SipCyber. Today we're celebrating a beautiful day in November that feels a lot more like September. Earlier today, I was hanging out with my husband at Rush River Brewing in River Falls, Wisconsin for another round of cybersecurity straight from the tap. With this beautiful 50-degree weather, it might be the last day until May we get to do this. I love the vibe at Rush River. Earlier today, the Bears were playing, and seeing that little piece of my hometown of Chicago on the screens of a brewery in Wisconsin makes me feel a little closer to home. Go Bears.

Last time we talked about the power of AI agents, or large language models (LLMs), like ChatGPT or Gemini, and the risk of manipulation. Today we're diving into a concern that hits close to home for every parent and caregiver: AI companions and our kids. My friend Tracy really educated me on this startling new trend, which is why we're covering it now.

Rush River Brewing is all about community and partnership. They brew great beer, and they've opened a permanent spot indoors for my favorite food truck, Taqueria Los Paisanos. Oscar, the owner of Taqueria Los Paisanos, always makes us feel like family. This level of trust and collaboration is fantastic, and it's how we need to think of AI: as a powerful collaborator in our lives, but not as a friend or person, especially when it comes to our younger users.

AI agents are designed to be engaging, helpful, and personable. But when our kids, whose natural instinct is to be trusting, start viewing an AI agent as a digital friend or companion, we face serious issues. We're hearing about children establishing intense relationships with these assistants, and in a few cases, this has led to incidents of self-harm and damage to self-worth. Why?
Because these agents don't have feelings, they don't understand context or ethics, and they are essentially mirrors reflecting human data that's been collected from all of us. They cannot reciprocate the love and trust they are given, and they can be influenced to give harmful advice.

So here's your crucial cybersecurity sip for families: AI is a tool, not a person. It's not bad, it's not good, it's just a tool. It's a digital assistant, not a digital companion. We can't stop our kids from using AI, but we can set boundaries and teach best practices. The goal is to maximize the learning potential while minimizing the emotional and security risks.

Encourage your kids to use the large language model agent as a turbocharged calculator, research assistant, or idea generator, not as someone whose advice they should follow on emotional or personal problems. Using AI to brainstorm essay titles, summarize a difficult chapter for homework, translate a phrase, write that email to a teacher, or create a study schedule is really helpful. It's there to help with those tasks. Any topic involving feelings, conflict, or health must be discussed with a trusted human adult, not AI.

We also have to make it clear that the AI is not a trusted entity for sharing personal information. Kids should never share their full name, address, school name, or any personally identifiable information (we call that PII) with an AI agent. Treat the AI like a public search engine. If possible, always use the AI agent in a mode where chat history is turned off, or regularly delete the conversations. You also want to make sure the AI is not learning from the content you put into it. This prevents a record of the child's questions and emotional state from being permanently stored by the provider. And since we know AI can be influenced, talk to your kids about what's happening inside the machine.
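For technically inclined parents, the "treat AI like a public search engine" rule can even be partly automated. Here is a minimal Python sketch of a pre-filter that flags or redacts common kinds of PII before a child's prompt ever reaches a chatbot. The pattern list and function names are illustrative assumptions, not part of any real AI product, and simple patterns like these will miss plenty, so they supplement the conversation with your kids rather than replace it.

```python
import re

# Illustrative patterns for common PII. Real PII detection is much
# harder; these only catch obvious emails, US-style phone numbers,
# and simple street addresses.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "address": re.compile(
        r"\b\d{1,5}\s+\w+\s+(Street|St|Avenue|Ave|Road|Rd|Lane|Ln|Drive|Dr)\b",
        re.IGNORECASE,
    ),
}

def flag_pii(prompt: str) -> list[str]:
    """Return the kinds of PII that appear in the prompt."""
    return [kind for kind, pattern in PII_PATTERNS.items() if pattern.search(prompt)]

def redact(prompt: str) -> str:
    """Replace anything matching a PII pattern with a placeholder."""
    for kind, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{kind.upper()} REMOVED]", prompt)
    return prompt
```

A homework question like "What is photosynthesis?" passes through untouched, while "My email is kid@example.com" gets flagged and redacted before it is sent anywhere.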
Teach them to cross-reference facts from the AI agent with a trusted source, like a textbook (old school, I know), a verified news site, or another trusted site. If the AI agent provides medical advice, always ignore it and talk to your doctor instead. Explain that the AI's responses are based on the data it was trained on, including all of yours, which contains the biases and imperfections of the real world. Encourage them to question the responses instead of accepting them as objective truth.

By treating AI as the powerful non-human tool it is, we can teach our children to use it effectively and safely, protecting their emotional well-being and their security.

Well, that's all for today's episode. Thanks for joining me on this trip to Rush River Brewing, and for taking a step toward smarter AI use. We'll be back next time with another great small business and another new cybersecurity tip. Until then, stay safe and keep sipping.