That Tech Pod
Welcome to That Tech Pod, a podcast co-hosted by Laura Milstein, Gabi Schulte and Kevin Albert. Each Tuesday, That Tech Pod features in-depth discussions about data privacy, cybersecurity, eDiscovery, and tech innovation with heavy hitters in the industry. Subscribe so you don't miss an episode! Visit thattechpod.com for more information.
Why “Trust Me” Is the Most Dangerous AI Feature with Dr. Jonathan Schaeffer
In this episode of That Tech Pod, we sit down with Dr. Jonathan Schaeffer, a longtime computer scientist who didn’t arrive in AI chasing demos or hype, but by trying to solve a much harder problem: how to keep data safe.
Jonathan walks us through his path from privacy and security research into modern AI, and why those early concerns feel even more urgent now. While everyone is fixated on hallucinations, he argues the bigger risks are quieter and more structural, from loss of user control to systems that appear trustworthy while subtly eroding human judgment. We dig into the growing concentration of AI power among a handful of companies and whether that outcome was inevitable or the result of choices we made along the way. Jonathan reflects on the human skills he worries we may stop exercising as AI gets better, and the low-key decisions happening right now that could shape the next decade far more than any flashy model release. Finally, he shares what he’s building with Synsira: privacy-first, local AI tools designed to work with your own data without shipping it to the cloud, leaking sensitive information, or inventing answers. It’s a conversation about control, responsibility, and what trustworthy AI actually looks like when you have to live with it.
Dr. Jonathan Schaeffer is a computer scientist and AI innovator who works at the intersection of artificial intelligence, data privacy, and security. He is the founder of Synsira and the creator of KIND (Knowledge In Depth AI), a privacy-first desktop AI that lets users search, analyze, and interact with their own knowledge bases, documents, notes, and proprietary data without sending information to the cloud, risking data leaks, or encountering hallucinations. With a career spanning systems design and secure computing, Jonathan focuses on building AI tools that keep users in true control of sensitive and regulated data, exploring what responsible, trustworthy AI looks like in practice and how organizations can innovate without surrendering autonomy. He earned his Bachelor of Science at the University of Toronto and a Master's and Ph.D. at the University of Waterloo, then spent more than 35 years at the University of Alberta as a Distinguished Professor of Computing Science, leading pioneering AI research before retiring in 2024 to focus on AI innovation with Synsira.