Plaintext with Rich
Cybersecurity is an everyone problem. So why does it always sound like it’s only for IT people?
Each week, Rich takes one topic, from phishing to ransomware to how your phone actually tracks you, and explains it in plain language in ten minutes or less. No buzzwords. No condescension. Just the stuff you need to know to stay safer online, explained like you’re a smart person who never had anyone break it down properly. Because you are!
AI Is an Umbrella Word (And That's the Problem)
Every company says they're using AI. Some mean chatbots. Some mean automation. Some mean statistics with a new logo. If everything is AI, the word stops meaning anything.
This episode untangles what people actually mean when they say "AI" by breaking the umbrella into its real components. It covers machine learning (systems that learn patterns from data), deep learning (layered neural networks that made modern recognition possible), large language models (text prediction engines driving today's headlines), RAG or retrieval-augmented generation (connecting models to specific documents instead of relying on training alone), and agentic AI (systems that don't just respond but take action). The episode explains why these distinctions matter for risk, why a fraud detection model making probability estimates is fundamentally different from an agent allowed to move money, and how to filter the hype with a simple mental checklist: is this prediction, generation, retrieval, action, or branding?
Whether you're evaluating AI tools for your organization, sitting through vendor demos full of buzzwords, or just trying to have a smarter conversation about what AI can and can't do, Plaintext with Rich sorts the categories.
Is there a topic/term you want me to discuss next? Text me!!
YouTube more your speed? → https://links.sith2.com/YouTube
Apple Podcasts your usual stop? → https://links.sith2.com/Apple
Neither of those? Spotify’s over here → https://links.sith2.com/Spotify
Prefer reading quietly at your own pace? → https://links.sith2.com/Blog
Join us in The Cyber Sanctuary (no robes required) → https://links.sith2.com/Discord
Follow the human behind the microphone → https://links.sith2.com/linkedin
Need another way to reach me? That’s here → https://linktr.ee/rich.greene
Why “AI” Means Too Much
SPEAKER_00: Every company says they're using AI. Some mean chatbots, some mean automation, some mean statistics with a new logo. The same word gets used for prediction engines, image generators, fraud detection, and autonomous agents. But if everything is AI, then the word stops meaning anything. Welcome to Plaintext with Rich. Today we're untangling what people actually mean when they say AI. And as always, I think it's really great to start in plain text. Rich Greene says AI is an umbrella term. It covers a range of techniques that help computers do tasks that used to require human judgment. That's it. In my opinion, that's it. But under that umbrella are very different tools, and when we don't separate them, conversations get messy really fast. So let's break this down as clearly as I possibly can.

First, machine learning. Machine learning is when a system learns patterns from data instead of being explicitly programmed with rules. The plain version: instead of writing "if this, then that" for every case, you feed it examples and it learns the patterns. Traditional spam filters are machine learning. Fraud detection? Machine learning. Recommendation engines? Machine learning. It's mostly prediction: given this input, what's the most likely output?

Now, inside machine learning is a subset called deep learning. Deep learning uses layered neural networks, right? More data, more compute, more complexity. It's what made modern image recognition, speech recognition, and large language models possible. You can think of it like this: all deep learning is machine learning, but not all machine learning is deep learning. Deep learning is one engine inside the larger car.

Now, let's talk about the thing dominating headlines: large language models. LLMs are a type of deep learning model trained on enormous amounts of data. They predict the next word in a sequence, over and over, very, very well, for the most part.
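The two ideas above, "learn the patterns from examples instead of writing rules" and "predict the next word in a sequence," can be shown in a few lines. This is a toy sketch, not how any real model works; the tiny corpus is invented for illustration, and a real LLM replaces these word counts with billions of learned parameters.

```python
from collections import Counter, defaultdict

# Invented toy corpus -- stands in for "training data".
corpus = (
    "the model predicts the next word "
    "the model learns patterns from data "
    "the filter learns patterns from examples"
).split()

# Count which word follows which. This is "learning from examples",
# not hand-written if/then rules: the table is built from the data.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most frequently observed next word, or None."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))     # -> "model" (its most common follower)
print(predict_next("learns"))  # -> "patterns"
```

Notice that the predictor never "knows" anything; it just emits the most probable continuation it saw, which is exactly the "probable language, not facts" distinction from the episode.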
That's why they can write emails, summarize documents, generate code, and answer questions. They don't know facts the way a person does; they generate probable language. And that distinction honestly matters.

Now, here's another acronym entering the chat, and that's RAG. RAG stands for Retrieval-Augmented Generation. Plain version: instead of letting the model rely only on what it learned during training, you let it fetch relevant documents first. Then it generates an answer using those documents. So instead of guessing from memory, it checks a notebook. That makes it better grounded in specific data. If a company says "we built an AI assistant for our internal policies," there's a strong chance they built an LLM with RAG. And that's very different from inventing a new model from scratch.

Now let's add another term that's getting thrown around, and that's agentic AI. This is where things feel futuristic. An agent is a system that doesn't just respond, it acts. It might look something up, decide what tool to use, take an action, evaluate the result, and then try again. Agentic systems combine models with tools and memory. It's less "answer this question" and more "complete this task." That's powerful, and also, as you can assume, a little bit riskier. Because once AI systems can take action, permissions become the real story.

Now, let's take a step back. Machine learning, deep learning, LLMs, RAG, agents: all of that gets labeled AI in marketing copy. But they are not interchangeable. And this is where confusion creates noise. When someone says "AI can do X," your first question should be: well, which kind? Prediction model? Language model? RAG system? Autonomous agent? Automation script with a new label? The word AI hides the architecture, and architecture determines the risk. A fraud detection model making probability estimates is very different from an agent allowed to move money.
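The RAG pattern described above, fetch relevant documents first, then generate from them, is mostly plumbing. Here is a minimal sketch of that plumbing: the policy documents, the question, and the word-overlap scoring are all invented for illustration, and a real system would use embedding similarity for retrieval and an actual LLM for the final answer.

```python
# Invented internal "policy" documents -- the notebook the model checks.
documents = {
    "vpn-policy": "All remote access requires the corporate VPN.",
    "password-policy": "Passwords must be rotated every 90 days.",
    "travel-policy": "Book travel through the approved portal.",
}

def retrieve(question, docs, k=1):
    """Rank documents by word overlap with the question (a crude
    stand-in for embedding similarity) and return the top k texts."""
    q_words = set(question.lower().split())
    scored = sorted(
        docs.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:k]]

def build_prompt(question, docs):
    """The 'augmented' step: the model is told to answer from the
    fetched context, not from its training memory alone."""
    context = "\n".join(retrieve(question, docs))
    return f"Answer using only this context:\n{context}\n\nQ: {question}"

print(build_prompt("How often must passwords be rotated?", documents))
```

The point of the sketch: the "intelligence" didn't change; what changed is that the answer is grounded in a specific document you chose, which is why an "AI assistant for internal policies" is usually an LLM plus RAG, not a new model.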
An internal RAG assistant reading policy documents is very different from a generative model trained on public internet data. Same umbrella, very different blast radius.

So why is everything being called AI right now? Again, in Rich Greene's opinion: because it signals innovation, because investors understand it, because customers click it, and because the line between automation and intelligence is blurry enough to stretch. But here's the stabilizing thought. Most AI systems today are not autonomous superintelligence. They are still pattern engines connected to workflows. When you see a headline, filter it like this: Is this prediction? Is this generation? Is this retrieval? Is this action? Or is this branding? That mental checklist cuts through most of the hype.

Now let's ground this with a reality check. Machine learning has been around for decades. Deep learning matured in the 2010s. LLMs became usable at scale semi-recently. RAG is an architectural pattern, and agents are orchestration layers around models. The speed feels new; the math mostly isn't.

And here's the most important takeaway. AI is not one thing that gets better. It's a stack of techniques evolving at different speeds. Some are very good at narrow tasks, some are impressive at language, some are brittle outside their training data, and some become risky when overconnected. So when someone says "AI will replace X," the better question is: which technique, under what constraints, connected to what systems? That's how you stay grounded: not by memorizing every single acronym, but by knowing the categories.

So, a quick recap for us: AI is an umbrella. Machine learning learns patterns from data. Deep learning is a more complex form of that. LLMs generate language. RAG connects models to specific documents. Agents add action and autonomy. Same buzzwords, very different behaviors. And once you separate the pieces, the noise gets a little quieter.
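The agent loop the episode describes (decide what tool to use, act, evaluate, try again) and the "permissions become the real story" point can be sketched together. Everything here is invented for illustration: the tools, the tasks, and the allowlist are hypothetical, and a real agent would let a model choose the tool rather than hard-coding it.

```python
# Hypothetical tool allowlist -- note that "move_money" is
# deliberately absent. Permissions, not intelligence, bound the risk.
ALLOWED_TOOLS = {"lookup"}

def lookup(query):
    """A stand-in tool: a tiny invented knowledge base."""
    return {"vpn": "Use the corporate VPN for remote access."}.get(query)

def run_agent(task, max_steps=3):
    """Decide -> permission check -> act -> evaluate -> retry."""
    for step in range(max_steps):
        tool = "lookup"  # a real agent would pick this with a model
        if tool not in ALLOWED_TOOLS:
            return "refused: tool not permitted"
        result = lookup(task)   # act
        if result:              # evaluate: did the action succeed?
            return result
    return "gave up"            # bounded retries, then stop

print(run_agent("vpn"))      # succeeds via the permitted tool
print(run_agent("payroll"))  # retries, then gives up safely
```

The design choice worth noticing is that the loop's blast radius is set entirely by `ALLOWED_TOOLS` and `max_steps`, which is why "which actions is it allowed to take?" is the first question to ask about any agentic system.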
If there's a concept you understand halfway and want the rest of it, send it. Email me, DM me, or drop it in the comments. I honestly wish we had owls like in Harry Potter. I will read them all, and I will get back to you. If this episode helped, share it with someone who'd actually benefit. This has been Plaintext with Rich. Ten minutes or less, one topic, no panic. I'll see you next time.