Privacy Please
Welcome to "Privacy Please," a podcast for anyone who wants to know more about data privacy and security. Join your hosts Cam and Gabe as they talk to experts, academics, authors, and activists to break down complex privacy topics in a way that's easy to understand.
In today's connected world, our personal information is constantly being collected, analyzed, and sometimes exploited. We believe everyone has a right to understand how their data is being used and what they can do to protect their privacy.
Please subscribe and help us reach more people!
This podcast is part of The Problem Lounge network — conversations about the problems shaping our world, from digital privacy to everyday life.
S7, E265 - Don’t Trust, Verify: Even Your Update Button Might Be Lying
Autonomy sounds like progress until the system turns your choices against you. We dive into how AI agents change the risk equation, why “don’t trust, verify” now beats “trust but verify,” and what to do when the update button itself becomes the attack vector.
We start with the Ivy League leak tied to Harvard and UPenn, where attackers exposed admissions hold notes that map influence rather than credit cards. That context turns routine records into leverage for extortion, social pressure, and geopolitical targeting. From there, we trace the surge of agentic AI in the workplace as employees paste code, legal docs, and sensitive files into chat interfaces. The real accelerant is MCP, the model context protocol that standardizes connections across Google Drive, Slack, databases, and more. Like USB for AI, MCP makes integration simple and powerful, but a single prompt injection can pivot across everything the agent can reach.
Security gets messier with supply chain compromise. A China‑nexus campaign allegedly hijacked the Notepad++ update mechanism, handing a bespoke backdoor to developers who did the right thing. We unpack how to keep patching while reducing risk: signed updates, independent checksum checks, tight egress policies for updaters, and strong monitoring around update flows. On the policy front, Rhode Island’s vendor transparency rule forces companies to name who buys data. It is a nutrition label for privacy, and it lets users and watchdogs finally connect the dots between friendly interfaces and aggressive brokers.
We close with concrete defenses that raise the floor. Move high‑value accounts to FIDO2 hardware keys or platform passkeys to block phishing at the protocol level. Scope agent permissions narrowly, isolate MCP connectors by function, and require explicit approvals for sensitive actions. Log everything an agent touches and review those trails. Autonomy should be earned, minimal, and observable. If AI is going to act on your behalf, it must prove itself at every step.
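A minimal sketch of what "scoped, approved, and logged" agent actions could look like in practice. This is an illustration, not any particular product's API; the agent names, scopes, and actions are hypothetical:

```python
from datetime import datetime, timezone

# Hypothetical scopes: each agent only gets the tools it was explicitly granted.
AGENT_SCOPES = {"summarizer-bot": {"drive.read", "slack.read"}}
# Actions that should always require an explicit human approval.
SENSITIVE = {"drive.delete", "slack.post", "db.write"}
audit_log = []  # every call an agent makes gets recorded for review

def invoke(agent: str, action: str, approved: bool = False) -> bool:
    """Allow an agent action only if it is in scope and, when sensitive, approved."""
    allowed = action in AGENT_SCOPES.get(agent, set())
    if action in SENSITIVE and not approved:
        allowed = False
    audit_log.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent, "action": action, "allowed": allowed,
    })
    return allowed

print(invoke("summarizer-bot", "drive.read"))  # True: explicitly granted
print(invoke("summarizer-bot", "db.write"))    # False: out of scope and sensitive
```

The audit trail is the point: autonomy that is earned, minimal, and observable means every agent call leaves a record a human can review.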
If this conversation helps you think differently about agents, influence mapping, and how to lock down your stack, subscribe, share with a teammate, and leave a quick review telling us the one control you plan to implement this week.
Alrighty then, ladies and gentlemen. Welcome back to another episode of Privacy Please. Brought to you by the Problem Lounge. I'm your host, Cameron Ivy, alongside Mr. Gabe Gums. How are we doing, Gabe? Doing well. We're thawed out. Thawn out? We are thawing out. It's getting warmer here. Thank goodness.
SPEAKER_01:Yeah.
SPEAKER_02:Some are still thawing. Not un-thawed yet, I should say, actually. Well, some are thawed too, I guess. Well, you know.
SPEAKER_00:We're getting there though. We don't get uh winter very long. But um, before we get into everything, uh, just want to remind everyone listening to head over to theproblemlounge.com if you haven't yet. We've got fresh blogs and newsletters. Please sign up for those. We'd love to have you guys on board with it. And yeah, just wanted to make that reminder. Let's dive into this. So, Gabe, for those listening on Privacy Please, if you haven't checked out the Problem Lounge yet, we have two new episodes that are already out. Go check them out, go subscribe, jump on the train. Gabe's been talking about, we've been talking about agency over there and our ability to choose. So today we're gonna go into a series on Privacy Please where we're talking about the architecture of autonomy. We've stopped using tools, we've started using agents, and when you give AI agency to act for you, you're basically handing over keys that you might not get back. So you've got to be careful about that. Gabe, you got anything to add there?
SPEAKER_02:Yeah, a lot to add there. There is there's a few problems on that front from a hacker's perspective, such as myself, depending on who you're giving it to, you're also giving attackers the ability to possibly prompt inject those platforms and extract things like your access from those platforms. So, you know, just like you wouldn't, presumably you wouldn't just like put your password into some random like forum on the interwebs and leave it there. Like you shouldn't, you shouldn't do that also with uh the keys to your infrastructure.
SPEAKER_00:Right, exactly. So let's go ahead and dive into some of the big topics from the past couple weeks. Um, so we have the Ivy League social graph, Harvard and UPenn. On February 5th, the group ShinyHunters leaked records from Harvard and UPenn. So this wasn't uh a credit card heist; it exposed admissions holds. These are internal notes that halt fundraising because a donor's child is currently applying for admission. So, Gabe, they didn't really just steal data, they stole a map of influence, so to speak. Uh, they found out which billionaire's kid is trying to get in and whose donation is on hold because of it. So it's a roadmap for high-level extortion, like you were just mentioning. Let's kind of talk about that a little bit. When you hear that, what are your first thoughts?
SPEAKER_02:Well, I've got many. The first one is, you know, when I see these types of attacks, they don't feel accidental, they feel targeted, right? Like going after just the data associated with the admissions holds. Well, one of two things. Maybe that data itself was what happened to be exposed, and the attackers were opportunistic. If we're lucky, that's what happened. If we're not so lucky, they were intentionally targeting that particular data set for some purpose. And I can speculate as to the purposes. It could easily be blackmail, it could be, you know, just good old-fashioned scamming. For all we know, it's a bit more nefarious than that. It could be nation states, you know, pulling information on some of our country's wealthiest folks. And for what it's worth, when you talk about Ivy League schools and the students applying, they represent a very diverse group of international students too, whose parents range from having little to no influence at all, they're just good old-fashioned parents and students, to extremely wealthy and influential folks across the globe, right? Targeting UPenn and Harvard, those are the mother lodes of influence peddling, so to speak. I don't use that word in a negative way, um, maybe it has those connotations, but I do not mean it to be so. The second thing that comes to mind here is, you know, this was a phishing attack. This one was a voice phishing attack. We talked about our predictions for this year and where we saw AI playing a role in increasing attackers' success rates, and this is a great example of that: leveraging AI for the purpose of these voice phishing attacks, because that's what got them in, a voice phishing attack. It might be the first most notable public example of that this year.
It's possible there were others, but you know, hell, it's only February 9th, and that prediction is already coming to fruition. So that doesn't bode well for the rest of '26, so to speak, and beyond. We're really going to have to get super diligent, all of us, about what we trust as information. "Believe nothing of what you hear, and only half of what you see" used to be the old mantra. Right. Now I think it needs to be updated to: believe nothing, period. Full stop. Or revert back to "trust but verify." But I think even that needs to be altered to: don't trust, verify.
SPEAKER_00:Yeah, that's a good point. I don't know how many times we've mentioned this on the show, but humans will always be the weakest link when it comes to these types of things. Whether the attack is AI or human, whether it's voice, text, or email. Don't trust and verify. Is that what it was? That's it. Don't trust, verify. That's it.
SPEAKER_01:Okay.
SPEAKER_00:I like it.
SPEAKER_02:That's where we're at.
SPEAKER_00:Well, let's move on to the second headline. Um, agentic AI leaks. So reports this week show a massive surge in data transfers to AI apps. It's not surprising, totaling over 18,000 terabytes this year already. People are using AI agents everywhere. Companies, individuals, it doesn't matter, everybody's using them to write code, summarize legal documents, and even as digital therapists. And we've had episodes on this in the past. So, Gabe, we're seeing millions of policy violations where employees are pasting source code and medical records into these agents. We gave the AI the agency to help us, but we didn't realize we were feeding it the company's crown jewels. What um, what are your thoughts on this one? This is pretty heavy.
SPEAKER_02:So I am surprised, and I shouldn't be, that most folks are still not really grokking the idea that when you put information into an LLM, you are giving it to that LLM. I think folks think they're using a service and it's contained. And that's part of the challenge. But in these larger corporate infrastructures, you know, there's a lot more happening than just copying and pasting a few lines of code. Although that is problematic in and of itself, and also not a new challenge. If you go onto any of the coding websites, or places like Stack Overflow where developers seek help from other developers, you will find lots of developers posting examples of code trying to get help with it. And in some of those cases, they've pasted a little more than they should have. But where this starts getting dangerous is with what's known as MCP, right? Model context protocol. It's essentially a universal plugin system for AIs. So think of it like USB. Before USB existed, every device had its own wired cable and connector, right? You had printers, cameras, keyboards, and all those different things, and you would connect them all to your machine differently. A USB hard drive leverages that now-standardized USB interface, but before, you would plug in a printer through, literally, a printer port. You would plug in a keyboard through a keyboard port. You would plug in a mouse through a mouse port, right? Like those were all separate things. And then USB came along, and it standardized it: universal serial bus. Well, now you can plug anything into this one port. And that developed over time. Then you've got, you know, USB-C and FireWire; you have these standardized things. So MCP, model context protocol, does the same thing, but for AI apps.
So right now, if you want an AI like ChatGPT or Claude to connect to your Google Drive or your Slack or a database or some other tool, every single connection has to be built custom from scratch. They all have to be. Boo. Then MCP came along. MCP creates a standard way for AI to talk to other software. So instead of building a unique integration every time, a developer just builds one MCP connection, and then any AI that can speak MCP can use it. So here's a way to picture it. Without MCP, you build a special adapter for every single AI-to-tool connection. I want my AI to plug into this, I want it to plug into that; you build one for each of those. So if you've got 50 tools and 10 different AI apps, that's 500 custom adapters. No one wants that in their life. With MCP, you build one MCP plugin per tool and one MCP port per AI. Now they all just work together. 50 tools, 10 AI apps, only 60 things to build. 60 versus 500. But here's why that matters. It means that AI can actually do stuff in the real world more easily. It can pull files, it can search databases, it can send messages, it can update spreadsheets, instead of just being this chatbot that only knows what you paste into it. Anthropic, for example, the company that made Claude, created and open-sourced MCP, meaning anyone can use that standard for free. So they're the ones that created that MCP standard. The bet is that if everyone adopts it, AI tools become way more powerful, faster. Basically, the same way that when USB-C was adopted, the adoption of all the different peripherals you can plug in got a lot quicker. But they expose so much more data. Once you build that MCP connection, it's no longer just what you copy and paste in there. These agentic workflows rely on MCP, that model context protocol, to get to information, to do things on your behalf, the very agency we're talking about.
But its very ability to plug into lots of things means if I manage to prompt inject your session, all the things that your MCP is connected to are now at risk of my getting access to that. It just exponentially increases the attack surface. Again, yet another great place where we're going to see AI from a security and privacy perspective. We're going to get a lot more black eyes before this gets better.
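The integration math behind the "USB for AI" analogy is easy to sketch; the 50-tools, 10-apps figures below are the ones from the conversation:

```python
def custom_adapters(tools: int, ai_apps: int) -> int:
    # Without a shared protocol, every (tool, AI app) pairing
    # needs its own bespoke integration.
    return tools * ai_apps

def mcp_connectors(tools: int, ai_apps: int) -> int:
    # With MCP, each tool ships one server and each AI app
    # ships one client; everything then interoperates.
    return tools + ai_apps

print(custom_adapters(50, 10))  # 500 bespoke adapters
print(mcp_connectors(50, 10))   # 60 connectors total
```

The flip side, as discussed above, is that those 60 connectors also define the blast radius of a single prompt injection: everything the agent can reach, the attacker can reach.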
SPEAKER_00:So do you think we're going to see more of this happen this year?
SPEAKER_02:Oh, a lot more of this will happen this year. MCPs were being pushed like cotton candy at a carnival last year. Like, in 2025 they were everywhere.
SPEAKER_00:Yeah, and now that companies have AI implemented, you know that they're a huge target. Like all these AI, well, not all the AI companies, but let's just say one person gets breached, or the company gets breached, and they're able to get into your Slack or your email. I mean, the amount of information that people are putting into these AI um, LLMs, is a lot more data than probably any company has had before, I would imagine, don't you think? Because people are using them with all their tools to try to streamline and be more efficient. And I'm sure they're giving them more than they probably need to. Sure. So, yeah, that's pretty serious. Let's go on to the next headline. So, the update betrayal with Notepad Plus Plus. A China-nexus group called Lotus Blossom hijacked the update mechanism for Notepad Plus Plus for six months. Developers who did the responsible thing and clicked check for updates were served a custom backdoor called uh Chrysalis. So this is the ultimate hit to your agency, Gabe. You chose to be secure, you chose to update, and the system used that choice to infect you. Um, if you can't trust the update button, what's left to trust?
SPEAKER_02:It's not wrong. There's not a lot left to trust. There is an answer, though, but the problem is it's a messy answer. So this is the classic supply chain attack, right? Like, I want to get to these end users, these developers, and instead of attacking them directly, I will attack the tools that they use, plant backdoors into those things, and then get to them. And so you would expect that in the product that you use, when you click the update button, it goes and grabs the right update from the right place and installs it. What you don't expect is, like, okay, I saw an update file on some torrent site, so I downloaded it myself and then updated my Notepad Plus Plus. That would be naughty, but that's not what these developers did. They trusted the very system that they are supposed to trust to do that. And when those bad guys attacked the uh update mechanism, and it's not the first time we've seen something similar, but this one is uh nefarious in that, well, lots of developers use Notepad Plus Plus, and it's using what was a trusted communication mechanism. Your answer is you can go back to the old school: downloading those updates manually from your update provider. In this case, it's just like Notepad Plus Plus. You can go to their website, you can find the update, you can check the checksum of that update against what you downloaded to make sure it's the thing that Notepad Plus Plus said they were actually sending. The problem is that also gets compromised along the way in this attack scenario. And we've also seen those types of protection mechanisms be circumvented as well. Right.
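The manual check Gabe describes, download the file, hash it, and compare against the digest the vendor publishes, can be sketched in a few lines. The filename and digest in the usage comment are placeholders, not real Notepad++ values:

```python
import hashlib
import hmac

def sha256_of(path: str, chunk_size: int = 1 << 16) -> str:
    """Hash a downloaded file in chunks so large installers fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_update(path: str, published_digest: str) -> bool:
    # Compare against the digest the vendor publishes, ideally fetched
    # over a different channel than the download itself.
    return hmac.compare_digest(sha256_of(path), published_digest.lower())

# Hypothetical usage; this filename and digest are placeholders:
# verify_update("npp.installer.x64.exe", "3f5a...")
```

`hmac.compare_digest` does a constant-time comparison, which is good hygiene even when timing is not the main threat. And as noted above, this only helps if the published digest itself comes from an uncompromised channel.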
SPEAKER_01:Yeah, it's scary out there. Don't trust. Verify. Don't trust at all. Trademark, trademark: Problem Lounge, Privacy Please.
SPEAKER_00:Um, all right, so let's talk about some of the things you can do that could be helpful. Um, talking about the Rhode Island audit. So if you're not familiar, Rhode Island's new law, which took effect on January 1st, requires companies to name the specific vendors they sell data to. The advice here: go to your favorite app's privacy policy and look at the Rhode Island notice. It's the first time we've been seeing name-names transparency. It's eye-opening to see which random data broker in a strip mall is buying your history, for example. So anything you want to add there, Gabe, on that one?
SPEAKER_02:Over on the Problem Lounge, we started that series on, you know, the price you pay for your privacy. And through that series, we are discussing how we ourselves, you and I, have taken this podcast network through the process of updating or expanding our tech stack. Because we had an extremely light tech stack. We didn't really, truly have a tech stack. We published a podcast and that was that. Now there's a bit more to it. We've got uh some components that help us push the newsletter. People can go to the site and sign up for that newsletter. They can contact us there. There's a website, so, you know, you can track people and things on websites. But if you go to theproblemlounge.com, scroll all the way down to the footer, and find our privacy policy page. If you click on that privacy policy page, we also list the specific names of the vendors that we sell data to. There's no one on that list, by the way. You know, you don't need to actually go visit it, but there's no one on that list. But on that page, we describe exactly all of this: the solutions that we use, what data we collect, how we use it, et cetera. And so we are obviously committed to never, ever violating the trust and privacy of our listeners or sponsors. That is core to our mission as privacy advocates. I think this is interesting. I think it's a great move. I think it requires some follow-up action on the consumer side. We're gonna need some advocacy groups to help aggregate this information and call fouls where they're at, right? Like calling balls and strikes. Because I don't expect your average person to go to a website and find out what random data broker has the data, or what it means, or who that broker is. And so we're gonna have to help connect those dots. Because here's what's going to happen: those folks are gonna obfuscate who they are.
They'll just rename themselves to something, you know, like Happy Go Lucky Trust Guys LLC. And it's like, oh, look, they sell my data to Happy Go Lucky Trust Guys. Well, that's fine. You know, so I think this is one of those regulatory moves that I am in favor of. And I'm not a huge fan of regulation, you know, kind of dictating these types of behaviors. I think it's frequently the wrong move, but all we're asking for is transparency. Yeah. It's just transparency. Just tell us who you're selling the data to. And then consumers get to choose. Like, you know what? I don't want to do business with that. I don't want to tune into that podcast. I don't want to visit their website, because they collect too much data, because they sell that data. I promise you the following. I know the Problem Lounge Network is obviously your favorite network, and the Problem Lounge and uh Privacy Please are your favorite shows. You can trust that we don't. But there's a very high likelihood that almost all of the other podcasts you listen to are sharing some data somehow with others, especially demographic data. Which is why, for example, we lay this out on our privacy page. We don't use things like Google Analytics. Because even if we don't sell the data, it doesn't matter: Google Analytics has it, and they sell it to others. We don't use any of those types of tools. Forget slippery slope, it's a freaking 90-degree angle, brother. Yeah.
SPEAKER_00:From top to the bottom, straight down. This reminds me of the food industry, where you have that classic advertising on the front of something that says it's organic, or it's fresh, or it's low-something, blah blah blah. But turn it around. Look at the ingredients on the back. Don't trust, verify. Love the food, actually.
SPEAKER_02:They added 30 more grams of sugar.
SPEAKER_00:Exactly. It's like, the first ingredient is modified food starch. Oh, so it was modified in a factory. It's not even real food. Like, you need to pay attention to that kind of stuff. Have that same sense. If you care about your body and what you're putting in it, it should be the same way with your privacy and the things that you're uh allowing others to take from you.
SPEAKER_02:That's why this audit is useful, because that's very hard for lots of people to do. And I will tell you something, as someone who reads labels myself, and you do too, we've talked about this offline. It's difficult. And it's not difficult because you don't understand what's in it, it's difficult because they change it all the time. There's a lot of lies built into it. There's things that they're allowed to say and not say, right? Like, less than 1% of certain things they don't even have to list, and whatnot. It's like, I want to know if there's less than 1% of poo in my pool. The percentage of poo I like in my pool is zero. Like, I like zero poo in my pool. Not "less than one percent, and I don't have to uh tell you about it." But it's difficult. Some of my favorite brands over the years have been acquired. Food brands, they've been acquired. And then six months later, I pick the package back up and look at the back again, just to look, because I haven't looked in a while. And it's like, oh crap. The first ingredients used to be water and oats, and now it's water and palm oil. Yep. I didn't sign up for all the palm oil. I just want the water and the oats. Exactly. That's a good point. You got it, Cam?
SPEAKER_00:That was the first thing that I thought of when you were talking about it. I just felt like, for those who needed it, a simple analogy. But um, I digress here. Okay, so another good tip here: move to hardware keys. So the Harvard and UPenn hacks happened because someone was tricked into a password reset. So the advice would be, and I want to get your take on this, Gabe, if you feel like this is the best advice, but switch to uh FIDO2 security keys, so YubiKeys. Yeah, they are practically impossible to phish, because the key won't talk to a fake website, even if a voice deepfake tells you to do it. That's huge, because AI is gonna be big with that, the voice stuff, this year.
SPEAKER_02:Yeah. So we've talked about YubiKeys in some past episodes, and in the show notes for this show, we'll link to that last episode and some of that stuff too. But let's just talk about what it is. So FIDO keys, FIDO, it stands for Fast Identity Online. They're physical security devices that you use to prove who you are when you're logging into stuff. So think of it like a house key, but for your online world. Instead of typing in a password, which can be stolen, you plug in or you tap this little device and it handles authentication for you. How it works is actually pretty straightforward. When you set up one of these FIDO keys with a website, say your email, the key and the website do a handshake. And they do that using public-key cryptography. Your key generates a unique pair: a private key that never leaves your device, and a public key the website stores. So now that's two things. You have something private that only you have, and the website has this other, public thing. When you log in, the website essentially challenges you and says, hey, I have this public key, prove you hold the matching private key. The key signs the challenge, and that proves you're the rightful owner of that secret. And so no passwords are ever transmitted that can be stolen, hijacked, compromised, reset, et cetera. They're just simply checking against that thing. And because it's a physical thing that you have, someone else can't just, you know, jack it. So what do they actually look like? Usually small little USB drives or some kind of keychain thing. The YubiKey is the most well-known. Happy to plug them. I'm a user. A few years back, I literally bought multiple YubiKeys for different family members. I had one family member who was dabbling in some crypto, and I decided to prank him one day and make him think he was getting scammed out of all of his crypto, and he was freaking out.
SPEAKER_01:He's like, oh my god, I think all my crypto's gone.
SPEAKER_02:I was like, chill. I was just messing with you. But I bought him one to make up for that prank. I bought him a YubiKey. Like, here's what you're gonna do. You're gonna use this key, and this is how you're gonna make sure you keep your wallet safe. An extremely non-technical older gentleman who just wants to dabble in a little crypto, just able to pick up a YubiKey and, you know, just get at it. But we talked about this last year also. The decline of passwords, it's happening, and it's gonna happen more, and it needs to continue to happen. If you're using a modern laptop or desktop, many of them already have these chips built into the hardware that allow you to create passkeys. You've probably seen this: create a passkey for blank, create a passkey for blank. Those are similar. Those passkeys get stored in hardware, and then they just perform this challenge, so you don't have to exchange passwords any longer. And this is great because they're phishing-proof. So even if someone builds a perfectly fake Google login page and tricks you into logging into that perfectly fake Google login page, the FIDO key won't authenticate, because it checks the actual website domain cryptographically, and it knows it ain't Google. You can't fool the FIDO key, even if your eyes were fooled by it, like, it's a perfect replica. You're gonna put that FIDO key up and it's gonna be like, no. Nothing. There's nothing to steal. There's no password stored anywhere to have. The secret itself lives in the physical device in your hand. And they're also easy to use. You can literally tap it to your iPhone, to your Samsung device, tap it to your laptop. It's that simple. So I would definitely advocate for more use of those. There is one catch, maybe two catches. You have to have it with you. You must, must, must, must have the key with you.
So if you need a login and it requires that key, you gotta keep the key with you. That's not that bad. You can deal with keeping it on the keychain. I'll tell you something else also worth doing, because I did this for myself and for those family members I bought them for. I actually bought two for everyone. So we have a primary, and a backup that stays locked away in a box somewhere else. So if you lose the primary, and this has happened to me, I don't know, two years ago. I don't know where I lost the primary one, but lost it. Lost it. But I have a backup, luckily. And again, it being a physical backup, the only way to steal it, well, you'd have to hit me over the head with a wrench. That's really the only way you're gonna steal that key. Don't give it away, Gabe.
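The origin binding Gabe describes can be sketched as a toy. Real FIDO2/WebAuthn uses a per-site asymmetric key pair and a signature over the challenge; this stdlib-only sketch stands in an HMAC for the signature, but the point is the same: the response is computed over the domain the browser actually sees, so a look-alike phishing site gets a useless answer:

```python
import hashlib
import hmac
import secrets

class ToyAuthenticator:
    """Toy stand-in for a hardware key; the secret never leaves the object."""

    def __init__(self) -> None:
        self._secret = secrets.token_bytes(32)

    def respond(self, origin: str, challenge: bytes) -> bytes:
        # The response mixes in the origin, so a phishing domain
        # produces a different (and thus invalid) response.
        return hmac.new(self._secret, origin.encode() + challenge,
                        hashlib.sha256).digest()

key = ToyAuthenticator()
challenge = secrets.token_bytes(16)
real = key.respond("accounts.google.com", challenge)
fake = key.respond("accounts.goog1e.com", challenge)  # look-alike domain
print(real != fake)  # True: the fake site cannot obtain a valid response
```

The secret never crosses the wire, which is also why there is no password to phish, replay, or reset.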
SPEAKER_00:Don't let them know what you want them to hit you with. My weakness.
SPEAKER_02:No monkey wrench to the head.
SPEAKER_00:Get to what you're looking for. That's great. Great tips, great stories too. Thanks for sharing that. I think that's pretty much it for today's episode. This is a good deep dive into some of that stuff around autonomy. Um, it's the start of this new series, Gabe. I would say that um agency is a big deal. So if you don't know too much about agency and you want to learn a little bit more, we've been talking about it on the Problem Lounge again. Go check that out. It's on Spotify, iTunes; subscribe. We're gonna have episodes from that and Privacy Please weekly. So join the ride with us. Go over to theproblemlounge.com. Um, again, we have blogs, newsletters, sign up. If you have guests or anyone that you want to come on the show, just email us and we'll get you scheduled and have some conversations. But other than that, man, I hope everyone has a great week. And Gabe, thanks uh thanks again for the time, man. As always, be safe out there in this digital world. Be safe, choose love over hate.
SPEAKER_01:You know what I'm saying? All right.
SPEAKER_00:Don't trust. Verify. Don't trust. Verify. All right, we'll see you guys next time.