The Lars Larson Show Interviews
Lars Larson has been asking the hard questions for decades and he's not stopping now. Every weekday, Lars hosts two of the most listened-to talk radio programs in the country.
From noon to 3 pm PT, he anchors a Northwest-focused program heard across more than 100 affiliates in Washington and Oregon, covering the stories and policies hitting closest to home.
Then, from 3 to 6 pm PT, he takes it national with a syndicated program reaching listeners from coast to coast.
No talking points. No agenda-driven nonsense. Just the news, the debates, and the conversations that actually move the needle. Subscribe and find out why millions of listeners keep coming back.
Matt Rosen - This AI Was Too Dangerous To Release
An AI model so powerful its creators refused to release it is now being used behind closed doors and raising serious questions about control, access, and risk. If it can break into systems on its own, what happens if it falls into the wrong hands?
Matt Rosen, CEO of Allata, joins the program to break down what this technology can actually do and whether the public should be concerned about who’s using it.
SPEAKER_01: Welcome back to the Lars Larson Show. It's a pleasure to be with you. I want to tell you about AI, and I've always made my position on AI very clear: I'm very suspicious of it. I understand it has wonderful implications when it comes to medical care and positive uses. The problem is this sounds like a technology so dangerous that some of its own creators are saying it's too dangerous to use. And I found this great story featuring Matt Rosen. Matt is the CEO and founder of Allata. And if I'm saying that wrong, Matt, please correct me. But Matt, welcome to the show.
SPEAKER_00: No, you got it right, Lars.
SPEAKER_01: Okay.
SPEAKER_00: Thanks for having me on the show today.
SPEAKER_01: A-L-L-A-T-A, just in case people need to spell it, but "Aleta" is the way you pronounce it. Did Anthropic just hand one of its most dangerous hacking tools ever directly to the U.S. government?
SPEAKER_00: Well, not only the U.S. government, but about 50 large corporations, as part of something they call Project Glasswing. And so, you know, I've been reading up a lot on this. Obviously, I don't have access to it. They're claiming that it has found bugs, like security bugs, in software that they thought had been hardened for 30 years. Could they be overplaying this a little bit? I mean, they have been known to create a lot of marketing hype. And what they're creating is scarcity, saying, hey, it's too dangerous, you can't have it, so all of a sudden people want it. So I do think this is part marketing, but I do think these engines, these AI models, are getting to a point where they are super powerful, and there do need to be some controls put in place and more experimentation done with them before they get released to the wild. Because I think the fear is that hackers could use them to exploit flaws in critical infrastructure and different software programs, and they want to make sure that doesn't happen.
SPEAKER_01: Well, Matt, it almost sounds not like science fiction, but like things we've all seen depicted in fiction, where you have a special little box and you put it up to the safe and it figures out the combination. Or you have another box that allows you to open a door that would only open with a passkey. This Mythos thing sounds like it would autonomously break into computerized systems. If that kind of tool is out there, then nothing's secure, is it?
SPEAKER_00: Well, there are a lot of security experts out there that spend their entire careers hardening software and putting up firewalls. That's not an area we play in, but when we write code, we do make sure it is built securely. And there is a whole group of professionals, cybersecurity professionals and firms, that spend all day, every day thinking about how a hacker's gonna attack. Now, Mythos isn't gonna just magically hack into the Pentagon or take down a plane or fire a nuclear missile or something like that. Someone has to tell it to do something. And so I think a big part of this is making sure that these tools don't fall into bad actors' hands. The problem is that when it's publicly available, anyone can access it, anyone can use it. So are these tools gonna be used for good or for bad? What I've found with every technology revolution is that nothing's ever as good as people think it's gonna be, and it's never as bad as people think it's gonna be. But these tools definitely do require some caution. It truly did find bugs that nobody had found, and it's finding vulnerabilities in software. It does need to be handled carefully, and the software publishers need a chance to run it to try and shore up some of the holes in their software. But what are you gonna do? Stop developing software and wait for this thing to find all the bugs from the last 50 years? That's not reality. So at some point, they're gonna have to release this to the wild with some warnings, because I don't think they're gonna keep it locked up forever. I think what they're doing is creating a lot of hype around it, and everyone's gonna want it now. I want to check it out.
SPEAKER_01: I'm talking to Matt Rosen, and Matt is CEO and founder of Allata. Matt, would you mind, since I begged your time today, telling my audience what Allata is all about?
SPEAKER_00: Yes, so Allata started as a custom development firm that then got great at data and machine learning, which was an early part of AI. Now what we do is use those skill sets to really help organizations adopt AI at an enterprise level. What that means is that a lot of these tools are great for individual use, but a lot of the organizations we work with have a hard time getting past individuals being really good with them to really doing things well as an organization: building AI into their processes and also having an AI that works within their walls, keeping their data, documents, and systems secure. So we deploy a platform into their environments that they own, and then we help them do all sorts of great things to empower salespeople and marketing, help with contracts, help with their supply chain, you name it. We've got everyone from city governments to large universities to some of the large companies in the areas we serve starting to deploy our technology, and we're a partner to them in helping them really get the most out of it.
SPEAKER_01: So, Matt, can you trust this thing to tell you the truth? What I wonder about is, if you're using it to determine, well, are there flaws in my system, will the AI necessarily tell you the truth? Or will it tell you a fib? Because these things are kind of renowned for making stuff up, aren't they?
SPEAKER_00: You know, that was a problem with some of the early AIs: they would hallucinate. If it couldn't find the answer, it would make up an answer. In fact, there was a funny case where a lawyer used AI, didn't check the cases, and all the cases he submitted were made-up cases. He had to pay a fine, and, you know, he didn't get disbarred, but he got in a lot of trouble; it was really embarrassing. That stuff's not happening that much anymore. These models have gotten a lot better, and a lot of it, I say, is that data is your differentiator. If you give the model access to good data and ask it questions, it will find the answers. I mean, we've proven this within our organization and for some of our clients. Heck, we have it writing our reviews. A human still reviews it, a human still delivers it, but it took the process from 10 hours down to an hour. That's a huge time savings for something most people don't like doing, and the reviews are better. So if you give it the right data, and don't just set it free on the internet where it's searching Reddit and all sorts of other nonsense out there, it will give you good results. It's really that what you feed it determines how good the answers are. If it's looking at good data, not just everything, and you give it a scope of what it needs to look at to give you the answer, it's actually really darn accurate. In fact, I've got a client who's using it to look at radiological scans, and it's like 95% accurate at detecting cancer growth. It's better than some radiologists out there. It's truly amazing what it can do when it's being used in the right way.
SPEAKER_01: Well, we're heading up to a midterm election, and Matt, one of the things that occurred to me was that we've already had bots that go out there and pretend, you know, so you'll find out somebody had what looked like great numbers, and then it turns out the majority of it was bots. Could you literally take an AI model and say, listen, I've got a brand new restaurant, I want people to think it's the best place ever, go out and start writing some reviews for me and put them up so my restaurant will succeed? Would you be able to do that, and camouflage what you're doing so nobody knows it's just an AI pretending to be a customer, or thousands of customers?
SPEAKER_00: You know, most people look at Google reviews or Yelp. They do have some security protocols built in, but there are definitely ways you could use it for something like that. It could definitely be used for misinformation. Heck, there are AI bots calling with the voice of someone. I heard an awful story in an interview I did where it had called as someone's son, saying he'd been in a wreck and to wire money. Oh my god. And luckily the person that got that call knew their son was not there, but it sounded just like him. So that's where it gets kind of scary. That's why I encourage families and corporations to have passwords, things that only they know, because it can clone your voice, it can clone your likeness, and it can create a lot of disinformation.
SPEAKER_01: So I think we're living in the Terminator movie, Matt, aren't we? Aren't we living in the Terminator?
SPEAKER_01: What's wrong with Wolfie? Why is he barking? Wolfie's fine.
SPEAKER_00: So I can't do that, Hal.
SPEAKER_01: I can't let you into the pod bay, Hal. Or Hal won't let you into the pod bay, because that would endanger the mission. That's Matt Rosen. He is the CEO and founder of Allata. We'll be back in just a moment. You've got the Lars Larson Show.