Security Cryptography Whatever

Platform Security Part Deux with Justin Schuh

Security Cryptography Whatever

We did not run out of things to talk about: Chrome vs. Safari vs. Firefox. Rust vs. C++. Bug bounties vs. exploit development. The Peace Corps vs. The Marine Corps.

Transcript:
https://securitycryptographywhatever.com/2021/08/21/platform-security-part-deux-with-justin-schuh/

Find us at:
https://twitter.com/scwpod
https://twitter.com/durumcrustulum
https://twitter.com/tqbf
https://twitter.com/davidcadrian


"Security Cryptography Whatever" is hosted by Deirdre Connolly (@durumcrustulum), Thomas Ptacek (@tqbf), and David Adrian (@davidcadrian)

Justin:

my sister works in urban policy and planning and she's like, we don't need people building bridges. We need people fixing the ones we have.

Deirdre:

Hello, welcome to Security Cryptography Whatever. I'm Deirdre.

David:

I'm David.

Thomas:

I'm Tom, and we have a special guest today. I'll let our guest introduce himself.

Justin:

I'm Justin Schuh. I'm the whatever.

Thomas:

Okay.

Deirdre:

Sure. And today, this is basically platform security, redux? Part deux? Because we put out— this is in response to our first episode, when we were trying to talk about why iOS is always on fire, if it's supposed to be one of the best, the most secure, operating systems in the world, quote unquote. And now we have more people to talk about what makes a secure platform. Are we wrong? Why are we wrong, Justin? Why are we wrong? And where are we not wrong?

Thomas:

Justin, before you tell us why we're wrong, tell us what would make you qualified to tell us that.

Justin:

Gotcha. Yeah. I'll try not to run down my whole history; there are a few key points. So, starting my career, it was the intelligence community: I enlisted in the Marines as a teenager, ended up doing a few years as a civilian at NSA, followed by a few years as a civilian at CIA, and had a lot of exposure to offense, defense, et cetera, from that perspective. Then I switched over to a number of years of security consulting, which is where Tom and I first ran into each other. I coauthored a book, The Art of Software Security Assessment, which some of you might be familiar with, with, frankly, better security people than me. And then for the last 10-plus years, until just earlier this year, I was at Google. I helped build out Chrome security; depending on your definition of founding member, I was a founding member of the Chrome security team, and I ended up being responsible for pretty much all of, or actually all of, Chrome security and counter-abuse by the time I left. That was earlier this year, and now I'm retired.

Thomas:

I feel like we've established now that you're qualified to tell us where we're wrong. I'm trying to get my head around what it was that I said on that first thingy that we recorded, cause my normal M.O. is just to, like, hear a couple words and then randomly reply-guy stuff. But I remember, David, you might remember this better than I do, that we had some thoughts about bug bounties. Like, a perennial thing that comes up in these discussions is: are the major tech firms paying enough for vulnerabilities, and is the problem here that they're just not taking researchers seriously enough? And then we had a lot of thoughts on Rust and whether we can just code our way out of this problem. I'm missing some things now, cause we babbled for like an hour about that.

David:

I think roughly where we landed is that it is possible to have major security-focused initiatives at large companies. We pointed to some of the things that Microsoft did in the past, their root-and-branch, so to speak, efforts, which you described as removing strcpy everywhere. And I think we were a little more skeptical that anyone could buy their way out of this problem, like buy up the entire market, because, you know, markets are markets. Even if Apple decided to spend a billion dollars a year buying all the exploits, they probably couldn't. And then we pulled some numbers out of thin air, well, not entirely out of thin air, based off of empirical results from Alex Gaynor and then our own opinions, saying that rewriting in a memory-safe language would probably remove 80% of the vulnerabilities in a code base. And then we had this kind of notion that iPhones and iPads are easier to secure because iOS is a more constrained operating system as compared to a MacBook.

Deirdre:

With like specifically express capabilities per application and things like that.

David:

And a built-in sandbox.

Deirdre:

Yeah.

Thomas:

I would come out of the gate here saying, like, I have this ironclad conviction that I formed in 15 seconds when the question was put to us in that first podcast, but now it's something I believe forever: the basic idea of a state-sponsored adversary is that they have an unlimited budget. And that goes for the United States, but also, you know, for any country that you can imagine. The dollar amounts that we're talking here, versus the payoff that you get for what an exploit buys you (versus, for instance, having to pay for the health insurance of all the people that would do the human intelligence instead), those numbers are so small relative to the value that there's no number that the market could realistically come to where that would make a dent on the market. So I guess that's like my first assertion. I wonder, Justin, if you'd want to knock that down. You have

Justin:

Yeah.

Thomas:

direct experience on the state-funded adversarial side of this as well. Am I crazy to say that?

Justin:

Okay. So, yeah, this is not where I disagreed with you. The way that I put it is that you really can't buy or bounty your way out of these problems. You're correct: from the perspective of a person actually building the software and trying to secure it, they don't have unlimited money, but from your perspective, they do. What bug bounties are really, really useful for, after you have some level of maturity and process, is this: the problem is that security teams start to develop groupthink. They start to get very, very blind to anything that they don't see day to day. And bug bounties are amazingly helpful at getting you a steady stream of input on the vulnerabilities that you are not seeing, the stuff that your team wouldn't look for. But then the right response to a bug bounty is to go back and, you know, fix your process: find the scary areas of the code that you didn't know were there, fix them up, et cetera. On Chrome, you'd see a trend of reports coming in, and it's like, "okay! It turns out that DOM objects are very, very loosely bound to JavaScript, and people are just getting use-after-frees there." It's like, oh, okay, then you design a way to deeply root them. Then it's like, oh, it turns out you can blow up the render tree stuff in the renderer, and you get stale pointers all over the render tree. "Oh, okay." Then you do a partition-based allocator structure, where you can dispose of the whole render tree at once. And it's like, okay, now you've done that, but it piles on. And that's what it's useful for. That is why I like bug bounties.
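
A toy C++ sketch of the bug pattern described here, with illustrative names only (this is not Blink's actual code): a wrapper holding a raw pointer that a tree mutation frees out from under it, and the partition-style fix of tearing the whole tree's allocations down at once.

```cpp
#include <iostream>
#include <memory>
#include <vector>

// Toy stand-ins for Blink's DOM/render objects; names are illustrative only.
struct Node {
    int id;
    explicit Node(int i) : id(i) {}
};

int main() {
    // Pattern 1: a "JavaScript wrapper" keeps a raw pointer into the tree.
    Node* node = new Node(1);
    Node* wrapper_ref = node;  // loosely bound: no ownership, no lifetime tie
    delete node;               // a tree mutation frees the node...
    // std::cout << wrapper_ref->id;  // ...so using it here is a use-after-free
    (void)wrapper_ref;

    // Pattern 2, the fix sketched above: allocate every node out of one
    // partition, so the whole tree's storage is disposed of at once rather
    // than node by node with stale pointers left behind.
    std::vector<std::unique_ptr<Node>> partition;
    partition.emplace_back(std::make_unique<Node>(2));
    partition.emplace_back(std::make_unique<Node>(3));
    partition.clear();  // "dispose of the whole render tree at once"
    std::cout << "done\n";
}
```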

Deirdre:

Very cool. So it's basically a signal, rather than really trying to use the market dynamics to push one way or the other. It's for you, the defender of the platform or the software project or whatever, to be like: tell me what I'm not seeing. And if you're seeing a ton of little things in one area, you can triage that and prioritize what you have to do to make that better. Whereas if you're seeing, like, one or two really bad things in one or two areas... that's kind of what you're saying. Yeah.

Justin:

Yeah, exactly. It's like you develop a sense of smell for it. These are the one-offs; it's like, okay, this was just a screw-up, but it doesn't seem like a systemic problem. We're glad people found those, but the real use is the "oh wow, this is a systemic issue, we have to solve this".

Deirdre:

Cool. All right. How do you use that to inform how you plan? If you're just getting a constant stream of reports, I could see it being very easy to just kind of run around like a chicken with its head cut off, like, oh damn, there's a stream of this. How do you use that to inform your planning of what to do when, and rearchitect things, if they need rearchitecting?

Justin:

So this gets into the whole discussion of embedded security teams versus what I call the "throw it over the wall" security teams. The extreme case of "throw it over the wall" security teams is external security consultants, where you hire someone to come in and do an audit, you know, every six months or whatever. And I did that for years. I did it in the federal government, I did it as a consultant. I do not like it. I learned that it just doesn't tend to work as well, because the questions that you're asking about prioritizing and all that, you're not able to answer, right? The people with the expertise aren't part of those architectural decisions. And so from very early on, as we were spinning up the Chrome security team, our intent was to have a deeply embedded security team, where large chunks of Chrome code were, or are, owned by Chrome security. For many years I was one of the eight root owners of the Chromium repository, so I was one of the people who could approve any change in it.

Deirdre:

okay.

Justin:

There are teams in Chrome security that are deeper experts on some of the core Chrome code infrastructure than the regular feature teams are. And so that's how you do it. I think it only works if you have an embedded security team and you've resourced them well enough that they can make big changes. Yes.

Deirdre:

Yeah,

David:

I was gonna say: do bug bounties have that same sort of use case, to drive rearchitecture, for things that aren't necessarily a web browser or an operating system? Like, if you have a large piece of SaaS software, or mid-sized SaaS software, can you get value out of a bug bounty beyond PR? I guess is the hot-take way of phrasing it.

Justin:

So if you're just talking about, sort of, what does a bug bounty tell you? It basically tells you, "have you staffed up a halfway decent security team?" Like, I will say, when we started the Chrome bounty, we were not ready for the onslaught of WebKit bugs that we got. It was, yeah, I don't know how to describe it. It was a tidal wave, and it gave us the insight that, oh, we need to beef up our security investment more. And this corresponded with the whole Aurora incident. So I was part of the response team on the Aurora incident.

Deirdre:

Tell us, what is the Aurora incident?

Justin:

Oh yeah, sorry. The Aurora incident was back in 2009 and 2010; it spanned late 2009 to 2010. It was when Chinese intelligence compromised not just Google: the compromise of Google was detected, and we found lots of other compromises, et cetera, as part of running it down. One thing that happened after the Aurora incident was, Google took security seriously before, but post-Aurora it was a world of difference. That isn't to say Google ignored security; they had some very good people. But the massive focus and investment, I mean, you had Sergey Brin saying he wanted the name Google to be synonymous with security, and throwing in the investment necessary to make that happen. And so, conveniently, as there was a big push to invest in security, we were also able to respond to that bug bounty that we started by staffing up.

Deirdre:

Did you happen to read Nicole Perlroth's book, 'This Is How They Tell Me the World Ends'? It has an account of the Aurora incident, and it's sort of a history of the zero-day market. I think it's quite good, but you were there, so maybe you could tell me about it.

Justin:

I haven't read the book. I will admit that I've seen several excerpts from the book, and heard interviews, that made me less inclined to read the book.

Deirdre:

Okay.

Thomas:

All I know about the book is that they were mean to Dave Aitel. That's like the one thing I know about the book.

Deirdre:

I mean.

Thomas:

Okay.

David:

I have the book on my shelf, but I haven't read it yet.

Justin:

I mean, a lot of people have been mean to Dave. I think he almost revels in it a bit. At some point you have to have Dave on. But what I would say is, one of the fundamental premises of the book seemed to be that you can buy your way out of security vulnerabilities, or that, I guess, maybe the NSA or whatever could audit their way out of it. And, yeah, I've already expressed that I disagree with that fundamental premise. Unless I'm misreading it. I see an expression that makes me feel like I am misreading.

Deirdre:

It's more like: this is happening, and we are all vulnerable. Like, we, users of software, are vulnerable. It's just sort of, it's

Justin:

Okay.

Deirdre:

you know, it culminates with this wave of ransomware attacks across the world in the past year-plus or whatever. There's this sort of: oh, we've got the Shadow Brokers just releasing these exploits based on this, like, five-year-old zero-day in Windows they sat on for ages. And now we're all screwed because someone got that out and it got

Justin:

Okay.

Deirdre:

I forget which one it was, NotPetya, one of those. And we're all screwed, and this is happening, and that's kind of the thesis. Like, there are bug bounty markets; it's not so much that they are good and they solve the problem, it's just that they exist. And also, oh my God, everything is insecure, we're all gonna die.

Thomas:

I guess there's like interesting dynamics there. Justin and I have a shared background in, like, software security consulting work, right? And there are some really basic things you learn quickly when you're doing consulting. One of them is that, even for really excellent teams that have really great track records of doing assessment work, if you take three different teams and throw them at the same target, there'll be like a 60% overlap, or maybe a 70% overlap, in what they find; different sets of eyes will definitely find different vulnerabilities. So one thing bug bounties do is they optimize for the number of different eyes that you get there. Also, I think the incentives of bug bounties are probably really good for finding systemic issues, because bug bounty people are motivated in a way that consultants aren't to find variants of things. As a security consultant, what you're really fighting against when you're going after a target is boredom. So the temptation is always: you find a pattern of vulnerabilities, you find the game-over expression of that vulnerability, and then you report it as systemic. Right? And then you've done your job, right? Like, I found more than one, and here's the game-over version, so you have to take it seriously, and now you guys, you're the engineers, you go do the work. And bug bounty people have exactly the opposite incentive.

David:

Are you making a cathedral-and-bazaar argument?

Thomas:

Yeah. You know, as soon as I used it, I immediately regretted it. It was, it was in my

Deirdre:

Okay.

Thomas:

head as I said it, and, like, you know, it's very bad, and I'll find a different metaphor so that we're not invoking, uh, that. My experience with bug bounty people has been that they're incredibly motivated to find and get paid for the most minute variations of things. So, like, we've had clients where, before we got to those clients, they had paid out, you know, several rounds of bounties for like the same redirect vulnerability. You know, it was like an open redirect on a web application, and then they'd break it with a filter, and the filter's broken, and they'd report that five or six times. And the first thing you do when you come in there is say, you know, okay, we're done with the redirect vulnerabilities, we'll go track them down. But that's the dynamics of how that works, right? You find the pattern of vulnerabilities, then you scour everything for it, cause you're making money on it. So the incentives kind of line up with trying to flush out... I don't know how true that is in large-scale product work like Chrome, but cause I've worked on teams like that, you know, running bounties for SaaS products, that was definitely one of the values that you got out of it. Also, like, bug bounty people find different bugs.

Deirdre:

Yes.

Thomas:

There's less status involved in the work that they're doing. There's definitely a bias in consulting to look for high-status vulnerabilities, or interesting vulnerabilities, and bug bounty people are... I was going to use the word shameless, but it's not shameless. It's like what we're doing is shameful, and the bug bounty people that aren't doing it are probably correct.

Deirdre:

If they find it

Thomas:

Okay.

Deirdre:

and they'll get paid for it, they'll submit it. It doesn't matter if it's like

Justin:

Okay.

Deirdre:

the shiniest or the coolest or novel or whatever. It's like: does it work? Does it count? Gimme. Here it is.

Justin:

Yeah, they're just cha-ching, cha-ching, cha-ching. That is their goal. And yeah, consultants will find you a different class of vulnerabilities. I mean, I spent a lot of time as a consultant; you're kind of going for the splashy report, as Thomas said. And yeah, you see very different things from consultants than you do from bug bounties.

David:

The exploit developers who are selling to NSO and so on are finding yet another, well, maybe not a completely different class, but they have different incentives again. One, they're only looking for the game-over versions, or they're more interested in things that could turn into the game-over versions. And they don't have the PR aspects associated with submitting to the bug bounty. They're trying to develop a product, almost.

Justin:

Yeah, I don't know if it's so much a different class. It depends on which shop they came out of, right? It's not a huge number of people that are actually doing this; you tend to have different people, different styles. In a way, it's almost more like personal style plays into it. But at the end of the day, I think when you were talking about it, you were talking about how it's now become like a service industry. And it's actually been a service industry for a long time. It was a long time ago that people would just sell a proof of concept and hand it over to, you know, whoever's going to use it in the wild. Now it's like they've got to keep these things up to date. There are service contracts, stuff like that. It's a big freaking business, and they have to support this software. You're not just, "hey, I found this bug, here you go." No, there's a lot more to it.

Thomas:

I think we all know that in the work of actually producing reliable exploits, there's a division of labor. Right?

Justin:

Yep.

Thomas:

So there's the finding of vulnerabilities, and qualifying vulnerabilities, and then writing a proof of concept, and then fully weaponizing them and making them reliable. I guess here's a place where I'm flying completely blind, and all of you might know better than I do, but how much of that division of labor is expressed in the marketplace? Do we believe that, like, NSO has a team of people that can take really bad proof-of-concept exploits and internally just weaponize them, so they don't have to care about how easily the vulnerabilities are weaponized? Or is it the researcher themselves that has to do that work, figuring out how to, like, bounce through this set of allocations, and, you know, all that work? And the reason I ask is that if it's the latter, if it's the researcher that has to figure out how to make the exploit reliable, then there might be an argument for reducing the supply of those experts, you know, by paying more in bounties and things like that. But on the other hand, if it's NSO doing most of the

Deirdre:

It might be in between.

Thomas:

work, um, you know, you have a much broader pool of people that can potentially find those vulnerabilities, and then NSO can just hire full-time people that can take that work and turn it into reliable exploits.

Justin:

So, accepting that any detailed knowledge I have on this is fairly stale: I think what Deirdre said, that it might be in between, is right. From my old, past understanding, yeah, "it depends" was my experience. But this is knowledge that today might be stale. Like, it's entirely possible things are much more productionized than they used to be.

David:

My understanding from a Black Hat talk a couple of years ago is that the more reliable your exploit is, the more you will get paid by the firms buying it, as

Deirdre:

yeah.

David:

they will effectively spend engineering effort productionizing your ex— but obviously the less they have to do of that, like

Deirdre:

The better the ROI for

Justin:

Okay.

David:

they'd prefer to spend the money to do less of it,

Deirdre:

Yeah.

David:

because everybody wants to spend money instead of doing software development. But I imagine that they have to do some of it anyway, because they want to, like, hook it to their command and control

Deirdre:

Yes.

David:

and whatever payloads. There's something: even if the use-after-free is completely reliable at getting in whatever they want to send, they still need to decide what they want to do with it.

Deirdre:

exactly.

Thomas:

Do we think the white market for vulnerabilities, whatever the word is, cause white market sounds terrible, but whatever, the above-board market for vulnerabilities... do we think those vulnerabilities are undervalued right now? I don't know. Right?

Deirdre:

Apple and like

David:

what do you mean by above board?

Deirdre:

Yeah.

Justin:

You're just talking like bounty programs. Right?

Thomas:

Bounty programs. Yes.

Justin:

I don't think it's undervalued, cause you're trying to do something different. I mean, we're at a place where it depends on the vendor. I would say that the people who I talked to find Apple's terms on their bounty program to be onerous. So that is one thing. I would say that people generally consider Chrome's terms to be fairly amenable.

David:

And those are like the conditions and the terms, not the like pricing,

Deirdre:

correct.

David:

Like what you submit, and when you get to talk about it and when you don't, and how they respond to you, type stuff.

Justin:

And also, frankly, what counts as in scope, what counts as out of scope, et cetera. My perspective on Chrome is that Chrome has always been fairly open-minded on what counts. Yeah.

Thomas:

What would you change about the Apple rules, if you could like move in right now and just fix them?

Justin:

So this is the problem: I haven't actually sat down and read their terms. I've mostly heard gripes. And so I can't say exactly what I would change. My impression was that... actually, you know, I probably shouldn't even give an impression that might be totally uninformed. All I know is that the people,

Thomas:

show

Justin:

Basically, the big complaint that you would get is, like, haggling over what counts as in scope, haggling over what's rewardable, et cetera, it all being very ambiguously defined.

David:

What does it mean to, like... you said Chrome was more open as to what counts as in scope. What does it mean to do that, versus not?

Justin:

So with Chrome, there was a lot of effort put in to get pretty specific about what was rewardable and why, what counted as a security boundary, et cetera, both from an engineering standpoint, cause it made it easier internally, and externally, because it made it easier for people reporting bugs. Like, a bounty hunter doesn't want to burn a lot of cycles on something that's not going to pay money.

Deirdre:

What is hard to tease out is, there's prices, and it's hard to tease out whether the prices for bug bounties are actually moving markets, moving people to report to vendors, to the white market, as opposed to the gray market or black market, when you've got these different terms between bug bounties that may also be affecting what gets reported to whom.

Justin:

Yeah, but I think you're dealing with different groups of people

Deirdre:

Okay.

Justin:

in these markets. There's obviously going to be some overlap, but I don't think there's a ton of overlap between people who are doing bug bounties and people who are selling stuff to NSO, et cetera.

Deirdre:

Okay.

David:

On my sample size of one, the one person I know who's like sold exploits does not submit to bug bounties. Yeah.

Justin:

The people

Deirdre:

Oh,

Justin:

you interact with, the people who do the bounties... Chrome's actually a perfect example. For Chrome it's like: you show a memory corruption; we don't require you to prove that it's exploitable. And in fact, I think there's a good chance that Chrome rewards lots of memory corruption vulnerabilities that are not pragmatically exploitable. It's like, this is the line: if you can demonstrate memory corruption, boom. And that just makes it easier, cause you don't have to write an exploit or anything else like that. You just have a simple proof of concept, and Chrome's systems will do the minimization and everything to try to create a reduced test case. So it's relatively low friction. There are a lot of things that Chrome does to incentivize people to report and to make it easier for reporters. It was just part of the culture that created the program, and it's carried its way through.

Deirdre:

I love that.

Justin:

Well, like I said, it's an input channel. You want to get that signal. You want to get a strong signal.

Thomas:

I feel like the top line of this discussion is always just the idea that a reliable RCE or whatever is... I mean, people on message boards think that, you know, logout CSRFs are worth $10 million or whatever; they just come up with some percentage of the total market value of a company, and then that's the value of a vulnerability. But there's a general idea that, you know, a reliable drive-by iPhone remote or something like that has some kind of market value that we can assign, and that we should be paying the actual market value for that vulnerability. I'm pretty sure, I feel like, that's wrong. Right? Like, I'm having trouble putting my finger on why I feel that's wrong.

Deirdre:

they're not one market, it's

Justin:

Yeah.

David:

Okay.

Deirdre:

impossible. If they're not playing in your market, they're disjoint. So what are you doing?

Justin:

Yeah, exactly. That was my take, being responsible for that program for several years: you're dealing with different groups of people. There might be some overlap, but people have this assumption that there's overlap, and that's not something that I observed in practice.

Deirdre:

Basically, you can't model them as, like, oh, if we raise the price over here, that will incentivize the same type of reporter to come over here. It's like, no, except for outliers, that's not true, basically.

Thomas:

I mean, just to play devil's advocate, right: there isn't perfect overlap, right? Like, there's a lot of work that somebody who sells functional exploits does that a bug bounty person for the Chrome bounty doesn't have to do, um, you know, and one can go for breadth and the other person has to go for depth and all that. Right. But both groups of people have to do that initial kind of reconnaissance work of finding where the vulnerabilities are in the first place. So in theory, if you dial the incentive up enough, you can have the bounty people finding vulnerabilities before the exploit people.

Justin:

Potentially, yeah. I should clarify, the Chrome bounty program also does have incentives where it's like, hey, look, we will pay you a lot more if you can produce a reliable exploit. And if you chain a series of reliable exploits together to produce, like, a full sandbox escape, et cetera, within a certain time window, we will pay you for that full sandbox escape. And there's a lot of details to the bounty program to try to maximize the kind of information that you get and to encourage people. But yeah, at the end of the day, most of the reports are not people running it down and building a full exploit, et cetera.

David:

I saw a talk recently where they were saying that bug bounties need to allow people to, like, work on a bug over a period of time, so that they can take it from "I've found the initial bug" to "I have a full working exploit". Like, is that a thing that's feasible to do? Cause it seems like you might just want to go fix the thing. Like, if you learn that you have a memory corruption, you might just go fix it before the person writes a full exploit, even if you did have that type of power.

Justin:

No, that's the way that the Chrome bounty is actually structured. So it's like, even if the engineering team went and fixed it first, you know, you could still finish up. Cause the idea is you want them to report the vulnerability as quickly as it's confirmed. If you want to encourage that additional research, you also have to have that extra flexibility.

David:

Yeah. Cause I think I saw someone on Twitter saying they had found the bug that was used by the NSO group, on iOS. And they didn't report it because they didn't have a full working exploit yet. They had just located the bug, and they were wanting to turn it into a full exploit.

Thomas:

that's not a good term.

Deirdre:

Yeah.

Thomas:

Whatever they're doing.

David:

Or maybe not that exact one, but it was a bug in Apple that was similar. Something that's been patched recently.

Thomas:

You can see right away the incentives there are terrible. If you've got a condition in your bounty that asks people not to report until they've done all of the work for it, that's awful.

Justin:

Yeah, that was a mistake we made when we originally designed the program: we started adding extra things where it's like, okay, you know, for a full sandbox escape chain, you will get this much. And so people started sitting on things, and we realized it and said, "okay, we are updating the program to account for this".

Deirdre:

Okay, so how does it go now, to avoid that?

Justin:

Oh, so people can report one, but my recollection is that there's a time window. I'll throw out six months; it might be six months. From when you report the one bug, you can keep reporting bugs and sort of chain it together and develop, uh, you know, a full exploit, et cetera, and kind of keep racking up a higher payout as a result. Thus you're incentivized to report the first one earlier, because you're also worried about collisions, right?

Thomas:

right.

Justin:

You do get collisions. I think people dramatically overestimate the volume of collisions, but everybody gets collisions. And this was the thing that we had been hearing, like, what we heard from good reporters who would regularly report bugs to us. They're like, oh, I found that one, but I hadn't reported it yet, because I wanted to develop a full exploit, or I wanted to try to use it in a chain. And that's why we revised the program.

Deirdre:

So basically, if you're first, you get first dibs, but you get to keep racking up your score if you get, like, a fully working exploit, a repeatable exploit, a kill chain, and so on and so on. Cool. I like it.

Thomas:

So if David reports a memory corruption vulnerability to Chrome, without, you know, the rest of the exploit chain attached to it, and then I come in... this is a fantasy world where I do browser exploit work, and I'm not totally incompetent at that stuff now.

David:

A fantasy world where I can do memory corruption at all without screwing up the math. Like, I taught intro computer security before, and for the memory corruption part I would bring someone else in to teach it, because I can't subtract two memory addresses to save my life. Even if I could tell everybody where the vulnerabilities were in there.

Justin:

It was so easy when Tom and I started out.

Thomas:

I remember feeling very cool and special for coming up with shellcode that didn't have any uppercase letters in it. For a while, that was like my calling card. Yes, I'm definitely very solid at this stuff. Here, it's this exploit. I produced it.

Justin:

Or UTF-16 shellcode meant you were like a freaking master. Yes. Yeah.

Thomas:

It's definitely gotten worse, right. But, so, in this weird fantasy world, right, where, like, David submits the memory corruption vulnerability, and then, like, we collide on what the vulnerability is, but I submit the bug chain that goes with it: do we both get paid?

Justin:

Uh, no. So the program, as far as I recall, is still entirely first come, first served. Different programs have chosen to do this different ways. Some people say, hey, look, if a group of people reports it within this time interval, they each get a cut of it. But the decision was made very early on, on the Chrome team, that the first person to report is the one who gets the bounty. And part of that's just because, when you do a bug report payout, there's a certain logistical thing to it. But also I think it's just fair to say the first one to report gets it, cause then you're incentivizing people to report quickly.

David:

And Hey, Boba Fett doesn't get paid if he's not the person that gets the kid.

Justin:

Bounty hunters.

Thomas:

So, like, for years... for a while, I think, David was involved in this a little bit, and so it was dear to him: we did some security work for campaigns, in, like, the election cycle before this election cycle. Big time. We came up with a bunch of security recommendations for kind of ordinary people. Our top-line recommendations, like things that we would tell normal people to try and understand the threat landscape, were that for phones we'd recommend iPhones, that, like, every iPhone was, you know, more secure. I'll be more intellectually honest about it and just say that at the time we said iPhones, iOS, is more secure than Android. But then on the browser side, I would strongly prefer Chrome over Safari. And I would say right now I still kind of stand by that: Chrome, and the actual Chrome or Chromium project, not spin-outs of Chrome or Chromium,

Justin:

should.

Thomas:

which would be kind of my gospel on what browser to use. And I still tell people, and I believe, that iOS is more secure than Android, although there's a meme going around ever since that Zerodium thing came out, where "we're not paying for iOS vulnerabilities anymore!" There are now people that believe that iOS vulnerabilities are worthless. So I guess I have two questions, right? I'm iPhone over Android, and I'm Chrome over Safari. I'd be curious to hear your thoughts on whether I'm wrong about that. Then I have a spicy question to add to this:

Justin:

Okay.

Thomas:

what about Firefox?

Deirdre:

Yeah, I was about to say...

Justin:

All right. So what I would say is that I am personally Pixel over iOS. It is correct that if you're just going to compare Android as a whole to iOS, you are dealing with a massive and extremely varied ecosystem. And Android deservedly had a bad rap for security, but the amount of investment, and everything that went into it over many, many years now, has, I feel, changed that game. I think where it gets more complicated, though, is that it's not like Windows, right, where they control the image. And even on Windows there's, you know, who did you buy your PC from? It might've had a bunch of extra stuff on it, et cetera. And that's the thing: even on Windows, because it's a big ecosystem and not just one vendor, you're exposed to a wide variety of things. And I am very much on the side that a well-configured Windows 10, like a decently configured Windows 10 machine, is a safer bet than, say

Deirdre:

Hmm.

Justin:

oh yeah. I'll, I'll go on that one too.

Thomas:

I'm with you on that. I'm with you on that.

Justin:

yeah, the, uh,

Deirdre:

If I've learned anything from SwiftOnSecurity, that's basically true now.

David:

What does it mean to be decently configured though?

Justin:

There's a lot of that bundleware stuff out of the box. "Decently configured" is basically: did they install a bunch of bundleware? And Microsoft has gotten really aggressive at trying to prevent the kind of dangerous bundleware, so I think a new Windows 10 machine actually comes relatively safe.

David:

Yeah. So like a clean install by a power user counts as decently configured.

Justin:

But I think with most of the stuff you're getting in the store, because Microsoft has been so aggressive with the way the incentives and everything work, you can't do the same kind of bundleware you used to see. But yeah, the reason why I would pick Android over iOS, or specifically Pixel over iOS, is that with Pixel you are getting monthly security updates, and you are getting essentially the most hardened version of Android. The browser is a huge, a massive, attack surface, and yes, I will go into detail on why I would put Chrome up against Safari any day of the week. But I think that's one of the areas where you have a ton of attack surface, and it's just much stronger. So, to get into the "why" of Chrome over Safari...

David:

Before you say that I have one question

Justin:

yes.

David:

So, as you know, I interned on Chrome security back in 2016, and I kind of remember the general attitude amongst people

Deirdre:

people that were not on Android

David:

at Google about Android security, circa 2016. How has that tenor changed? Like, do you think there's been improvements since 2016? Or was

Deirdre:

Pixel

David:

good, and everything else having bundleware issues, kind of the state of the world back in 2016?

Justin:

In 2016, they had started the work long before 2016. Like, point blank, Andy Rubin was hostile to the idea of security. Like, remember, he went and started another phone company, and, you know, they listed all of the staff, and for the two security roles it was Andy Rubin's dogs. He just conceptually argued against security. So I would say that things started dramatically improving after Andy left. And we can all agree that Andy's attitude towards security was probably among the least of his flaws. It just takes time, right? You can't move a ship that big overnight. It took time and a lot of work, and the Android team put in a lot of time and a lot of work. I've seen a lot of improvements, but it took time for those improvements to really have an effect.

Deirdre:

Okay. This plays into some of the things we were talking about in our first episode about iOS security. What were some of those things that you had to turn the aircraft carrier around to accomplish? Architectural? Practices? Give me, give me the meat.

Justin:

Yeah. all right. So I was not on Android, so this will be in reference to Chrome. Uh,

Deirdre:

so.

Justin:

yeah, but honestly it was— Google's— so the thing about Google that I think,

Deirdre:

It's like all these orgs, like Android and Chrome

Justin:

yes,

Deirdre:

and then there's, like, the rest of Google, and, like, Search is its own citadel.

Justin:

Yeah, exactly. That's how Google operates, org by org. I spent a lot of time going in and out of companies as a consultant. I saw the deep internals of how the US government worked in a lot of places. I have never seen the level of independent organizational operation elsewhere that I've seen at Google. And I was in the Marines, where a general was basically like, "my base is my base", and I'm still like, yeah, he doesn't hold a candle to an SVP of a product area at Google. So, yeah. But I can give you examples from Chrome. So, site isolation was a fundamental rearchitecture of huge swaths of the browser. You had Nasko Oskov and his team.

David:

What is site isolation?

Justin:

yeah. Sorry. So.

David:

What isolation was there before?

Thomas:

can you give me a couple sentences that I can just drop on people on hacker news for site isolation?

Justin:

This is a fair point. So site isolation was designed around the notion that we have this good process sandbox, and we can say, hey, we are going to bind the process sandbox to an origin, and say, if you open up something on google.com, only google.com is going to go in this process. The idea being: instead of having to do a whole bunch of checks all over the code to determine if one origin can interact with another (checks that memory corruption in even the renderer process could bypass), you say, no, look, this renderer is google.com. It doesn't get to touch anything outside of google.com, and nothing outside of google.com gets to dig into it.

Deirdre:

And it's HTTPS or HTTP. The origin is https://google.com, not http://google.com. Those are different origins.

Justin:

Yes, well, and this is why it's called "site": the definition of site is fuzzy, because it's not truly an origin. So HTTP is kind of in the doghouse. You know, that's in the dirty bucket, where you're like, eh, we can't really make guarantees with HTTP. But yes, for all...

Deirdre:

you forever now.

Justin:

Yeah, that is actually the strategy for solving the HTTP problem: to eventually upgrade everything. But yeah, essentially, effective top-level domain plus one goes in its own process, and that just makes the security reasoning around origins so much simpler. One of the really nice things it gets you is that if you get code execution inside of a renderer process, so like you compromise V8 or whatever, you still have to find a way to bypass site isolation to manipulate any other origin. The old model, and this is frankly the way that the other browsers still work, is that if you get code execution inside the renderer process, you effectively have universal cross-site scripting. But in Chrome, if you get code execution inside the renderer process, you are still bound to whatever origin hosted that. Now, there are some exceptions. It's funny: usage patterns on desktop allow you to do a bunch of coalescing of processes, but usage patterns on desktop are very different from usage patterns on mobile. On mobile, you don't have the same kind of coalescing, so instead of applying site isolation everywhere on mobile, there are various heuristics that are used to determine if, oh, this site needs to be isolated. One of them being: if we detect that you've logged into the site, then it's like, okay, it definitely needs to be isolated, et cetera. They are doing as much as they can to get the resource utilization down, yes, it's a resource utilization thing, and they keep expanding the set of things that can be isolated. But right off the bat, if you're isolating things that the user logged into, that's already helping a lot. So site isolation is not perfect, but it is a huge win. The other thing I would say, in terms of a big advantage of Chrome over other browsers, is that Chrome has a much more robustly sandboxed renderer process. The Chrome renderer process doesn't have access to the network. It does not have access to graphics devices. It does not have access to the input events stack, et cetera. All of that is split out. The idea being, like, you remember the old Windows low-integrity mode? For anyone who doesn't remember the old Windows low-integrity mode: the problem was that there was a lot of ambient authority, because you could fire off events. There were a lot of ways to abuse input, to abuse access to the graphics stack; you had the network, like you had all of these things, and that was why people were always finding escapes out of low-integrity mode. So, from the beginning of the design of the Chrome renderer process, the intent was to have none of that.
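
As a rough illustration of the "site" key: a toy C++ sketch that derives scheme plus eTLD+1 from a host name. Real browsers consult the Public Suffix List; this naive last-two-labels version is wrong for suffixes like co.uk and is only here to show the idea.

```cpp
#include <iostream>
#include <string>

// Toy "site" computation for process keying: scheme plus eTLD+1.
// Browsers use the Public Suffix List; this just takes the last two
// DNS labels, which is incorrect for multi-label suffixes like "co.uk".
std::string SiteKey(const std::string& scheme, const std::string& host) {
    size_t last = host.rfind('.');
    if (last == std::string::npos) return scheme + "://" + host;
    size_t second = host.rfind('.', last - 1);
    std::string etld1 =
        (second == std::string::npos) ? host : host.substr(second + 1);
    return scheme + "://" + etld1;
}

int main() {
    // mail.google.com and docs.google.com share a site key, so they may
    // share a renderer process; evil.example never shares with google.com.
    std::cout << SiteKey("https", "mail.google.com") << "\n";  // https://google.com
    std::cout << SiteKey("https", "docs.google.com") << "\n";  // https://google.com
    std::cout << SiteKey("https", "evil.example") << "\n";     // https://evil.example
}
```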

David:

Does the renderer not have the graphics card? Is there a separate, actual renderer?

Justin:

There is a GPU process, and the GPU process is sandboxed to the degree it can be. So this is the other piece: people talk about sandboxing like it's a binary state, and it's like, no, no, no, sandboxing is an approach. It's about what you allow in and what you allow out, and there's a tremendous amount of variance there. So it's not the kind of thing where you can say, oh, it's just sandboxed.

Deirdre:

This goes to something that you might've already been wanting to talk about, but: you can't mitigate your way out of a failure to isolate your software along its security boundaries. So

Justin:

yes.

Deirdre:

directly be about sandboxing and other things.

Justin:

Well, and it doesn't just have to be sandboxing, right? Like, there are other ways to isolate. Memory-safe languages provide forms of isolation. The V8 team just made an announcement of, I think they're calling it Ubercage or something. Well, cause I think Apple already took Gigacage. And they're a German team, so, you know. But the idea being: they already had to do this thing called pointer compression, where the V8 runtime is actually only dealing with a four-gigabyte address space. And what they're doing is actually quite similar to the design of the old 64-bit NaCl sandbox: they are bounding operations within that address space, and they're reserving large ranges to block off the edges of that address space. So now we're not talking about a process-level sandbox, but we are talking about something that provides guarantees about confined execution: you can't touch anything outside this address space. And they're working on that right now. Yeah, rewriting parts in Rust is another example. There are lots of different primitives that you can use to isolate and reduce your attack surface. One of you said, on that episode, something about how good it is when dealing with bundles of messages,
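
A toy sketch of the cage idea behind pointer compression, going off the description above rather than V8's actual implementation: the runtime stores 32-bit offsets into one reserved region instead of raw pointers, so even a corrupted offset cannot address memory outside the cage.

```cpp
#include <cstdint>
#include <cstdlib>
#include <iostream>

// Toy cage: one contiguous reservation, and "pointers" are only ever
// 32-bit offsets into it. V8's real version reserves ~4 GiB with OS APIs
// and surrounds it with guard regions; this sketch just uses calloc.
class Cage {
public:
    static constexpr size_t kSize = 1 << 20;  // 1 MiB toy cage (V8: 4 GiB)
    Cage() : base_(static_cast<uint8_t*>(std::calloc(1, kSize))), used_(0) {}
    ~Cage() { std::free(base_); }

    // "Compressed pointer": an offset, never a raw address.
    uint32_t Alloc(uint32_t n) {
        uint32_t off = used_;
        used_ += n;  // toy bump allocator, no overflow handling
        return off;
    }

    // Decompression masks the offset, so even an attacker-corrupted
    // value still lands inside the reservation.
    uint8_t* Deref(uint32_t off) { return base_ + (off % kSize); }

private:
    uint8_t* base_;
    uint32_t used_;
};

int main() {
    Cage cage;
    uint32_t obj = cage.Alloc(16);
    *cage.Deref(obj) = 42;
    uint8_t v = *cage.Deref(0xFFFFFFFF);  // corrupted offset: stays in-cage
    std::cout << int(*cage.Deref(obj)) << " " << int(v) << "\n";
}
```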

Deirdre:

yeah.

Justin:

something along those lines. And that's, that's my take on

Deirdre:

And stuff.

Justin:

it. And this is my problem with... so this was one of my complaints with the Apple strategy that I've been observing. It has a very similar smell to the Microsoft strategy of the 1990s, a sort of belligerent approach to how they deal with vulnerability researchers and the security community. And they're layering on mitigations, but mitigations aren't going to solve your architectural issues. You just have to get in there and fix the architecture. Yeah.

Deirdre:

Would you consider BlastDoor a mitigation or an architectural change?

Justin:

I have not taken it apart to see. Yeah, and that's the thing: I don't understand what it does well enough, right? It's like, okay, you know, "we wrote this in a memory-safe language and we maximized the seatbelt policy". I would like to see the seatbelt policy to get a sense of what it is. But it depends, right? I don't know if anyone's seen me go on a rant against Electron, but okay: my problem with Electron is that people would complain, "they shut off the sandbox". I'm like, I don't care that they shut off the sandbox. I care that every Electron app opens up these IPC messages, so that even if you are sandboxing things, it doesn't matter: the thing on the other side is still exposing all of its attack surface. And so that's the thing I would have to look at. So, you asked about architectural things. Process-wise, it's very important to have your security team deeply involved. Chrome has a very strong code review culture; you have to get a code review from an owner before you can land things. And the IPC message system, interprocess communication, yeah, when you create your IPC messages, et cetera, that's owned by the Chrome security team, because that's your main attack surface between different sandboxed processes. That has to be reviewed: the normal code review process has to go through one of the qualified reviewers on Chrome security. And so anytime you are bridging that attack surface, you have someone who, you know, should have sufficient security expertise reviewing it, to try to catch errors, catch mistakes, catch things where, like, oh, you just added a backdoor and you didn't know it. Because it's really, really easy to do that.
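
A generic sketch of the review rule described here; this is not Chrome's actual Mojo interface, and the message type is hypothetical. The privileged side of an IPC boundary treats every field arriving from a sandboxed process as attacker-controlled and validates it before acting on it.

```cpp
#include <cstdint>
#include <optional>
#include <string>
#include <vector>

// Hypothetical message a sandboxed renderer might send to the browser.
struct SaveFileRequest {
    std::string filename;
    std::vector<uint8_t> bytes;
};

// Browser-side validation: every field is attacker-controlled, because a
// compromised renderer can send anything the IPC schema allows.
std::optional<SaveFileRequest> Validate(SaveFileRequest msg) {
    // Reject path traversal: the renderer must not name arbitrary paths.
    if (msg.filename.empty() || msg.filename.find("..") != std::string::npos ||
        msg.filename.find('/') != std::string::npos)
        return std::nullopt;
    // Cap the payload so a hostile renderer can't exhaust browser memory.
    if (msg.bytes.size() > 10 * 1024 * 1024)
        return std::nullopt;
    return msg;
}

int main() {
    auto ok  = Validate({"report.txt", {1, 2, 3}});
    auto bad = Validate({"../../etc/passwd", {}});
    return (ok.has_value() && !bad.has_value()) ? 0 : 1;
}
```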

Thomas:

I have so many questions. I won't keep us too long on this. So, with regards to the success story for Chrome, I guess this is three questions, right? First of all, when you read published accounts of bug chains and things like that, I feel like there's a survivorship bias in terms of what you're reading. Like, from the bugs that we're reading about, it would seem that the sandbox thing isn't working at all, right? Every vulnerability you read about comes with a story about how they then bypassed the sandbox and got a kernel LPE and things like that. Right. So I guess I have two related questions. First of all: how successful do you feel the sandbox is? Like, what do you think its success rate is at cutting memory corruption vulnerabilities off? And then my other related question is: if you took a hundred percent of Chrome security and divided it up, would sandboxing own, what, 60% of that versus runtime hardening? Or 80% of it versus runtime hardening?

Justin:

Uh, yeah, that is hard to say. There's not a lot of development left on sandboxing, in terms of the actual ongoing work; it's the architecture, it's built, it is the way it is. I would say the next big thing in Chrome is memory safety, but it's something that you have to roll in slowly and carefully. And, you know, I made a comment to all of you beforehand about how "rewrite it in Rust" is not a strategy. Just logistically, it's not a strategy. There's way too much code.

Deirdre:

parts in Rust.

Justin:

Yes. And I think rewrite parts in Rust makes sense. Then again, there's a lot of things that could be done to fix C++,

Thomas:

Okay. Hold on, hold on a second.

Justin:

sorry.

Thomas:

We have a good segue here, right. But before I let you fully take the segue,

Justin:

Yes.

Thomas:

Rust, I'm not gonna let you off the hook on Firefox.

Justin:

Oh. So, simply by looking at the security architecture: I think the Firefox team does really well, but Chrome has a more restrictive sandbox. Chrome has proper site isolation. Chrome has more isolation between different processes by capability; you know, there's a network process, a GPU process, et cetera. So I think Chrome just has a better architecture. I think Mozilla, with the extent to which they've evolved Gecko to make it safer and more multi-process, and the fact that at least they're introducing a form of site isolation, which is essentially address-space isolation, which will address stuff like that... I think they're doing great work. But when I look at the breakdown and the numbers, yeah, I'm going to recommend Chrome.

Thomas:

So why hasn't Chrome been rewritten in Rust?

Justin:

Because it's millions and millions of lines of code. So, I spent a large chunk of my time on Chrome actually managing non-security teams too. There were just different points where, because of situations, I took on the desktop team, I took on the extensions team, the enterprise team, various different teams where I was responsible for... I say team, but it's more an org than a team, for many of these. And I had to intimately learn about budgeting, staffing, and prioritization, and all of that, and figure out how to balance all those things. And so I look at Rust and I say: okay, you could not feasibly rewrite it all in Rust. I look at Rust and I think it's right to be testing out Rust for certain parts of the code; there are tons of places where you could be making targeted uses of Rust, and it's just the right call right now; Rust is mature enough, et cetera. Then again, I have been playing with Rust since, you know, I'm no longer employed,

Thomas:

What do you think? I started writing Rust, like, last year. So...

Justin:

I do not like the developer ergonomics of it so far. I do not like the availability of... well, it might take some getting used to. Logistically: I have C++ programmers, I have lots of things I can do to add safeguards and partition things, et cetera; I can staff teams. It is hard for me to find Rust programmers, and the Rust programmers are probably going to be significantly less productive, if only because you don't have the full set of, like, existing code, et cetera. It is just a very expensive proposition. Like, even in my side projects that I'm doing right now, where I was trying to use Rust, I've given up on using Rust for the majority of the project, because I would have to rewrite a whole bunch of code that I already wanted to use, that was available. I'd have to rewrite it in Rust, and I'm like, this is just an exhausting experience. So instead I partitioned it out, and it's like, okay, I'll use Rust over here, but I'm not going to use it for the meat of the code. And I think that's actually a reasonable strategy. I wish that the people that started Rust had taken more of a C++-style approach to it, which is: figure out a way to build a bridge between the languages. It is painful to.

Deirdre:

Not the FFI interface?

Justin:

Well, the FFI interface, sure, but it's not going to let you import existing code, right? So you can write new code. There's actually someone on Chrome security who's been doing a bunch of work on improving, you know, C++ compat with Rust, because there's still a lot more that can be done. But take the FFI interface: you're not getting C++ objects, right? Because that's not standardized, et cetera. And yeah, it's just hard. I would say that if there'd been a way to create essentially a dialect of C++ that you piecemeal port, with similar guarantees to Rust (you wouldn't necessarily get all the same guarantees), I feel like that would just be so much easier logistically.
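
A minimal sketch of the flattening Justin is pointing at: because Rust's FFI speaks C, an existing C++ class can't cross the boundary directly; you hand-write (or generate, as tools like cxx and bindgen do) an opaque-handle C shim for it.

```cpp
#include <string>

// Existing C++ code you'd like to reuse from Rust.
class Parser {
public:
    explicit Parser(std::string input) : input_(std::move(input)) {}
    int Count() const { return static_cast<int>(input_.size()); }
private:
    std::string input_;
};

// The shim Rust actually binds against: no classes, no templates, no
// exceptions may cross; every method becomes a free function on a handle.
extern "C" {
void* parser_new(const char* input) { return new Parser(input); }
int parser_count(const void* p) { return static_cast<const Parser*>(p)->Count(); }
void parser_free(void* p) { delete static_cast<Parser*>(p); }
}

int main() {
    // Using the flat API the way foreign code would.
    void* p = parser_new("hello");
    int n = parser_count(p);  // 5
    parser_free(p);
    return n == 5 ? 0 : 1;
}
```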

Deirdre:

Just have a giant `unsafe` block, copy-paste your C++ in, and then fight with rustc until it compiles.

David:

Well, isn't a dialect of C++ also known as Gecko? Like, isn't that whole thing just written in macros on macros on macros?

Justin:

I mean, if you're using macros in C++, you're almost certainly doing something wrong. But yes, there's lots of legacy code, and there are a lot of ugly, ugly parts of C++. So I'm not defending the language in its entirety, but the fact is that, pragmatically speaking, it's really hard to find something else for something like a browser.

Deirdre:

You're practically starting from scratch in a lot of areas

Justin:

Yes.

Deirdre:

unless you're doing it in, like, small pieces. And, you know, I've been working on a full Rust project for over two years, and we were only able to get started because we had things like the Tokio async runtime to build on top of, and a bunch of libraries that we built on top of. We're also doing a lot of architectural stuff from scratch.

Justin:

Yeah, and that's why I just don't think... I mean, look at what happened when Mozilla tried to redo their engine, or, you know, it was Netscape at the time: they were gonna redo their runtime, they did Gecko, et cetera. And even that still has a lot of weird quirks of the original engine, and that was years of work. And they weren't talking about a different language.

Thomas:

Last time you checked Chrome, would you say that the Chrome team is in a place right now where, to whatever extent the engineers actually doing the work on the project want to use Rust for things, like, if that team spots a component that makes sense to write in Rust, can they do it? Are there any current logistical obstacles to doing it, or is it like the Linux kernel, where there's still a huge permissions process to get that work started?

Justin:

I'm not gonna throw anyone under the bus. I will say that there is no shortage of strong opinions. And I think adding a new runtime, adding a new language, et cetera, is a big deal for any sort of stable, mature product, and frankly it's a product that has very high development standards, in my opinion. So that's always going to be kind of—

Deirdre:

contentious. Yeah.

Thomas:

It's not like Firefox where like the Firefox team spots something they want to do in Rust, they're just going to do it in Rust.

Justin:

I'll say it's certainly not my impression. But here's the thing: I'm not involved in those discussions anymore, and when I left, those discussions were sort of rapidly evolving, so they might've evolved quite a bit.

Thomas:

The thing you'll get from, like, C++ programmers, and I was a C++ programmer, but a long time ago, so when people talk about modern C++ I assume it's a completely different language than the one I was writing in 2001, right? But you hear modern C++ programmers saying that there's not that much of a security gap between Rust and idiomatic modern C++. Kind of the same way you might be able to make that claim about Objective-C versus C, right? Like, if you're using just the idioms, you're probably right. Okay.

Justin:

I mean, if you're avoiding raw pointers, you know, you're using owned pointers and shared pointers. But then there's the question of your compilation options: are your containers bounds-checked or not, depending on how you're compiling it? I guess the way I'd put it is that idiomatic modern C++ actually opens up the opportunity to introduce some of the core concepts of Rust into C++, but then you're going to have to break ABI, you're going to have to get the standards committee on board, et cetera. I think one thing that could just create a sea change is pointer tagging. Because with pointer tagging you could essentially rewrite your allocator so that you have, in effect, a GC allocator. You could do things like have an actual security-enforcing ASan, AddressSanitizer, built into a production runtime. If you have hardware support for something like pointer tagging, what that will do is give you many of the memory-safety guarantees of Rust. It won't give you all of them; you won't get the thread-safety guarantees, but you will get a number of the riskiest ones. And I do think that might turn into one of the paths of least resistance for C++.
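
For what "are your containers bounds-checked" means in practice, here's a toy comparison (mine, not from the episode) showing the default Rust behavior that hardened C++ containers only get via compile options:

```rust
fn main() {
    let v = vec![10, 20, 30];

    // Indexing is bounds-checked even in release builds: v[3] would panic
    // rather than read past the end of the allocation.
    let last = v[2];

    // The non-panicking form makes the failure case explicit in the type.
    match v.get(3) {
        Some(x) => println!("got {x}"),
        None => println!("index 3 is out of bounds, no UB"),
    }

    // Opting *out* of the check requires spelling out `unsafe`, the inverse
    // of the C++ situation Justin describes, where opting *in* does.
    let first = unsafe { *v.get_unchecked(0) };
    println!("last = {last}, first = {first}");
}
```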

David:

I don't know. There's still just a lot of ways for stuff to go awry, even

Deirdre:

Yeah.

David:

Like, just strcmps everywhere. If you don't return a value from a function, the compiler won't always warn you, and it can still run like that; you can set compiler flags to help with that sometimes. You generally don't know if a pointer needs to be freed or not, like if you're using it as an output parameter. Your constructors might do god knows what, depending on how you typed your initialization.

Justin:

But this is what I was talking about: if you have pointer tagging in hardware, you can actually make all of that behave far better. Your allocator essentially becomes like a generational GC, just rotating through the tags, and you can have even more fine-grained checks. I mean, you're paying the cost in the silicon, right? But once you pay the cost in silicon, you're not paying execution-time overhead for it. So this is the thing: my question is whether you're going to have widespread support for the kind of hardware you need, like pointer tagging, soon enough that rewriting in Rust isn't something people are seriously considering for large-scale things. Like, remember, Mozilla tried to do the rewrite in Rust with Servo, and eventually had to give it up, and it was a huge undertaking.
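
A toy model of the tag-rotating allocator Justin is sketching. This is my illustration, not his, and it's far simpler than real hardware such as ARM MTE, where the tag travels in the pointer's unused top bits and the check happens on every load and store with no extra instructions:

```rust
// Each allocation gets a small tag; a stale pointer keeps its old tag, so a
// use-after-free shows up as a tag mismatch instead of a silent bad read.
#[derive(Clone, Copy, Debug)]
struct TaggedPtr {
    slot: usize, // which allocation
    tag: u8,     // tag the pointer was issued with
}

struct ToyHeap {
    slots: Vec<(u8, u64)>, // (current memory tag, stored value) per slot
    next_tag: u8,
}

impl ToyHeap {
    fn alloc(&mut self, value: u64) -> TaggedPtr {
        // Rotate through 4-bit tags, like a generational pass over the heap.
        self.next_tag = (self.next_tag + 1) % 16;
        self.slots.push((self.next_tag, value));
        TaggedPtr { slot: self.slots.len() - 1, tag: self.next_tag }
    }

    fn free(&mut self, p: TaggedPtr) {
        // Re-tag on free, so any dangling pointer's tag no longer matches.
        self.slots[p.slot].0 = (p.tag + 1) % 16;
    }

    fn load(&self, p: TaggedPtr) -> Result<u64, &'static str> {
        let (tag, value) = self.slots[p.slot];
        if tag == p.tag { Ok(value) } else { Err("tag mismatch: fault") }
    }
}

fn main() {
    let mut heap = ToyHeap { slots: Vec::new(), next_tag: 0 };
    let p = heap.alloc(42);
    assert_eq!(heap.load(p), Ok(42));
    heap.free(p);
    // The dangling use is caught by the tag check rather than silently read.
    assert!(heap.load(p).is_err());
}
```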

Thomas:

I mean, I think a thing you'll get from Rust is—

Deirdre:

Well, also the pandemic hit, and they had to lay off like half of Mozilla. So I don't want to conflate why Servo has not succeeded with just "it was hard to rewrite it in the brand-new language we came up with."

Justin:

I don't think it was just that; I think the pandemic was a smaller piece of it. But I don't see Mozilla's financials, so I can't say.

Thomas:

Rust has like a security activist, you know, language core.

Justin:

Yeah,

Thomas:

And you could tell me that C++ has a security-activist standards committee, but I wouldn't believe you.

Justin:

I mean, I'm not ever gonna say that, because it's not true. My concern is that the C++ standards committee does not take security anywhere near as seriously as they should. I mean, there's the whole, uh, what's the—

Deirdre:

Sounds like Andy—

Justin:

Rubin? Well, no, because Andy Rubin was actively hostile to security.

Deirdre:

Okay. Different.

Justin:

But yeah. There's the whole, whatever the specifier is in C++ to say that you have to check the return value. And there's a whole debate going back and forth where they added it to the standard for C++20, but then it's like, oh, it's a lot of work to do this for libraries and figure out which thing needs its return value checked and which doesn't, so maybe we should just take it out and say the standard library won't ever apply that specifier. And it's like, okay, that's a terrible idea. And you get quite a bit of this, where it does kind of feel—

Deirdre:

does it

Justin:

There are a lot of good things to learn from other languages, and I think the people working on C++ standards, the standards committee, should be a lot more open to learning useful things from other languages.
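
The C++ specifier Justin couldn't recall is presumably [[nodiscard]]; Rust's analogue, which its standard library already applies pervasively, is #[must_use]. A small sketch, with a made-up checksum function:

```rust
// The attribute makes ignoring the return value a compiler warning, which is
// exactly the check-your-return-value enforcement being debated for C++.
#[must_use = "a checksum you never look at checks nothing"]
fn checksum(data: &[u8]) -> u32 {
    data.iter().map(|&b| b as u32).sum()
}

fn main() {
    checksum(b"hi"); // warning: unused return value of `checksum`
    let sum = checksum(b"hi"); // binding the result silences the warning
    println!("{sum}");
}
```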

Deirdre:

Okay, that's the Rust discussion. I want a Fuchsia phone. I want a Fuchsia computer, like an end-user Fuchsia computer, not just "get a board and compile Fuchsia for myself."

Justin:

Not just a NUC with Fuchsia on it.

Thomas:

I'll be honest and say that like, whenever the topic of

Deirdre:

Yeah.

Thomas:

Fuchsia comes up, my brain just turns off. I just stop thinking about it. Like, sell me on paying any attention to Fuchsia.

Justin:

I don't know if I can tell you to pay any attention to Fuchsia, because I don't know the future of Fuchsia. But what I can say is, it is a very elegantly designed operating—

Deirdre:

Yes! Not just because it's written in Rust with a microkernel. I like the way—

Justin:

Yeah, I would say that the Rust part is even sort of a smaller piece of it. It's just, from an architectural perspective: a process in Fuchsia is not created with any ambient capabilities. It is born into existence with nothing, and you have to actually consciously choose to hand over capabilities. It's funny, you look at the sandboxing code for Chrome on, like, every platform, and there's a whole bunch of work to go shut down all of this stuff that automatically gets turned on for all of these different processes. Whereas the Fuchsia renderer broker is this tiny little thing that essentially does nothing, except, like, set up the IPC channel or whatever. It's a much cleaner design. A good friend of mine, one of the lead engineers on Fuchsia, I think he described it as the love—

Deirdre:

yeah

Justin:

For everybody who doesn't like Windows: you really have to take a closer look at NT. I think most of the things you don't like about Windows are the Win32 things, but the actual NT kernel, the core, the design, was very well-thought-out, very forward-thinking. And I think Fuchsia took a lot of the best parts of that and actually incorporated lessons learned. So it is a capabilities-based operating system that really takes the capabilities part seriously.
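
A conceptual sketch of the "born with nothing" model. To be clear, this is not Fuchsia's actual API (in Fuchsia, grants are declared in component manifests and carried as kernel handles); it's just a toy showing that every capability is an explicit, enumerable grant:

```rust
use std::collections::HashMap;

// The only authorities that exist are the ones you can name and hand over.
#[derive(Debug, Clone)]
enum Capability {
    IpcChannel(String), // e.g. the channel back to the browser process
    Directory(String),  // one specific directory, never "the filesystem"
}

struct ChildProcess {
    handles: HashMap<String, Capability>,
}

impl ChildProcess {
    /// The child starts with an empty handle table: no ambient authority,
    /// so there is nothing for a sandbox to go back and shut down.
    fn spawn() -> Self {
        ChildProcess { handles: HashMap::new() }
    }

    /// Each capability is a conscious, explicit grant from the parent.
    fn grant(mut self, name: &str, cap: Capability) -> Self {
        self.handles.insert(name.to_string(), cap);
        self
    }
}

fn main() {
    // A hypothetical renderer: it can reach its IPC channel and one font
    // directory, and literally nothing else.
    let renderer = ChildProcess::spawn()
        .grant("ipc", Capability::IpcChannel("renderer<->browser".into()))
        .grant("fonts", Capability::Directory("/pkg/fonts".into()));
    assert_eq!(renderer.handles.len(), 2);
}
```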

Deirdre:

Yes. And I think the latest I've heard is they're swapping out, like, the Linux kernel or whatever, the base operating system, for things like the Nest Hubs. And they were just able to push it out, flip a flag, use Fuchsia instead of what you've been running since the beginning of your existence. And it just worked?

Justin:

I mean, whenever you have to do firmware updates in the field, you're always going to have some number of things that just rando-fail. But yes, a bunch of the existing Nest devices got the update, and it becomes a much simpler operating system. The real issue with Fuchsia is that you don't have device driver support. Like, remember the old Linux problems from the mid-nineties? That's where Fuchsia is. Frankly, there's actually a bunch of stuff—

Deirdre:

It's not that old! Like, you still run into that when you're just trying to plug in random crap.

Justin:

That's fair. Although the reason Linux-based operating systems are so popular for embedded devices is that you do have driver support for all of that, and Fuchsia has to build up that driver support. Like, I have friends on the Fuchsia team, colleagues, et cetera, but I don't have any more insight into it than anyone on the outside would have. I just very much like the design, and I very much like the decisions they've made. I also look at it as: Fuchsia has layers. It's entirely possible to take pieces of Fuchsia and integrate them over time, and I think that's probably the best path forward. You look at something like the Nest devices, where they just didn't do a heck of a lot, so it's easy to build that on top of Fuchsia. But if you're looking at something like an Android device or a ChromeOS device, you would probably start with: okay, we're not going to swap out the Linux kernel, because these devices have hardware support, the drivers, et cetera. We'll keep the Linux kernel and have a layer above it, and you would just keep evolving that.

Deirdre:

I like that.

Justin:

That's just my guess. I'm not speaking for anyone, and I have no insight into how that would work, but it's the approach that makes sense to me.

Deirdre:

I am a fan.

Thomas:

Justin, you don't come across as a Marine.

Justin:

I'm not. I was. I'm not anymore; that was 20 years ago.

Thomas:

If there was a group of people in a room and I was going to point out the one that was likely to be a Marine, it wouldn't be you. Is that a bad thing to say?

Justin:

uh,

Thomas:

Should I say that you come across as a Marine?

Justin:

Uh, it was an important, formative life experience. But it's the same point: I bet I could find pictures of you from 20 years ago, and it'd be like, that's a very different person than the one in front of me today.

Thomas:

Unfortunately, not. I'm asking just because I'm curious, but of all of the service branches, why the Marines?

Justin:

I mean, obviously,

Deirdre:

The Marines are cool. I knew Marines,

Justin:

Yeah, it was the hardest-looking one. I dropped out of an art program. I was at Northern Illinois, up at DeKalb, and had dropped out of a graphic... well, they called it studio art or whatever, at like 19, 'cause I just got sick of college. I'm like, I'm going to go enlist in the Marines. So that was how it happened.

Thomas:

How was that experience for you?

Justin:

Useful.

Deirdre:

Useful.

Justin:

yeah,

Deirdre:

Yeah.

Justin:

I don't know. I think, I think

Thomas:

Okay.

Justin:

people should take opportunities to broaden their horizons. It's very easy to just get stuck inside your box, stuck inside your perspective. I'm also not going to say that I'm a kindhearted person on Twitter or anything like that, but I see a whole lot of back and forth where people are making a whole lot of assumptions about any given situation. And if you've had enough diversity of experience in your life, it gets hard to make so many assumptions, because you're like, hmm, I don't know the rest of the details here. So I guess, yeah.

Thomas:

Would you say having worked in the IC changed your views on anything that you work on?

Justin:

I think I have a very, very different perspective because of, you know, spending—

Deirdre:

yeah.

Justin:

years in the intelligence community, seeing a lot of things from the other side. I'm also one of those people who is 100% supportive of mandatory service, not necessarily military, et cetera, but I think people make a lot of assumptions about how their government works while having no exposure to how the government actually works, and it kind of feels like people don't have skin in the game. It's funny: I did the Marine Corps, my brother did the Peace Corps, and my sister did AmeriCorps. We all did very different things, but we all did some term of government service, and I—

Deirdre:

Hmm.

Justin:

think it's a very useful experience.

Thomas:

Do you feel like people's ordinary model of, especially, the offensive work that the IC does in computer security... do you think people have a good mental model of how that stuff works?

Justin:

My knowledge is stale by many years now. I think people sometimes make a lot of crazy assumptions.

Thomas:

There's, like, a standing meme in our community that once you're at the NSA, when you leave and go back into industry, now you're an asset. It most often comes up with Dave Aitel. I think it might even be in Nicole Perlroth's book, that he's an NSA asset.

Justin:

It's hilarious, because frankly, there were a lot of people at NSA who, during their time there, were not happy with Dave, like the notion that he left NSA and went into the line of business that he did, et cetera. Yeah, I don't know. But Dave's also always been a very outspoken, very direct person; he's mellowed over the years, certainly. But yeah, I remember my time at NSA. Dave and I were both in a program called SNIP, which was like an interdisciplinary program, and I started it a few years after Dave left, and there were still stories about interactions. Or, it wasn't even stories about interactions, it's just that people had opinions. But yes, I am 100% certain Dave has never been an NSA asset after leaving.

Thomas:

I'm kind of a Dave Aitel admirer, in that, like, there's a South Park episode called "Simpsons Did It," where, like, the punchline is—

Justin:

Yes.

Thomas:

every joke came from the Simpsons first. And he was my "Simpsons did it" for a while. I gave up on ever getting ahead of him on stuff and moved into cryptography, which is why I'm here now. And, you know—

Justin:

Yeah. I think, Tom, Dave was a very polarizing figure back in the day, not just among his former government colleagues, but in general.

Thomas:

I also have a lot of admiration for polarizing figures.

Justin:

Oh, I guess I find Dave much less polarizing, but maybe that says more about me.

Thomas:

I have a sort of built-in admiration for polarizing figures, to an extent.

Justin:

I also, like, with Dave, I find his input very useful. He's genuine. I will say, one of the craziest things for me, when I started moving into the privacy space, was that there was almost an aversion to the idea of coming up with threat models and trying to scope the problem, because people wanted the flexibility of not having threat models, of being able to sort of ad hoc define things. And no, I'm not talking about privacy in terms of when you're actually spec-ing things down to, like, unlimited levels of threat model. I'm talking about things like: oh, we put up this new privacy feature. Okay, what's your threat model for that privacy feature? "It's a privacy feature." Yes.

Deirdre:

That sounds stressful, though. It sounds like everything's in scope. That sounds very stressful for you as a defender.

Justin:

Yeah, this is the problem with the privacy space. Like, I remember the security space back in the mid-nineties or whatever: it was basically lots of security products, lots of AV, stuff like that, and not much in terms of "hey, let's actually re-engineer these things." I totally remember what Thomas was talking about with the whole replacing-strcpy-everywhere thing: Microsoft really did care, and really did put in the time and energy to make some significant changes. But the first several years of my exposure, it reminds me of the privacy space right now,

Deirdre:

Hm.

Justin:

of people saying "look, it's a..." with no coherent threat model. It's like, "look, it's a privacy thing." So I guess what I'm kind of saying is that I would like to see, like, a Dave Aitel in the privacy space, or, you know, more people like that who will directly call things out when it's just ambiguous crap.

Deirdre:

This leads me into one of the last things I think we have time to talk about: organizational mindset about security and privacy, in Apple versus not. Maybe that's in Google, or in Android, or in Chrome. Because it seems like iOS should be very secure. It has good bones, like, architecturally, it has good bones. And yet. They say that they care about privacy, but it's like, define that a little bit more, especially with some of the things they've been pushing out lately, versus not-Apple. And it seems to be an organizational thing, because, like, we started this talk basically saying we know so many amazing security, and I think privacy, people working at these companies, but then it kind of peters off.

Thomas:

I mean, I think,

Justin:

Yeah.

Thomas:

You can talk about how, like, there's different engineering disciplines between the—

Justin:

Yes.

Thomas:

You guys have good insight into... Justin, you in particular have good insight into the engineering culture differences. Like, I feel like Apple's bona fides on privacy technology are pretty solid. They did a lot of work that wasn't published for a long time but is now published, like, for instance, the quorum HSM work for the PIN lookup for iCloud and all that stuff, right? They put a lot of engineering effort into stuff that other people aren't even considering doing. You know, the enclave processors and other things, right? Like, that was nowhere before they did it. They do a lot of stuff that they're not required to do, just 'cause they take the problem seriously. As a last thing to talk about, iOS CSAM is probably not the best thing for us to open up right now. But any technical work that they say they're doing, any engineering work, any safeguarding work, I tend to believe them and take them in good faith, that they put the effort in. That doesn't mean it's necessarily the right thing for them to do; my feelings about that are super, super complicated. But I wouldn't put that down to a culture of not caring. I think they'd probably—

Deirdre:

I guess what I'm—

Thomas:

Okay.

Deirdre:

getting at is, it feels sometimes like iOS and macOS security culture is resting on its laurels in terms of good architectural things, but then reintroducing the same bugs. Like, they fixed a vuln, then it came back and they had to fix it again,

Justin:

Well,

Deirdre:

several releases later.

Justin:

It's the engineering rigor. And this is my personal take. I interacted with Apple quite a bit over my time at Google, particularly when we shared a code base with WebKit, and I think it is fair to say that the team in Mountain View was essentially doing the security work for both of the browsers at that point in time. And my perspective, interacting with Apple over the years, is that they have very good people, but it's the number of people. This is a recurring pattern across Apple: the number of people they invest in something. Apple will put, you know, a person on it; Google will put a team on it. That's kind of a mentality. Google will over-engineer and overdo things, I have observed this. But, like, Apple has been getting a lot of hits for fairly dodgy web standards support, basic functionality that works in other browsers but is buggy and unreliable, et cetera, and that's been my experience too, a recurring pattern with Apple: that stuff requires a larger workforce just churning through bugs. So, one thing you said was "good bones." I think there are a lot of good bones, a lot of things with good designs. I actually think Apple's design for notarization is a very good design.

Deirdre:

Yeah.

Justin:

It's a design that makes a lot of sense if you want most of the anti-malware benefit of an app store without actually having to have a closed app store. I think their implementation of it has been terrible, but the actual design and architecture and the ideas behind it are quite good. The way you fix that is you throw a bunch more engineers at it to churn through the bug list and fix it and make it good. At Google, particularly post-Aurora, security engineering excellence just became a thing: okay, we would throw a whole bunch of engineers at it to just churn through the bug list. It's easy to instead think, well, obviously we are amazing engineers, and so our security engineering must be amazing; it's sort of easy to work that way. I don't know how you get that same kind of cultural connection at Apple.

Deirdre:

It's like a paradigm shift in identity, when you haven't had something like Aurora happen to you.

Justin:

Although I feel like they do keep having these things happen, right? Remember, Microsoft got hit with a whole bunch of worms until they had—

Deirdre:

Okay.

Justin:

to seriously internalize it. And Microsoft really wasn't even hit directly that much. There—

Deirdre:

That's true. There's lots of little things.

Justin:

Yes. And I feel like Apple should be at that point, where they're like: okay, we just have to accept that this isn't the whizbang shiny new feature you market around and you're good. This is the thing we have to do right. We need to throw a bunch of bodies at it, and they just need to churn through a bug list.

Deirdre:

And practically all over the place. They have to be, like, embedded and distributed. It's not just "we're going to throw a thousand engineers at iMessage and we'll be fine," or whatever. It's all over the place.

Justin:

Yeah. The Apple people on WebKit and Safari prided themselves on having a team a fraction of the size of the equivalent Chrome team.

Deirdre:

Yeah, but, like, the bugs. Maybe you could be proud of that if you hadn't had, like, a zero-day in every patch, every release. It's like: yeah, this was a zero-day vulnerability, and it was found by somebody else.

Justin:

Yeah. And that's my take on this. It's one of those things where, when you're one of these big companies, you get to a point where you're like: look, we have to rearchitect things, and there are intelligent things we can do to make this better. And I do see Apple doing a decent amount of that. But the other piece is, we have to throw the bodies at it to fix all the damn strcpys.

Deirdre:

Okay.

Justin:

And I don't see them throwing all the bodies at it. It's like: no, no, new whizbang technology that solves this! And it's like, nooooo.

David:

It's like an innovation-versus-maintenance thing, almost.

Justin:

It very much is. I think that's exactly it. It's like, my sister works in urban policy and planning, and she's like, we don't need people building bridges, we need people fixing the bridges we have.

Deirdre:

Yep.

Justin:

People only want to build new bridges.

Deirdre:

Yeah. That's exciting! And fun! And novel!

Justin:

Yeah. You get your name on the bridge, potentially.

Deirdre:

Yeah.

Justin:

Okay.

Thomas:

Bringing us back to Fuchsia.

Deirdre:

Yeah. Well,

Justin:

It's a fair point that Fuchsia is a new thing, but—

Thomas:

I don't know how much I disagree with that argument. Like, I think there's a place for big moves too, right? So.

Deirdre:

It's not like no one's been doing the maintenance on Linux for the last 20-plus years.

Justin:

It's not like Google in particular hasn't been doing a ton of it, particularly the security maintenance. And you have people like Kees Cook, where there are times I'm like, wow, he is single-handedly the protector... he's not single-handedly, but he's sure doing a lot of protecting—

Deirdre:

Linux. That feels like it.

Justin:

At times, certainly. There is space to do big new things, but I would say part of the Fuchsia thing is that they're also doing all of the little work to get it right.

Deirdre:

Nice.

Thomas:

I need Kees Cook on just to run us through everything from, like, 4.* up to the current 5.x. There's a blog somewhere, and I try to keep up with it, but there's just so much stuff going on.

Justin:

Yeah, the stuff he does is just... This is one of the things I really loved about working at Google: the amount of public and open-source contributions that you get to do. You just feel good about that. It's not that there weren't upsides and downsides to being at Google, and it's not a perfect company in any way, shape, or form, but I'm very proud of the vast majority of what I did at Google, and I liked the fact that I got to make so many public contributions in my time there.

Thomas:

I'll ask you one last question related to that. I think a lot of us probably already know about the work that Kees Cook does. Is there somebody else besides him that we're not paying enough attention to, that your work has brought you into contact with in the field? Come up with one real quick.

Justin:

Um, Abhishek Arya.

Deirdre:

Yeah.

Justin:

Yeah, so he was my first... You know, you're a mentor for a Noogler that comes on board at Google, a new employee starting at Google. So he was the first person I mentored at Google; he had just joined Chrome security. But now he's a principal engineer, or director, he's an exec now, and he's essentially responsible for the big fuzzing efforts, like—

Deirdre:

Yes, sir.

Justin:

ClusterFuzz, OSS-Fuzz, all of that. It literally started with him grabbing interns' workstations as they would leave. They'd leave for the summer, and he'd start grabbing their workstations and stacking them under his desk to run fuzzers. Then that evolved from his legs basically getting burned by all of these Xeon workstations under his desk to moving into the cloud and all that. And now, yeah, he's in charge of an entire team. And it's another one of those things that has a huge impact on the industry as a whole, particularly with OSS-Fuzz, where they're basically like: hey, all of this software is important, and we're going to give you a framework for building fuzzers and running them against it. It's amazing how much it's evolved, and the fuzzing stuff they've done, they keep incorporating all of the new advances in—

Deirdre:

Yep.

Justin:

fuzzing, in terms of, like, the different... the input-guided... I forget the names of all the—

Deirdre:

And the coverage-guided, and—

Justin:

Yes.

Deirdre:

They support AFL, and they support multiple different kinds of fuzzers. They did all the documentation to help you get a Rust binary covered and automatically updated, and they'll automatically report. It's extremely valuable, because I tried to set up an instance of ClusterFuzz myself, and it was like: okay, I could do this myself, or everything could be open and they could report back to me. It's a nontrivial amount of work to just make that project go, and it's extremely valuable to anyone who just sets up a fuzz target. It's great.

Justin:

Yeah. You've got your open-source software; you can just set up a fuzz target. I mean, it's not entirely plug-and-play, but it's about as close as you can get to plug-and-play when you're dealing with fuzzing. Yeah. That whole team.
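
For a sense of scale, "set up a fuzz target" on the Rust side looks roughly like this with cargo-fuzz and libfuzzer-sys; the parse_header function is a made-up stand-in for whatever library you want covered, and OSS-Fuzz's job is then to build and run targets like this continuously and report what they shake out:

```rust
// fuzz/fuzz_targets/parse_header.rs, run with `cargo fuzz run parse_header`
#![no_main]
use libfuzzer_sys::fuzz_target;

// Hypothetical function under test: the fuzzer's only job is to feed it bytes.
fn parse_header(data: &[u8]) -> Option<(u8, u16)> {
    if data.len() < 3 {
        return None;
    }
    let version = data[0];
    let length = u16::from_le_bytes([data[1], data[2]]);
    Some((version, length))
}

fuzz_target!(|data: &[u8]| {
    // Any panic, overflow, or sanitizer-detected fault in here is a finding.
    let _ = parse_header(data);
});
```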

Deirdre:

You can run your fuzz target as part of your CI, but the whole point is that you've got to keep it going. You've got to keep throwing the computers at the fuzzing, or else you may not get the full value out of it, and that's the hard part that ClusterFuzz solves.

Justin:

I don't get the thing where people have been trying to run fuzz targets as part of CI. It's kinda like: I'm just gonna breathe oxygen for, like, a minute or two, and then I'm gonna go about the rest of my day. And it's like, no, that's not the way fuzzing works.

Thomas:

Awesome. Okay. Well, Abhishek Arya, and also OSS-Fuzz: excellent, excellent examples. Thank you, Justin, so much for being here.

David:

Thank you.