Privacy is the New Celebrity

Bunnie Huang on Hardware Hacking, Secure Enclaves and Evidence-Based Trust - Ep 27

June 16, 2022

In this episode, Sara Drakeley interviews Bunnie Huang, security researcher, hacker, and entrepreneur. Bunnie wrote the book Hacking the Xbox: An Introduction to Reverse Engineering. He also helped create Chumby, a consumer electronics product designed to be modified by users and provide hackable widgets. Bunnie and Sara dive into trust models, and Bunnie outlines the challenges of developing evidence-based trust. Sara asks Bunnie for his take on secure enclaves, and Bunnie explains why "you shouldn't eat out of your toilet bowl." Sara and Bunnie puzzle over the biggest security flaw of all - humans.

Speaker 1 (00:02)
Hello and welcome back to Privacy is The New Celebrity, a podcast about the intersection of tech and privacy. I'm Sara Drakeley, Chief Technology Officer at MobileCoin. And today on the show we have Andrew Bunnie Huang. Bunnie is a security researcher, a hacker, an entrepreneur, and a recipient of the EFF Pioneer Award for his work in hardware hacking, open source, and activism. He wrote the book Hacking the Xbox: An Introduction to Reverse Engineering, and helped create Chumby, a consumer electronics product designed to be modified by users and provide hackable widgets. Bunnie has also served as a research affiliate for the MIT Media Lab and a technical advisor for several hardware startups. He has long been interested in secure enclaves and how to develop an open source enclave, which we'll get into in our conversation today. Bunnie, thanks so much for joining us.

Speaker 2 (00:48)
It's great to be here. Thanks.

Speaker 1 (00:51)
So to start off, how do you describe your work? Can you tell us?

Speaker 2 (00:54)
How do I describe my work? Well, is there a particular dimension you want to go in or just an overall philosophy of life or something like that?

Speaker 1 (01:03)
We can go in both dimensions of philosophy. What's getting you out of bed right now?

Speaker 2 (01:09)
Right now, the thing that I'm really focused on is trying to build a device that you can trust from evidence based trust, basically. So right now, a lot of people say, why do you trust your smartphone for sending a message or whatever it is? And they're like, well, I don't know. I trust Apple, I trust Samsung or I trust whoever, I trust the software vendor or something. But it's very nebulous because you're basically trusting in a brand, which in my opinion is kind of not too different from a religion.

Speaker 2 (01:38)
Right.

Speaker 2 (01:38)
It's sort of like what symbols and what guarantees and what church of belief you belong to, and none of it's actually really written in stone. The thing I'm trying to build is something where you don't have to trust me at all. In fact, I don't want you to. What I want is to give you everything you need to be able to make your own assessment of whether the thing I built is trustable to your standard. And the real challenge of it is not to just throw over a pile of unintelligible source code and schematics and stuff, but to try and distill it into multiple different levels where, however much you want to know about it, you can get a clean read: trustable or not trustable. So if you're just like, whatever, I'm not into a super hardcore attack threat model, then you can just say, okay, the signatures check out fine, right? You can just move on with the status quo. And then if you worry about supply chain attacks, then you can look at that side. If you worry about malware implanted by the vendor, you can look at that side. But all the paths are there, the breadcrumbs are there.

Speaker 2 (02:38)
And you can basically choose your own adventure and go as deep as you want in terms of figuring out what you can trust. But the point is, it's always based upon evidence. It's not based upon any sort of rule. It's not based on any sort of guarantee from me. You can measure it, you can do it yourself. It's all based upon your own personal factual experience. The reason why you could trust or not trust the thing I'm building.

Speaker 1 (03:00)
I love that. That aligns so much with what I think we need throughout computing. And there are several threads we can pull on right there. So what you're describing aligns with kind of the trustless idea. But is trustless a little bit too ambiguous of a term, or do you still feel like this is in a trustless paradigm?

Speaker 2 (03:19)
Yeah, I guess I never really tracked the whole trustless term. I think that's a term that was coined by the industry, and it seems to have been co-opted and used in a few ways. I get the general concept. It's the idea that you trust because there's evidence.

(03:37)
Right.

Speaker 2 (03:38)
But the problem I have is some of the people who are doing the trust stuff say, well, I have a certification for trustless. I'm like, but now you're trusting the certification body. Come on. The point of this is evidence-based trust, right? It's evidence. It's based upon physics. It's based upon, like, arguing-a-court-case kind of trust. Right. You are the jury and judge of your own trust. And this kind of dovetails into what you asked, why it gets me out of bed in the morning. That's the answer to that. But then there's a broader philosophy of life. Right.

Speaker 2 (04:10)
A lot of things I worry about, a lot of things I want to devote my life to is basically giving people back control of their own life agency.

(04:19)
Right.

Speaker 2 (04:19)
So you don't want to be controlled by technology. You don't want to be driven by ads or by whatever sort of input is coming into you. And so part of this is, how do we not make technology mysterious? How do we contribute to a commons of knowledge? And how do we basically always leave it so that we're not in a situation where we're once again going back to that sort of priesthood model, where I broke my phone, I must go to the Genius Bar because I'm not good enough to fix my phone. I mean, you're literally told, like, this power structure is built into the product and the marketing of it. Or like, oh, I've been locked out of my Google account, now I can't do anything with my life. Whatever it is, the whole idea is, how do we get to a world where if you do get in trouble, if you do feel like you're losing your grip on your agency and your reality, whatever it is, how do we leave breadcrumbs so you can find your way out and live in a modern world and regain control over all this stuff?

Speaker 2 (05:24)
And one of the core tenets is, can I trust this piece of hardware? Because basically, if you can bootstrap yourself into a trusted piece of computing, you now have a tool you can then use to open more and more doors and sort of move yourself forward.

Speaker 1 (05:37)
There's a beautiful quote of yours that's absolutely along these lines: "Without the right to tinker and explore, we risk becoming enslaved by technology. And the more we exercise the right to hack, the harder it will be to take that right away." I think that is so aligned with what you're saying. So on the one hand, sometimes you might have to delegate trust if you don't know how to hack. On the other hand, can you delegate trust to people who are doing that hacking, who are getting down to the evidence, who are tinkering and reverse engineering things? Are you still driven by this quote? How does this play into your life today?

Speaker 2 (06:15)
Yes, absolutely. That sort of drives all of these threads, along open source, along right to repair, along trustability. All these threads come together around the whole idea, and it basically boils down to agency. Do I have control over my life and my decision processes and where it's going?

Speaker 1 (06:33)
So we talk a lot about software and cryptocurrency. MobileCoin is a cryptocurrency. And at one end, to increase decentralization and democratize verification, you have to be able to run the software on any hardware. So there's kind of this hope that you can develop software that could run on any hardware. However, as you've observed, I'm sure, this is not how things have played out. On the one hand, because of the way that proof of work works, there's this incentive to develop hardware that can solve this specific problem the fastest, and that influences hardware development and roadmaps for hardware. And on the other hand, you have this trustless idea, or the hope to have a way to delegate your trust to a remote computing platform that can provide you evidence. Hopefully, this is getting us into SGX, which I know you have lots of thoughts on.

Speaker 2 (07:29)
Yeah, yeah. I see where the line of questioning is going. So cryptocurrency shapes hardware in multiple ways. But one of the most remarkable ways is that it almost reinvents the hardware business from the bottom up, in that there are very few hardware applications you can build where you can get a return on investment almost instantaneously. From the standpoint of proof-of-work ASICs, right, the people who buy them are the only people in the industry who pay cash. Most people have to buy chips, put them in products in a very long supply chain, put them on retail, and then a couple of quarters later they show up on the books. A cryptocurrency miner is profitable the moment you test it. It's barely been cut out of the wafer. They power it on after testing it and it's doing hashes, and they can monetize it if they get lucky. Right.

Speaker 2 (08:25)
So it's a really incredibly bizarre twist on hardware. And it's almost pure return, in a way, because the proof-of-work problem is so simple. All you have to do is just build more and consume more electricity and get more returns. Right.

Speaker 2 (08:44)
The incentive wheel is just like crazy around things like proof of work in Bitcoin and so forth. Right.

Speaker 2 (08:51)
And so you get into all these other arguments: is it really decentralized if the mining pool is really held by a few big guys, and the infrastructure gets crazy, and ASICs have such a huge barrier? In theory, you can run your proof of work on anything. I could build a little FPGA thing around it. I just won't make any money doing it because it's not going to be fast enough to be competitive. It's really interesting how profoundly some of these decisions affect the hardware industry. And it's really fascinating - it's almost this weird thing where you can connect all the dots from the economy to human psychology to hardware engineering to energy to physics, back to the economy. You almost couldn't come up with a better case study of all these things playing with each other at the end of the day. Yeah, it would be nice if there was a way to get out of the proof-of-work world in terms of the resources and the consumption. The one thing is, despite all of the waste of proof of work, at the end of the day, it is a measurable quantity and it is large.

Speaker 2 (10:02)
And again, going back to this whole, like, I don't have to trust even the laws of physics - it is a barrier, right. It's a real barrier. You can measure some barriers in terms of the thickness of steel in the vault, or the number of guns around the thing, or the number of auditors that you're paying and the guys on their payroll, or the amount of insurance you have, whatever it is. In this case, the barrier is just measured in terahashes, which turn into gigajoules of energy, that kind of thing. And so the nice thing is that it's as measurable as the thickness of steel on a vault. Right.
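
As a rough illustration of that measurability, here is a small back-of-the-envelope sketch of how a hash-rate barrier translates into energy. The hash rate and efficiency numbers are hypothetical placeholders, not figures from the episode:

```python
# Illustrative only: converts an assumed network hash rate and an assumed
# miner efficiency into the energy spent per day sustaining that barrier.
HASHRATE_TH_PER_S = 200_000_000   # hypothetical network hash rate, in terahashes per second
EFFICIENCY_J_PER_TH = 30          # hypothetical miner efficiency, in joules per terahash
SECONDS_PER_DAY = 86_400

def daily_energy_gigajoules(hashrate_th_s: float, joules_per_th: float) -> float:
    """Energy per day, in gigajoules, needed to sustain the given hash rate."""
    joules_per_day = hashrate_th_s * joules_per_th * SECONDS_PER_DAY
    return joules_per_day / 1e9

print(f"{daily_energy_gigajoules(HASHRATE_TH_PER_S, EFFICIENCY_J_PER_TH):,.0f} GJ/day")
```

With those placeholder numbers the barrier works out to roughly half a million gigajoules per day; the point is only that, like the thickness of steel, it is a quantity you can measure rather than a promise you have to take on faith.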

Speaker 2 (10:35)
Whether it's a good thing or a bad thing that we should be using steel for vaults is a different question. But it's a very simple trust model. Very simple trust model. Stupid trust model, almost, right. But in a way, you want things to be simple for trust. Right.

Speaker 2 (10:47)
A lot of the other schemes that are trying to do things like staking or deferred computation, remote attestation these types of things. They try to do an end run around simple trust models. Right.

Speaker 2 (11:00)
And a lot of people are very clever, and I think they're working on ways to solve this. But I think it's an open question whether we can even work through all of the incredibly intricate incentives that people have. At the end of the day, it's people, and the incentives that people have. We have this weird thing with Bitcoin and the incentives of people and creating wafers and mining and the whole thing. Now imagine that you have a staking ecosystem. What are the incentives there around a staking ecosystem? Right.

Speaker 2 (11:34)
And how can people use it? You have these Byzantine trust models, whatever it is. But it's crazy how complicated that is. And then you have the saying, code is law, or something like that. That's a very typical saying. And you have the smart contracts that famously implode because of a bug or something like this. And then there's a question: okay, now can you go to the courts and recover something, or is code law? Right.

Speaker 2 (11:55)
At that point in time, do humans now actually get any say? Is code really law, or is law law? Right. Again, you're getting back into these very subtle human incentives. And so, for example, people who are in cryptocurrency who didn't lose money would say, code is law. It's very easy.

(12:12)
Right.

Speaker 2 (12:12)
Because that enforces, like, the whole ecosystem. But the people who lost money want law to be law at the end of the day, because they want to be able to have some recourse to get their money back, because screw the whole ecosystem, they're poor. The only recourse to get their money back is through law. Right. So again, you're back in this interplay of technology and incentives and issues like this. We already kind of uncovered the crazy Bitcoin incentive issues: when someone told me they can monetize Bitcoin ASICs the moment they test them, a light went on in my head, like, wow, that is a cash ecosystem. That is crazy. Semiconductors don't work this way.

(12:51)
Right.

Speaker 2 (12:51)
It completely breaks that business. I feel like in all these other models where we're trying to look for ways to secure cryptocurrency, there's going to be something equally profound in terms of the way humans work, the way incentives work. The people who are guaranteeing the attestation of your enclave - what is their incentive model? What are they making, who charges for what, and who applies social pressure to get this, that or the other?

Speaker 1 (13:18)
Let's get into enclaves.

Speaker 2 (13:20)
Yeah, sure.

Speaker 1 (13:21)
So let's start with what is an enclave by your definition?

Speaker 2 (13:27)
Yes. The idea of an enclave is that we're again trying to get back to this idea of evidence-based trust. It tries to get there. And the core problem is that even if I give you all the open source, for example, for the Linux kernel, by the time you finish auditing it, the Linux kernel will have rewritten itself, because it'll take you, like, a year to audit it, and there'll be so many patches it will take another year. So basically you're just trying to measure the age of the Ship of Theseus. It's just not possible, because by the time you've measured it, it's already changed.

(13:58)
Right.

Speaker 2 (13:58)
This kind of Heisenberg uncertainty principle about complex systems.

(14:02)
Right.

Speaker 2 (14:03)
And so one approach is like, okay, well, screw it. Let's make an enclave: a small, defined attack surface, a simple piece of hardware, which will give us some root of trust, something that we can reason about in terms of, like, the privacy of our keys or the strength of it or some measured threat model. And we put it into, ideally, a separate piece of hardware. And now we can basically say, well, we can throw all the other attack surface out the door. All that complexity doesn't matter, because we already encrypted it. And we've got a cryptographically hard problem being executed in an enclave that we trust; therefore everything else around it we don't care about.

(14:44)
Right.

Speaker 2 (14:45)
So that's basically the whole idea of an enclave: you've gotten this enclave of sanity carved out. And the problem with the term enclave is that people kind of abused it. I think in its purest form an enclave should be a separate piece of hardware and a separate execution domain, with no side channels, nothing going into the unsecured domain.

(15:05)
Right.

Speaker 2 (15:06)
But of course, that costs more money. And so then you have these people who think they're very clever and they can build virtual machines that can serve as enclaves. So why don't we just go ahead and reuse the CPU and just make these really strong guarantees about, like, borders around my thing. And this will be the enclave. It's actually running on the exact same CPU that you had before. But you can also run your insecure software here. And guess what? It'll be fine, because I'm very clever, or whatever it is. And again, this goes back to the whole principle: you don't eat out of your toilet bowl.

(15:36)
Right.

Speaker 2 (15:38)
Even if you were able to go ahead and just clean it out with Lysol or whatever it is. But what if I just missed a spot and there's a little poop in my food? That's kind of gross, right? You don't eat out of your toilet bowl. So we have a kitchen and a dining table and a toilet, separate rooms. Not because we don't think we can clean things well. We know how to do disinfection, we know how to do it to a very high standard. Sterilization is a very high standard, with autoclaving and that sort of thing. It's not that we don't know how to sterilize, it's just that it's more hygienic to have separate areas. And so the problem with a lot of enclaves is that the term has now been abused, in my opinion, to represent things that are actually executing entirely on the same computer, using the same cores, the same registers, the same caches, the same memory hierarchy, the same pins, whatever it is. And so you have this panoply of side channels, you have this panoply of leakage, and then you also have the complexity of trying to marshal everything.

Speaker 2 (16:38)
And again, you're back in the situation of like, how do I trust what this enclave is doing?

(16:43)
Right.

Speaker 2 (16:44)
Long answer to a short question.

Speaker 1 (16:47)
That's great. Yeah. Well, and if you were going to try to design an enclave - okay, actually backing up one step, there's also this sense that if you have an enclave that has any closed source aspects to it, is that something that you can trust? Because I think then you end up trusting the brand again, or you end up trusting whoever is building it.

Speaker 2 (17:10)
Right.

Speaker 1 (17:12)
How do you think about the ways in which open source and security kind of go together?

(17:17)
Right.

Speaker 2 (17:18)
So, again, that's a really good question. This goes back to this thing I'm trying to build - and ideally, I have one right here - these little devices that you can trust.

(17:31)
Right.

Speaker 2 (17:31)
So this is actually phase one, where the device has an FPGA on the inside and has its own system on chip that serves as the enclave and runs the application software. The next phase of this is to actually not do an FPGA, but to try and make some silicon. But as you say, the problem is how do we know what's inside a piece of silicon? That's a really incredibly dicey problem. And unfortunately, the state of the art today for how do I trust a chip is: well, I read what's printed on the outside of the box. It's so bad, people don't even talk about it.

(18:00)
Right.

Speaker 2 (18:01)
Really, in terms of that trust problem at the end of the day, that's not going to cut it if I'm going to make my own chip. So actually, the prerequisite to taping out a chip is designing a system that allows you to inspect the chip at the point of use. So basically, you receive the chip. And we're getting a little in the weeds here. But basically, I want to build a system that's based on a relatively inexpensive laser and a scanning mechanism, plus some other guarantees based upon measurements of the chip and timing, to at least say the right broad components of the chip are on there: this many gates are on it, this much RAM is in it, the buses are here. Because light is big and transistors are small, we can't say exactly what the transistors are or exactly what these things are doing, but we can at least say, for example, the interference pattern of the laser interacting with the expected mask pattern looks about right. And we can also then compare that against the sort of change you would have to make to do something meaningful in terms of a computational deviation.

Speaker 2 (19:06)
And between those two, if you've made those two have no overlap, you can say that we've guaranteed the larger gross structure, and at the fine level, we've also guaranteed there are no modifications. Therefore, we can trust this piece of silicon that we're running on, and we actually know for sure this is built correctly. And to that extent, like you said, we need to know the design of the chip. So open source is important for it in the sense that we need to be able to know what we're looking for when we're inspecting. And ideally, open source is open enough that we can check things for functional correctness and whatnot from source code inspection. But from the standpoint of just verifying the chip hasn't been tampered with, you may even be able to just go with: here's a chip, here's the reference pattern, make sure they line up.
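
As a loose sketch of that "here's a chip, here's the reference pattern, make sure they line up" idea, the toy code below compares a measured scan against a reference image with a normalized correlation threshold. It is purely hypothetical and glosses over the optics entirely; it isn't a description of the actual tooling behind this project:

```python
import numpy as np

def normalized_correlation(measured: np.ndarray, reference: np.ndarray) -> float:
    """Pearson-style correlation between a measured scan and the reference pattern."""
    m = (measured - measured.mean()) / (measured.std() + 1e-12)
    r = (reference - reference.mean()) / (reference.std() + 1e-12)
    return float((m * r).mean())

def chip_matches_reference(measured: np.ndarray, reference: np.ndarray,
                           threshold: float = 0.95) -> bool:
    """Accept the chip only if its scan lines up with the reference.

    In practice the threshold would have to be chosen so that the smallest
    meaningful modification (a real computational deviation) falls clearly
    below it, i.e. so the two cases have no overlap."""
    return normalized_correlation(measured, reference) >= threshold
```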

(19:55)
Right.

Speaker 2 (19:55)
Because honestly, at the end of it, that's what most people really want. And even if you haven't gotten down to the point of, like, thorough gate-level analysis, these types of things, we've moved it from literally reading the printing on the outside of the box to something meaningful we can say about the structure of the chip itself. It's a huge leap forward in terms of trustability. It comes at a cost, of course, but that's where I think this needs to go in the end, if you really want to have something that you have an evidence-based reason to trust. Of course, you can trust it for other reasons all you want.

(20:25)
Right.

Speaker 2 (20:25)
But I'm talking about evidence based trust.

Speaker 1 (20:27)
And is that the Precursor that you just flashed for us?

Speaker 2 (20:31)
Yeah. This thing is called Precursor. The overall project as a whole is called Betrusted. And so Precursor is the first phase of Betrusted, because it's a precursor. And eventually we want to get to that kind of vision where we have chips that you can get and inspect relatively cheaply. I mean, we can get into numbers and stuff. It's not going to be something you will do in your kitchen or something like this. But if you can know somebody, or go to a center, or do it yourself, and just look at the results and say, yeah - for my root of trust, where my crypto keys are located, or my wallets, or my U2F authenticators, my crypto wallet, these types of things - I know that's true and real at the very least. And that is actually a big leap forward. You know that those things can't be violated.

(21:25)
Right.

Speaker 1 (21:25)
And let's describe what it looks like for our listeners. So it looks almost like a little cell phone. How would you describe it affectionately?

Speaker 2 (21:36)
I mean, it's designed to remind you of a mobile phone, because it's a form factor that we're very familiar with. A lot of people compare it to a BlackBerry. There's a physical keyboard. And the physical keyboard is very deliberate, because it's a very simple thing, and it doesn't have a touchscreen. It doesn't have an IME that could have keystrokes diverted off or recorded or something like that. Black and white screen - again, simple. Everything simple and easy to inspect. But it also has a decent amount of capabilities: full graphics, and it can do IO, networking, USB. We're just laying in the support for U2F and FIDO, these types of authenticator protocols, on the inside of it. So you're not going to use this to browse the web. You're not going to play games. And the whole idea of even having an app store would be ridiculous, because the idea is you trust everything. You're not just throwing random stuff on the thing that you really trust. Again, this is like not eating where you poop. This is a very clean device. It does a few things very well.

Speaker 1 (22:29)
And so this might get a little nuanced and technical, but I think it's really important to talk about something that Precursor and your work right now are helping to address, which is the struggle of plausible deniability and confidentiality, and how to protect users against the so-called wrench attack.

Speaker 2 (22:51)
Right.

Speaker 1 (22:52)
Can you describe this problem space?

Speaker 2 (22:54)
Right, yeah. So this goes back to sort of the original premise of the project. It was actually started as an outgrowth of work that I did with Ed Snowden back in the day, trying to secure phones for journalists who are operating in heavily denied zones. They're up against state-level adversaries.

(23:11)
Right.

Speaker 2 (23:11)
And you end up at precursor.

(23:13)
Right.

Speaker 2 (23:14)
But one of the things that's in the back of my mind is, now if I build a device that's very secure, are we really trying to secure secrets, or are we trying to keep people safe at the end of the day?

(23:25)
Right.

Speaker 2 (23:26)
And so, in my view, all security must serve people at the end of the day. It's not a service in and of itself for the crypto keys.

(23:36)
Right.

Speaker 2 (23:36)
And so some people say, as a rule, never carry secrets that are worth more than your life. These are all good rules to have. But sometimes some people will choose, for very good reasons, to take on very big risks. And those people oftentimes change the world. Right. As technologists, I think we maybe almost owe a debt to people who are more brave than us, to help them do the things they do to keep us free.

(24:00)
Right.

Speaker 2 (24:01)
So the idea of plausible deniability is that the more common issue now is - let's just say the devices are perfectly secure.

(24:10)
Right.

Speaker 2 (24:12)
The next thing you do is you go after the user.

(24:14)
Right.

Speaker 2 (24:15)
And so how do you go after the user? Well, police checkpoints, customs stops, whatever it is. You go into a country and they say, oh, you want to enter here for your meeting or your conference, whatever it is that's really important to you? Well, you've got to unlock your phone and show me your whole timeline, all your stuff, give me your passwords, whatever it is. They can make a whole lot of demands. And they're documented. This is not a fictional case. This happens all the time in various countries, right? And I think it would be good if people had the ability to be like, no, really, there are some things in my life you guys probably just shouldn't know about, right? Like, you're curious about my interactions in this particular space - I'm happy to show you this, but you don't need to know about my bedroom life. You don't need to know about all of my accounts. You don't need to know about my medical conditions, whatever it is.

(25:03)
Right?

Speaker 2 (25:03)
So I would like to be able to show you some section of it, or whatever it may be. I would like people to be able to elect to do that, or just say, look, screw it, I'm just going to deny everything and throw it all away. So the idea in this device here is that I wanted to give people the ability to have plausible deniability. And we've seen plausible deniability in action over and over again in the past years. Like, the number of times you see interviews of really powerful people going, I don't recall, I don't remember, I don't recall - just stonewalling things. That's plausible deniability at work, right? I mean, a lot of lawyers say that's a terrible approach towards a defense, but, man, it has been incredibly effective lately for a lot of people. So how do I get everyday people like you and me the ability to say, I don't recall, I don't remember, and be able to do it knowing there's no way anyone can catch me? So this particular device has the ability - that's a long preamble to the punchline - this particular device has the ability for you to keep secrets in what I call a set of bases, right?

Speaker 2 (26:04)
And so you can say, I have a regular, everyday basis where I keep all of my less confidential things, and I can open up and unlock a set of different bases. And these actually overlay directly onto my existing applications. So, for example, I have a contact book, and you're in it, and my mom is in it. And I want to also have contact with my doctor, but I don't want people to know who my doctor is. I can put it in a basis. And when that basis is unlocked, my doctor will appear. And when I close it, the doctor will not be there, but my mom and you will still be inside that contact book. And so the idea is that it gives you survivability at a customs checkpoint. It's not like you're showing someone a completely blank thing, where they go, okay, no, really, show me the real thing - $5 wrench hit to the head until you show the thing. No, really, this is all the stuff that's here, whatever it is. That's the plausible deniability feature. And this is designed such that if you truly forget the password, it's the same as deleting it, right?

Speaker 2 (26:58)
So literally, "I do not recall" is the same as "it's been deleted." If you can't recall it, it's gone. The idea is to make it so that, cryptographically, you have the option to elect to deny something and know that you can't be caught for it.

(27:17)
Right.

Speaker 2 (27:18)
And whether or not you do that, that is a social question. Maybe you shouldn't do it in all cases. Maybe you should disclose everything to certain people.

(27:27)
Right?

Speaker 2 (27:27)
But the fact of the matter is, if you have a system that keeps secrets but then has, like, a file named secrets.txt that's encrypted under a password, someone will see it and be like, you've got to give me the password for this, I'm sorry, right? This one doesn't have that problem. So you know that no one is going to be able to take the device, forensically dump it, and analyze it with the biggest computer they can - they can't prove or disprove that you've given up all the passwords on it. There's no tell, there's no leakage of that sort of information. So now you have the option to exercise that privilege of plausible deniability and be able to make choices about privacy and security that way.
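
To make the "forgetting the password is the same as deleting it" property concrete, here is a toy sketch of the general idea: each basis's records are sealed under a key stretched from its password, and a record that doesn't decrypt is indistinguishable from one that never existed. This is an illustrative simplification using the Python cryptography package, not the actual on-device database design:

```python
import os
from typing import Optional
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.scrypt import Scrypt

def derive_key(password: str, salt: bytes) -> bytes:
    """Stretch a basis password into a 256-bit key."""
    return Scrypt(salt=salt, length=32, n=2**15, r=8, p=1).derive(password.encode())

def seal_record(password: str, plaintext: bytes) -> bytes:
    """Encrypt one record under a basis password."""
    salt, nonce = os.urandom(16), os.urandom(12)
    key = derive_key(password, salt)
    return salt + nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def try_open(password: str, blob: bytes) -> Optional[bytes]:
    """Return the record if the password unlocks it; otherwise act as if it doesn't exist."""
    salt, nonce, ciphertext = blob[:16], blob[16:28], blob[28:]
    try:
        return AESGCM(derive_key(password, salt)).decrypt(nonce, ciphertext, None)
    except Exception:
        return None  # wrong or forgotten password: same outcome as "never existed"
```

A contact book built along these lines would simply skip any record that doesn't open under the currently unlocked bases, so a locked entry never appears and leaves no secrets.txt-style tell behind.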

Speaker 1 (28:06)
Love that. And plausible is the key there. And the idea of taking the tools that we have in the social space and in meatspace and in the legal space, and bringing those tools into the space of zeros and ones, is really powerful.

Speaker 2 (28:26)
I really want to make sure that - again, part of the thing is, if you look at some of the people who deny things all the time on TV or whatever it is, you can say what you want; some of them are just good at being pathological liars. A lot of people in technology aren't that way. They're not that type of person. Deniability doesn't come easy to them, because they're logic based and they know if they're caught, it's going to be a problem. But if you give them a tool and tell them, you can deny this, and cryptographically no one can prove that you're wrong or right in the end, I think it makes it a little easier for people who aren't expressly trained or psychologically primed to be that kind of person to exercise these tools that the powerful and influential have.

(29:06)
Right.

Speaker 2 (29:06)
And so I think that helps to level the playing field a bit more for the small people. That's the idea. That's the hope, at least at the end of the day.

Speaker 1 (29:14)
So critics might say that you're enabling people to lie with this technology. Why do you think it's important to have this privilege?

Speaker 2 (29:21)
Yeah, sure. You always have the ability to disagree. You can say, right, we're getting back into this world where, as we've seen so amply demonstrated, people can say they have alternative facts now, these kinds of things. So we're entering this world where it's not really lying, but we're now even fundamentally arguing about what is the nature of truth, these types of things.

(29:48)
Right.

Speaker 2 (29:49)
Which I don't like. I don't like that we're entering this particular world, but I don't want the people who are able to do that to have a really strong leg up in terms of being able to manipulate me, to forward their agendas and their narratives, these types of things. And so I want a way to be able to push back in my own way against that particular trend. The other thing I always carry with me is: don't take it for granted that authorities are correct.

(30:24)
Right.

Speaker 2 (30:25)
I believe in social cohesion and structure and being a good citizen and taking care of people and loving people, these types of things. But that does not necessarily mean that authority always has your best interest in mind, or that they're well guided, or whatever it is. And again, you can say, okay, well, we have a process for this, you can go to the judiciary, whatever it is - and good luck. Some of these judicial processes take decades. I mean, I'm still stuck in a freaking lawsuit in the United States, years and years and years after filing it. You can just basically die in court.

(30:56)
Right.

Speaker 2 (30:57)
And so while the judiciary is wonderful and I had a lot of respect for the Supreme Court, I think that things are changing now, and it's more important even now, especially as technology is coming along and society is evolving, that we start to evolve our mechanisms and our social norms and our ability as technologists to be able to navigate this very cringy, very scary kind of world that's coming at us at the end of the day.

Speaker 1 (31:25)
And for someone who has a deep knowledge of the vulnerabilities of hardware, are there any tech devices out there that lots of us use, but that may be less secure than we think they are?

Speaker 2 (31:39)
All of them, depending upon what your threat model is at the end of the day, right? Here's the honest truth, and this is the sad part about it: it doesn't even need to be that insecure. It's just about what we believe is in the end user license agreement now.

(31:57)
Right.

Speaker 2 (31:58)
The amount of stuff you just give away for free, the amount of privacy that you lose for just wanting to see some cat photos or whatever it is at the end of the day. Right. You don't have to exploit the hardware. You exploit the human.

(32:11)
Right.

Speaker 2 (32:12)
At the end of the day, sure, there are a lot of things that leak data on it, but the reason why we haven't seen more hardware-level attacks and more of these things is because you can just dupe a person into hitting a button and sending a recording or whatever it is. This is not an uncommon problem. We're seeing this more and more, and the phishing and the spam attackers are much more sophisticated at tricking human beings, at emulating authorities and making calls that sound threatening. I've seen some of them - they're so damn clever now. They tell you that you have fraud, so they put you in an upset mood, which primes you to be more compliant. Then they ask you for your passwords. They've got how to work your psychology down to an art. Right. What devices should we worry about? Frankly, all of them. But I think we also need to have a holistic view of what it means to be secure and have control over our lives and protect the things that we care about.

Speaker 1 (33:15)
And when you're talking about a threat model, that's a particular term. Can we describe that a little bit? For example, you can look at it as how much does it cost your adversary to get what they want? And how do you think about threat modeling?

Speaker 2 (33:31)
Yeah, threat model is a bit of jargon there. And the reason why threat models are so important is that if you don't define the correct threat model, first of all, you're wearing tinfoil hats and you're wasting effort and you're being silly, and you're also missing the big picture. At the end of the day, it's sort of like, I don't know, worrying about the thickness of the steel in the bank vault when actually all your funds are electronically stored and there's a weak password on it or something like this.

(34:06)
Right.

Speaker 2 (34:06)
And so the threat model means: are we actually worried about a physically present attacker, or are we worried about an electronic attacker, let's say? And what is the character of the attacker? Are they going to be well funded? Are they not going to be well funded? Blah, blah, blah, all these sorts of things. And so you create a model of the adversary that you're trying to defend against. Once you've agreed upon the model of the adversary you're trying to defend against, you can now talk about rational countermeasures that don't go way off the deep end about what you're trying to defend against.

Speaker 1 (34:35)
You don't have to create a sci-fi piece of technology to defend your password.

Speaker 2 (34:41)
Right. Or more importantly, a threat model also says, at what point do you throw in the towel?

Speaker 2 (34:49)
So if you say, look, this thing can defend against an adversary up to the point where they spend $100,000 to break into it, then you shouldn't put a secret in it that's worth more than $100,000, right? That's a really nice thing to have, where you can say this isn't worth more than $50.

Speaker 1 (35:03)
Okay, fine.

Speaker 2 (35:03)
If you put $49 on it, no one's going to break it, because they're going to lose money.

Speaker 2 (35:07)
So having those very clearly delineated - I mean, again, the money it takes to penetrate something is just one metric for these types of threat models. But whenever someone starts talking about security and what you need to worry about, it's always a good thing to come back to: what is your threat model? What are you trying to protect against? Who do you trust? Who is not trusted? Is the government in the threat model or not? Is it just random script kiddies? Are you talking about paid professionals coming after you? Is your threat model yourself, like you're forgetting your phone and losing your stuff? Actually, one of the things that a lot of people don't realize is how much crypto is probably lost just because people lost their passwords or wiped out their wallet or whatever it is.

(35:52)
Right.

Speaker 2 (35:52)
So in a way, that's a threat model, losing your root keys.

(35:57)
Right.

Speaker 2 (35:57)
There's probably some large amount of lost value there. You really need to make sure you're considering all the possible angles and what they are.

Speaker 1 (36:08)
So I see a theme with your temperament and your interest in your activities around reverse engineering. At the beginning of your career, you reverse engineered some really key technology, for example in the Xbox, and that led to writing Hacking the Xbox. And throughout this conversation, I can see how you're reverse engineering society, reverse engineering people. So you have this mindset around reverse engineering things, but it has also come at a cost to you. And I think you've written about this as well: how you've had to contend with the Digital Millennium Copyright Act, and how that has had a chilling effect on people's ability to take apart and explore the insides of an Xbox, as you've said. So, yeah, how do you recommend that people protect their privacy while still being able to tinker, being able to reverse engineer?

Speaker 2 (37:09)
Yes, I think there's always parties who don't want you to take things apart.

(37:14)
Right.

Speaker 2 (37:15)
For various reasons - there are technical advantages, there are business advantages that people want to have, there are legal reasons people don't want you to take things apart. And honestly, at the end of the day, within very broad brushstrokes, I have found that - and don't take this as legal advice - if you're curious and you're just doing it in the privacy of your home, just go ahead and do it, right? Honestly, you can worry about all these other consequences or whatever at the end of the day. But again, the courts have not been terribly fast at addressing these things. The legal process has not been terribly fast at addressing things, and technology moves really fast. And so from a very practical standpoint, people who are curious and want to learn and tinker should just do it.

(38:00)
Right.

Speaker 2 (38:01)
And honestly, if someone calls you out for it you'll be a great test case.

(38:04)
Right.

Speaker 2 (38:04)
Because you're just doing what comes natural. You're learning the way humans learn.

(38:07)
Right.

Speaker 2 (38:08)
What you're doing is not fundamentally a legal question, whether or not the law is inked one way or the other. And that's one of the things we're almost missing right now: good test cases for this type of activity. Because basically, laws only keep honest people honest. The people who are going to do bad things anyway will break the law and don't care about it.

(38:24)
Right.

Speaker 2 (38:26)
So that's kind of the realpolitik of it. But also, a lot of the engagements that I have had taking things apart, reverse engineering, whatnot - early on, I didn't understand the world. I didn't understand how it worked. I didn't understand law. And so I got into some hairy potential court cases and some actual court cases, and I'm in one right now. I have a device, and I want to be able to process video in the privacy of my own home, to be able to put subtitles on it, to be able to do machine learning on it, all these sorts of things. That's actually illegal because of the Digital Millennium Copyright Act, Section 1201. Just decrypting the video that's on a DVD or something like this, something that's been encrypted, is itself a crime. And it just seems ridiculous, actually, that I can't do these things in the privacy of my own home. And so there is actually a lawsuit right now with the United States government, which has been going for several years, where I'm asking for relief. I'm hoping to get the right to be able to do this through the front door, basically be able to do it straight and narrow and get that all fixed.

Speaker 2 (39:40)
But I can tell you, having knocked on that door for a long time, it's a long, hard path. And it's not a practical thing for me to suggest that a kid who's 14 years old and just curious about what's inside a device should go get a law degree and file a bunch of lawsuits in order to learn something about technology.

(39:57)
Right.

Speaker 2 (39:58)
I think for kids - and not just kids, but adults, too - if people are curious, then within reason, just go forth and learn and experience the world, because you'd be missing out otherwise.

Speaker 1 (40:12)
Yeah. It is truly chilling to think about the ways in which being able to go through the front door and interact with the technology that's in your own home, that maybe has your own data on it is kind of obscured in this legal quagmire.

Speaker 2 (40:32)
Yeah. And it really has a chilling effect, like the amount of innovation that doesn't happen. It's very hard to measure a negative space at the end of the day - what didn't happen because of all these laws and whatnot. But I can tell you, back in the day when we were at Chumby and we were talking about whether we could transcode video, whether we could move video around, all this sort of stuff - this was before YouTube or whatever it is - we didn't touch any of it, because we were so scared of Section 1201.

(40:58)
Right.

Speaker 2 (40:59)
Maybe we could have gone that way, for better or worse. Some people fought it, and now we have things like YouTube and Netflix or whatever it is. But I can tell you, in that particular instance, we did not do a whole bunch of stuff that actually was probably pretty reasonable to do, because we were scared to do it. That's literally chilling of innovation at the end of the day. And so it's hard to measure that negative space, because it didn't manifest, right? We're only talking about missed opportunities at the end of the day. And the arguments of all the other people who are pro Section 1201 are like, well, show me a problem that has happened. It's a very difficult case to fight. But the number of ideas that haven't happened is remarkable. And for better or for worse, if you look at other ecosystems where the laws are different around these things, where the practice is different, you actually see a very different style of innovation, a very different style of learning. And I wonder sometimes, if we had just changed that, would we not have a more robust, more fair ecosystem?

Speaker 2 (42:07)
I mean, one of the things that Section 1201 of the DMCA does is it tends to concentrate power. People who have lawyers and have the ability to get the licenses, or whatever it is, get the technology and they get the power. And then you have this barrier around just little everyday people innovating and coming up, because we don't have the lawyers, we don't have the money, and we don't have these kinds of things. And so we're missing all this diverse fecundity, this wonderful environment. But you see other places, and when individuals are allowed to innovate, you can see some pretty cool stuff come out.

Speaker 1 (42:37)
This plays into IP law, copyright, the marketplace of ideas - however you feel about that term - but this idea that there is an ability to innovate and to be inspired by other ideas and to not feel like you are in danger for innovating and tinkering. And I think that you also have a lot of experience traversing these supply chains in a lot of different cultures and different places and different environments. You even wrote The Essential Guide to Electronics in Shenzhen, and you've been featured in videos showing how to build a working mobile phone from parts you picked up in the market. What have you seen in supply chains that you think is important for the future of technology?

Speaker 2 (43:22)
Oh, man.

Speaker 2 (43:23)
I mean, these days, supply chains - the supply chain is the new Mercury in retrograde or something like that, because it's all supply chain problems these days. It's really interesting. God, I don't even know where to start. But I think the main thing to remember is that supply chains are made up of people. We like to think of it as, like, how many chickens can we get, or how many chips can we get delivered at a particular time. But at the end of the day, there's a person - whether it's a farmer or an engineer in a workshop or something like this - and they have incentives. And supply chains are all about balancing incentives and making sure we have fair deals and fair trade and all this sort of stuff. And this becomes a geopolitical issue very, very rapidly. And this is the world we're in right now. And then the other thing about supply chains is that they're also very physical, in that unlike computers and cloud computing, where you can decouple the point of innovation from the point of deployment, from the point of scale-up, from the point of revenue, supply chains are about moving atoms around. And so if you can concentrate 20 vendors in a city who can all produce roughly the same thing, you have a very competitive, very efficient supply chain.

Speaker 2 (44:37)
If you split those guys out a little bit further, they all start to get small regional advantages, and you get a less efficient supply chain. So a lot of people ask - one common question I get is: you've been to China, you've seen how they do hardware, how do we put that in Silicon Valley? How do we bring it back to the United States? It's not that easy, because a forest is not just transplanting a big redwood tree into the middle of the city and saying, we have a forest now. A forest is the bugs and the mulch and the bushes and the thousand years of dirt in which the tree can take root and grow that big and tall and verdant and green at the end of the day and be self-sustaining.

(45:17)
Right.

Speaker 2 (45:17)
And so someone needs to be really fond of making mulch in order to have a really good forest. And part of the problem in a lot of supply chain stuff is people don't have much appreciation for the little things: things like screws, what do you do with defective parts, what do you do with e-waste, all these little things. It turns out that one of the things that China is, for example, good at is recycling e-waste.

(45:43)
Right?

Speaker 2 (45:43)
I mean, if there's any value to be had in those chips, they pull them out, they polish them up, and we say, oh, they're reselling them like new, they're fake, whatever it is. It's also recycling, by the way. And the chips are perfectly functional and they're usable. And as long as it's disclosed that this is what it is, it's actually a good thing, not a bad thing. And in that ecosystem, they're able to leverage this, and in a way that gray market is the mulch that supports everything. You can plan your production runs, but you need to have some slop. You need to be able to get some chips sometimes when you're not quite ready for it; you have some demand that's unpredictable, you have a shortage, you have a line-down problem or whatever it is. Those gray markets are fueled by recycling and e-waste and those things that are a little bit off-market, whatever it is. They lubricate the engine of the supply chain. Right. Do Americans in Silicon Valley want a gray market? Is that a very American thing to have - off-label chips being sold, and fakes, and that sort of stuff - for real?

Speaker 2 (46:53)
It's not, right? It's culturally not a value that we choose to have. That's wonderful; that's the way we like to live our life. But in hardware manufacturing, if the line's down, the line's down. It doesn't matter if you're buying real resistors that came from one vendor or another vendor, or you got them secondhand, or the date code is too old or whatever it is - you're up on the line again, and it's a calculated risk. As the person who's doing the manufacturing, you say the risk of me not producing the run at all today is X. The risk of me putting in resistors that are two years too old is Y. One of them is much smaller. I'm going to buy resistors off the gray market, I'm going to test them, and I'm going to put them on the line. You have that option because the gray market exists, whereas in a situation without that, your only option is your line is down. You lose revenue, right? So it's a difficult problem.

Speaker 1 (47:39)
All right, we're going to leave it there. That's it for today.

Speaker 2 (47:46)
We didn't even get to SGX.

Speaker 1 (47:48)
I know. You've got to come back on. All right, so our guest has been Andrew Bunnie Huang, hardware hacker, security researcher, and open source advocate. Bunnie, thanks so much for coming on the show.

Speaker 2 (48:02)
Yeah, sure. Thanks for having me.

Speaker 1 (48:12)
Don't forget to subscribe to Privacy is the New Celebrity wherever you listen to podcasts, and check out mobilecoinradio.com to explore the full archive of podcast episodes. That's also where you'll find our radio show every Wednesday. I'm Sara Drakeley. Our producer is Sam Anderson, and our theme music was composed by David Westbaum. And remember: privacy is a choice we deserve.