Future Rodeo

Anna Claiborne on rapid tech advancements shaping our modern world

July 11, 2023 · Matthew Wallace · Season 1, Episode 1

In this episode, Matt Wallace and Anna Claiborne take us on a journey through major tech advancements and reflect on their implications for the future. 


STAY TUNED:

In our next episode, catch Matt Wallace in conversation with Deloitte's Chief Cloud Strategy Officer and globally recognized thought leader, David Linthicum.




# Future Rodeo: PacketFabric's Anna Claiborne

[00:00:00] 

[00:00:06] **Matt Wallace:** Hey everybody. Welcome to my inaugural Future Rodeo podcast. I am super excited to have Anna Claiborne, the CTO and Chief Product Officer of PacketFabric, joining me today. For those who don't know, PacketFabric is a network provider driven by APIs, and I almost feel like, Anna, you're probably better at introducing it than I am.

[00:00:28] **Matt Wallace:** Although I will say, for everyone, that I felt a certain kinship. I think you changed it slightly, but I feel like your LinkedIn title used to say "automate all the things," and I pictured the meme. I think it now says "automate everything all the time," which is like the slightly more business-professional version of the same sentiment.

[00:00:46] **Matt Wallace:** But tell us a little bit about PacketFabric and what you guys do and how you got there.

[00:00:52] **Anna Claiborne:** Sure. So I think, especially as a customer, you do an excellent job of introducing us. So, [00:01:00] PacketFabric is a fully automated service provider. In fact, we were the first fully automated network service provider. We looked back at the trends over the past 10 years, basically, before we founded the company, and saw that the direction of everything was going towards infrastructure as a service.

[00:01:18] **Anna Claiborne:** And that's the way compute had gone. You went from spinning up servers and taking care of them in racks and doing cabling and figuring out PXE boot for all your installs to, all of a sudden, using an API or a couple of mouse clicks on AWS. And so the concept was: why can't that be done for the network?

[00:01:38] **Anna Claiborne:** And there was really no good reason once we dug into it. A lot of us had been either network operators or in highly adjacent fields, like security, that use a lot of network. It seemed really natural. We dug into it, there was no reason why it couldn't happen, so we did it.
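*Editor's note: to make the "infrastructure as a service for the network" idea concrete, here is a minimal sketch of what ordering a circuit through a REST API might look like. The endpoint, payload fields, and token are hypothetical placeholders for illustration, not PacketFabric's actual API.*

```python
# Hypothetical sketch: provision a point-to-point circuit with one API
# call instead of racking gear and pulling cable. The URL and field
# names below are invented for illustration.
import requests

API_BASE = "https://api.example-nsp.com/v1"  # placeholder, not a real provider URL
TOKEN = "YOUR_API_TOKEN"

resp = requests.post(
    f"{API_BASE}/circuits",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "a_side": {"pop": "LAX1", "port_speed_gbps": 10},
        "z_side": {"pop": "NYC2", "port_speed_gbps": 10},
        "bandwidth_mbps": 1000,
    },
    timeout=30,
)
resp.raise_for_status()  # fail loudly if the provider rejects the request
print("Circuit provisioned:", resp.json())
```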

[00:01:54] **Matt Wallace:** Yeah, that's cool.

[00:01:55] **Anna Claiborne:** Yeah.

[00:01:57] **Matt Wallace:** I had this very brief stint at Level [00:02:00] 3 where I was really excited about APIs at that point. And I think this was still in, this was in 2016. And so people were still talking about metrics like time to first Hello World. And I was excited about developer experience, and there wasn't necessarily, what I'd say, the will or the vision to do it.

[00:02:17] **Matt Wallace:** And so it wasn't much later, I think I was at Faction in maybe 2017, I got the first intro to PacketFabric, I wanna say, in the early days. And I was immediately blown away, cuz it was already a thing that I had wanted, right? It was this completely API-driven thing. And the moment I'm asking that critical question, does the whole portal use the API, I think whoever was driving immediately popped up the developer console, like, yes it is.

[00:02:43] **Matt Wallace:** And watch. So I always loved that, uh, experience. So I mean, you're there, I think there's probably a lot of ways to go about even asking around this, but maybe just, I know a little bit of the backstory here, but tell us a little bit about how you ended up in tech in the first place.

[00:02:59] **Matt Wallace:** Cause I think [00:03:00] it's really cool.

[00:03:01] **Anna Claiborne:** Oh yeah, it was an interesting journey, to be sure. So, in college, my degree's actually in genetics. And I actually switched, because I started off in chemical engineering and then I realized that I enjoyed living things a little bit better than chemicals anyway.

[00:03:24] **Anna Claiborne:** And so that was the switch. But in high school, uh, I started working at a bank. Like most people in our very particular generational slot, I was into computers super early. I had an Apple IIe that I played Lode Runner on, uh, and I tried designing levels, and I, uh, also had a variety of, like, 386 machines, the classic 9,600-baud modem, was on the BBS boards early, all of that.

[00:03:52] **Anna Claiborne:** But it just sort of seemed like play, it always seemed like it was a playground. It wasn't anything [00:04:00] that, you know, a younger self would ever realize would go anywhere. It's like, oh, this is a cool thing.

[00:04:05] **Matt Wallace:** The best origin stories in technology involve video games at some point, right?

[00:04:10] **Anna Claiborne:** Yeah, they really do. So this is a little bit of a fast-forward, but, uh, I was working at a bank, as all 14-year-olds do, taking care of the network and working on the Fedwire machines and things like that. And so, uh, the guys that I worked with at that time were this great bunch, and I got 'em into Doom.

[00:04:33] **Anna Claiborne:** And so I was like, yeah, we can use the LAN and we can all play Doom. So we crashed the corporate LAN playing Doom. Uh...

[00:04:41] **Matt Wallace:** Yeah. I think you know about Exodus, and it was kind of one of my first big gigs, but there were a lot of late nights where people were playing, uh, Warcraft, I think it was Warcraft one, possibly two, Tides of Darkness. I just remember I didn't play the RTS games very much, but when everybody got into Diablo, I was playing that on the [00:05:00] LAN.

[00:05:00] **Matt Wallace:** So yeah, we definitely lived that, of course. And when you're young and pretty much all your friends are your coworkers, which was the case for me anyways, it seemed totally appropriate.

[00:05:10] **Anna Claiborne:** Yeah, it's, uh, being at a bank, all my coworkers were 40-year-old men who'd all worked at IBM and were programmers, and who were actually amazing. They were amazing men. They were amazing mentors. They really...

[00:05:20] **Matt Wallace:** But they weren't great Doom players though, right?

[00:05:22] **Anna Claiborne:** They were not. No. And I think Quake was out shortly after that.

[00:05:27] **Anna Claiborne:** And like, I got super into Quake and Descent, and I remember they would be like, how can you look at that, it makes me nauseous, when I was playing Descent. And so, uh, yes, they were not great video gamers. They appreciated it, but they had a ton of wisdom.

[00:05:42] **Anna Claiborne:** One of them had actually worked on the original C operating system back in the day. Or, sorry, actually on constructing the C language.

[00:05:51] **Anna Claiborne:** So, anyway, I was working at the bank and doing a lot of things, like Windows 3.11 installs, troubleshooting, [00:06:00] that kind of stuff.

[00:06:00] **Anna Claiborne:** And then I started working on the network and taking care of, like, the Novell 4.11 servers and working on some of the Cisco switches, very basic command-line stuff, going into troubleshooting. And so then it eventually evolved into, well, why don't you take care of the telecom stuff too?

[00:06:20] **Anna Claiborne:** So frame relay, ISDN lines, I got kind of involved in everything, cause I was there every day after school working. Uh, and I had this sort of separate interest in science. That's what I enjoyed. So that's what I went into in college, in chemical engineering.

[00:06:36] **Anna Claiborne:** And there was a ton of computer science courses, so I was like, whoa, yeah, of course I wanna get better. At the time I knew, like, BASIC and VB, Access, that is what I had known. And I was like, oh, there's a C course, like, I know a guy that worked on that. I gotta take C.

[00:06:51] **Anna Claiborne:** So I took C, and I ended up doing a minor in computer science, just working through those courses. And what I [00:07:00] functionally ended up doing, before this even existed, the actual first-ever bioinformatics class was my senior year of college, is that I kind of gave myself a bioinformatics degree before one existed.

[00:07:12] **Anna Claiborne:** Just the combination of taking all the CS programming classes as far as I could and, uh, doing the genetics degree. So when I got out, I had these big dreams that I was gonna go work for Genentech and work on shotgun DNA sequencing, because that was the cool new thing in genetics at the time.

[00:07:34] **Anna Claiborne:** And I applied, and I was thinking that I was, uh, going to get, like, one of these super cool research positions to go do this. Cause I was like, I have all these skills, I can do this, I know how to program. And sure, they were asking for a PhD, but I was like, can't be that important.

[00:07:51] **Anna Claiborne:** And I talked to the recruiter, and she just shot me down really quick. She's like, well, if you wanna move to Maryland, you can [00:08:00] get a job as a lab assistant with a bachelor's degree. And I was like, that's not what I wanna do at all. So I ended up going to work for Tower Records, who had a sign up in the quad at Davis that said, we're looking for software developers.

[00:08:18] **Anna Claiborne:** And I was like, that sounds good, I can go do that right now. So I went and worked there. Uh, it was me and a handful of other people that did TowerRecords.com back in the day, which was their highest-performing store, because they counted it as just another store.

[00:08:37] **Anna Claiborne:** So it was the highest-performing store in the company and also the lowest overhead.

[00:08:42] **Matt Wallace:** Yeah, no doubt. This is the dawn of the internet era. It makes me wonder too, what do you think would've happened if you put today's brain in, uh, a just-graduated body back then, but you still wanted the job? How do you think things would've gone [00:09:00] differently at Genentech?

[00:09:01] **Matt Wallace:** Because I often wonder about how life experience and the perspective we pick up affect those kinds of interactions. Like, recruiter shoots you down, you pretty much went, okay, well, I'm not moving to Maryland to be a lab tech. But how do you think it would've gone today?

[00:09:16] **Anna Claiborne:** Today, if my today brain was in there, I would've called Craig Venter. I would've found his phone number. He was, like, the founder of Genentech. I would've called him and I would've said, I wanna come work for your company and this is what I wanna do, and you're gonna hire me. And I think it probably would've worked.

[00:09:30] **Anna Claiborne:** Yeah, yeah.

[00:09:32] **Matt Wallace:** I don't know if you've done this too, but I think I've had a few people that I've hired who weren't necessarily the right person for the job, but they were already knocking down a door to even have the conversation, right? And you just think to yourself, like, okay, normally I'm desperate to find this level of enthusiasm.

[00:09:51] **Anna Claiborne:** Yeah. It's such a rare thing when you find that. It's like, of course, now that we're very sage, we're wise, we [00:10:00] have all this experience.

[00:10:00] **Matt Wallace:** We could pretend anyways, right?

[00:10:02] **Anna Claiborne:** That's what I'm trying to say, I'm just trying to find all the nice words for being old. We're wise to the ways of the world.

[00:10:10] **Anna Claiborne:** It's a completely different approach. And finding that useful enthusiasm, just the sheer not even caring about what you don't know, is so precious. And now you just wanna snag it and bottle it and use it for good. And so I'm sure that, having that perspective now, yeah.

[00:10:29] **Anna Claiborne:** I would've just gone in there like a bull in a china shop and done it completely differently.

[00:10:33] **Matt Wallace:** Yeah. Have you seen that chart where, like, it begins in the timeline at zero, and at the top the person thinks they know everything, but as they're learning, they realize they know less and less until they actually end up in this depressed trough, this funk? But they keep going, and then they start to get towards useful ignorance and a little bit of growth of knowledge.

[00:10:55] **Matt Wallace:** I always feel like the more you know, the more you realize you don't know, and at some point you throw your hands [00:11:00] up and just go, okay, I'm never going to know even a fraction of all these things there are to know, but I'll just devour everything that I can and try to keep it useful as I go along.

[00:11:10] **Anna Claiborne:** Yeah, that's exactly how I feel, there's just so much out there that you can never know about. It's pretty much an endless treasure trove of insight. And, I mean, it's the internet. The internet makes the body of knowledge available to humanity increase exponentially all the time.

[00:11:30] **Anna Claiborne:** You can never keep up with it. So I almost think, as we go on, that trough of the imposter syndrome, like, I'm never gonna know anything, I almost think that gets exponentially worse, because you realize that there's just so much you will never know. So all you can really do is investigate the things that you enjoy and try to contribute at least a little meaningfully.

[00:11:52] **Matt Wallace:** Yeah, I mean, I always like to say we live in interesting times, and I remember going to a USENIX...

[00:11:57] **Anna Claiborne:** Isn't that a Chinese curse?

[00:11:59] **Matt Wallace:** [00:12:00] Uh-huh.

[00:12:00] **Anna Claiborne:** Isn't that the Chinese curse? May you live in interesting times?

[00:12:03] **Matt Wallace:** Oh, is it? I don't know.

[00:12:04] **Anna Claiborne:** I think it is.

[00:12:05] **Matt Wallace:** Uh, it's cursed me all the time then. Yeah, we definitely are living in interesting times. I mean, it's good interesting, I guess. I can skip, like, war, famine, and disaster, basically. But as far as technical upheaval, I'll take that all week, even if it means total disruption.

[00:12:21] **Matt Wallace:** And I remember Baras Ran having this slide at the end of a USENIX conference where he said, if you can spell TCP/IP, it's a good time to be alive. And I was like, okay, I guess I agree with that, right? Because already, I think I'm like 22, 23 or something, and the opportunity just was exploding in front of my eyes.

[00:12:40] **Matt Wallace:** And the great part is, like, at some point you have this belief, like, this is gonna change everything. And now we get to look back later and go, oh, it did. We were actually selling it short, arguably. So let me ask you this though. Here we are. What do you think about, actually, it's kind of a question.

[00:12:57] **Matt Wallace:** Two parts. Let me ask the easy part first. Somebody comes to you, it's [00:13:00] like your neighbor, uh, that you're friendly with, has like an 18-year-old that's just graduating high school, and they're interested in everything, right? They're like, oh, chemistry, biology, physics, computer science. But I also love English literature, and I thought maybe I could be a lawyer.

[00:13:13] **Matt Wallace:** Anyways. And they're like, what do you think would be the coolest thing I could do? What do you think's the best path forward? They're just looking for the world's most generic advice, but they're interested in everything. So it's kind of unbounded.

[00:13:25] **Matt Wallace:** I kind of try to tease out this idea of, if you're starting a career, or beginning a secondary education, today, what do you think the coolest thing you could be doing is? What would you tell someone to study or learn about, or at least think about?

[00:13:39] **Anna Claiborne:** So this is your easy question? I'm now terrified of the hard question, so...

[00:13:45] **Matt Wallace:** There's no right or wrong answer though. So that's...

[00:13:48] **Anna Claiborne:** Well, this is a heavy question. This is the existential "what do you wanna be when you grow up?" question.

[00:13:56] **Matt Wallace:** You're giving it to somebody else, which makes it so much worse than answering it for yourself, right?

[00:13:59] **Anna Claiborne:** [00:14:00] Yes.

[00:14:00] **Matt Wallace:** That's why it's an imaginary person. Although someone might actually listen to this someday and...

[00:14:04] **Anna Claiborne:** Yeah, and take my advice, that...

[00:14:06] **Matt Wallace:** Take this with a grain of salt, all y'all listening.

[00:14:09] **Anna Claiborne:** Yeah. So I would get into, and I'm taking this through the lens of, like, if I could do it over again, because I think that's really the only lens that you can take it through.

[00:14:20] **Anna Claiborne:** You have to, because it is important, you have to listen to your own interests. And if your own interests are all over the place, which I think both of ours are, like, we definitely are interested in a huge array of science and technology, so I would go into something very specifically cross-discipline. Like the intersection, I love, uh, xenobots, actually making sort of little programmable cells that can do things, that idea, the interface of computers and biology. Uh, materials science is, like, super fascinating.

[00:14:53] **Anna Claiborne:** I've had several times in my life when I'm like, I should go back to school for materials science, because when a [00:15:00] new material comes out, it is one of those fundamental changes. It's the first building block, or the foundational building block, for this whole other entire ecosystem on top of it.

[00:15:12] **Anna Claiborne:** When you come out with, uh, a new battery, or even a new thing like Vantablack, the carbon nanotubes that make Vantablack such an absolutely non-reflective surface. When you come up with just a very basic thing like that, with what is built on top of it, you really are revolutionizing everything when you come up with a basic unit like that.

[00:15:38] **Anna Claiborne:** And then the super cross-functional domains, like computers and biology, are fascinating, uh, because that's where a lot of the real advancements will come from. And it's not to say that you couldn't do something super profound if you were a physicist, like you couldn't come up with a unified field theory or something like [00:16:00] that.

[00:16:00] **Anna Claiborne:** I mean, I hope that there are lots of people who wanna be physicists and do something like that. But when you're an expert, or as close as you can be to an expert, in two different domains, and you're working on those simultaneously, that cross-function is super neat, because you get these emergent things out of it that aren't in the observable universe that we're in right now.

[00:16:23] **Anna Claiborne:** You get a whole new thing. So I...

[00:16:26] **Matt Wallace:** Have you had a lot of people that you've worked with, or maybe even hired, right, where they started off in a hard science and then ended up in computers? To me that's always been a little bit of a pattern, right? It's a good sign. I actually remember, literally maybe 20 years ago, somebody kind of denigrating a physicist who was trying to apply to the NOC at Exodus.

[00:16:47] **Matt Wallace:** And I just remember quietly facepalming in my cube. Because I was thinking, like, it is so crazy that you'd be making fun of that person. It is so lucky for us that they're so interested in this field that they want to come over, cuz they're probably gonna bring [00:17:00] some technical skills along with just, like, this diversity of knowledge.

[00:17:04] **Anna Claiborne:** Yeah. And it's interesting that you say that, because I kind of wonder what's gonna happen to the future of computer science now. Because computer science for computer science's sake is, I don't want to say that it's played out or anything like that, cause it's not, I mean, there's still plenty of advances to be made in the field of computer science. But it seems, at least to me, that all the interesting ways in which computer technology is impacting other things is the more interesting thing.

[00:17:36] **Anna Claiborne:** And so again, that's why I go back to the sort of cross-functional domain. The ultimate thing that we're seeing outta computers right now is artificial intelligence. It's like, how can artificial intelligence, the pinnacle that we've reached so far for computers and what they can achieve, how is that going to alter the trajectory of neuroscience?

[00:17:55] **Anna Claiborne:** How is that gonna alter, you know, the trajectory of [00:18:00] mining? Like, maybe AI will be able to better predict deposits of precious metals and lead us to new sources of rare earth materials that we haven't found before. So it's more like, how can computers now augment everything else in our lives?

[00:18:15] **Matt Wallace:** Yeah. Boy, I really do wonder too if there's, like, an analog in some of the fields. Because I think about computer science, and I think about folks who, you know, back in the day, as they were working towards a dissertation, it might have been in something like graph theory or provable algorithms, right?

[00:18:35] **Matt Wallace:** There was this really strong link, I think, especially in earlier computer science, between lambda calculus and then the actual programming that you did behind the keyboard. I have a friend who wrote his dissertation on basically the security of social networks, right? And it's so derivative, right?

[00:18:54] **Matt Wallace:** Because it's a computer science degree, and yet it is [00:19:00] certainly related, but it seems more peripheral. And I wonder if we'll see this evolution similar to, like, your own journey at school, right? Where you start off in a hard science, but it wasn't like theoretical physics.

[00:19:13] **Matt Wallace:** It was something that was going to be practical. And I wonder if we'll see more acknowledgement like that. There is still a thread of computer science that is completely theoretical, that, like, harkens back to Donald Knuth and MIX as a programming language, and really deals with that low-level construct.

[00:19:32] **Matt Wallace:** Maybe it's tied to hardware, but then there'll be people who are, I mean, this is already happening, I guess, with things like ML. But wouldn't you say that you see this sort of thing, where research then starts to happen at many different layers? Like, computer science maybe is just not a descriptive enough term.

[00:19:46] **Anna Claiborne:** Yeah. I feel like there are so many sub-disciplines now within computers. It's such a broad thing. It's not a very descriptive term anymore, because it's like saying "science" now. It's like, okay, great.

[00:19:58] **Matt Wallace:** Like, "I [00:20:00] science."

[00:20:01] **Anna Claiborne:** Yeah. I, like...

[00:20:02] **Matt Wallace:** That's a great one when somebody goes, sure you did. Right?

[00:20:05] **Anna Claiborne:** I science. I science a lot. I...

[00:20:08] **Matt Wallace:** Like in Good Will Hunting, where he's like, oh, I took history. And he's like, oh, history, huh? So it was a survey course then? Sure. Just like, oh, this is embarrassing. I feel bad for Ben Affleck's character.

[00:20:20] **Matt Wallace:** He's walking into one. So...

[00:20:22] **Anna Claiborne:** Like them apples.

[00:20:24] **Matt Wallace:** Yeah, exactly. That's a great movie. I'm glad you brought up the AI thing though. It takes...

[00:20:29] **Anna Claiborne:** I know,

[00:20:29] **Matt Wallace:** It's hard to hold back, right? But really...

[00:20:32] **Anna Claiborne:** It's the...

[00:20:32] **Matt Wallace:** I wanna ask you about the biology now. It's super hot. Literally, somebody on my own team sent me this graph of, like, the fastest known things in the universe.

[00:20:41] **Matt Wallace:** And it's like cheetahs, et cetera, the speed of light, and then the one that was faster than the speed of light was people becoming experts in AI. And I was like, I resemble that remark. So...

[00:20:52] **Anna Claiborne:** I'm not an expert in AI. I mean, I have thoughts on it. I find it super interesting. But the extent to which I'm an [00:21:00] expert in AI is, I'm an expert at asking ChatGPT questions. I'm really good at that.

[00:21:04] **Matt Wallace:** Yeah, no, I'm in the same boat. Although it has me studying linear algebra, I wanna say, again, cuz I did some, and had some applied discrete math in school and things of that nature. And actually, the thing I was surprised about, at least so far, is that, especially when you're writing everything as code in Python anyways, the math's a lot easier than I thought it was going to be.

[00:21:26] **Matt Wallace:** And I also was really surprised at how easy it was to see the link between very simple linear regression and polynomial regression, stepping into LLMs, when you start realizing that they're just vectorizing all of these tokens in the prompt into these n-dimensional spaces to kind of form the neural net and traverse it.

[00:21:47] **Matt Wallace:** So, really interesting to see how the idea of computing a loss function could apply basically from the most basic thing up to the really complex things. But yeah, I'm definitely a really [00:22:00] enthusiastic novice, and I'm almost embarrassed cuz I keep putting out these little mini podcasts in enthusiasm.
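*Editor's note: a toy illustration of the point Matt is making, that the same "compute a loss, nudge parameters downhill" loop underlies both a one-variable linear regression and, at vastly larger scale, LLM training. The data here is synthetic.*

```python
import numpy as np

# Synthetic data from a known line (w=3, b=0.5) plus noise.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = 3.0 * x + 0.5 + rng.normal(0, 0.1, 200)

# Gradient descent on mean squared error, the simplest loss function.
w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    y_hat = w * x + b
    grad_w = np.mean(2 * (y_hat - y) * x)  # dLoss/dw
    grad_b = np.mean(2 * (y_hat - y))      # dLoss/db
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f}")  # should land near 3.00 and 0.50
```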

[00:22:05] **Matt Wallace:** Although I'm mindful of people like Richard Feynman, who basically said, if you wanna know if you really understand something, go try to teach it, right? And I find myself going, I wanna say this thing, but is that true? Quite a lot. But what I want to ask you really is about the...

[00:22:21] **Matt Wallace:** applicability to biology. And I'm trying to remember the last time we had a conversation, cuz we certainly got into the biology of it. I know you hadn't read the State of AI Report, and we agreed we'd talk about that, but I'm wondering if, last time we talked, you had seen the thing about OpenFold and DeepFold, and the whole proteome being sort of predicted, basically, which I think is up to about 80% accuracy now.

[00:22:45] **Anna Claiborne:** Ooh, no, I had not seen that. I feel like I need to Google that right now.

[00:22:50] **Matt Wallace:** I know, right? I figured this would be right up your alley. So I mean, the gist of it, of course, is DeepFold, and then OpenFold, which was an open-source creation in the vein of [00:23:00] DeepFold, was basically looking at the 3D protein structures of amino acid chains. And then the idea is you can take any amino acid chain and it would try to predict the shape it would have, and there was some...

[00:23:10] **Anna Claiborne:** Okay. So it did predictive protein folding. It basically got trained on all the known proteins, and then it said, all right, given this chain... Okay, that's good. I think I can kind of figure out what it was doing now. I just Googled it. But oh, that's super...

[00:23:24] **Matt Wallace:** Part of this is, I remember literally putting nodes into the Folding@home project, like, 20 years ago...

[00:23:29] **Anna Claiborne:** Mm-hmm. I know, I did too. I...

[00:23:32] **Matt Wallace:** Apparently they have now run the process to predict every conceivably feasible amino acid chain, and something like 80% of them match the predicted protein.

[00:23:43] **Matt Wallace:** So the amount of effort going into, like, cryo-EM work, to do things like use electron microscopes to actually build the proteins and then see the structure they have, is shrinking massively, because you end up getting it right the [00:24:00] first time, like, 80% of the time. And so the work has shrunk.

[00:24:03] **Matt Wallace:** And I don't know if it reduces the remaining 20%, but it does make me wonder, like, what does that mean for biology and research when you get so much horsepower out of an ML model?

[00:24:12] **Anna Claiborne:** That's a really good question. I mean, it could potentially allow us to build novel proteins for completely new uses, because if you can put together a string of amino acids and predict how it will fold, then you can predict how it's gonna interact with other things as well.

[00:24:32] **Anna Claiborne:** So it could allow, like, it's scary, because it immediately makes me think that we're gonna accidentally make prions that are gonna destroy all life as we know it. So...

[00:24:45] **Matt Wallace:** We just, we just...

[00:24:46] **Anna Claiborne:** I'm like, oh

[00:24:47] **Matt Wallace:** Zombie apocalypse. Yeah. Actually, it's funny you should say that. It's always viruses in the movies, right? But actually, it being a prion disease makes arguably as much or more sense, right?

[00:24:57] **Anna Claiborne:** On the scale of [00:25:00] what's scary, you have bacteria somewhere way down here, and then viruses, or maybe there's something below bacteria. But then way, way far out there you have prions. Cause there's nothing that kills those things except for fire.

[00:25:15] **Anna Claiborne:** That's it. I mean, they can freeze in space, it'd be fine, doesn't matter. In a living being, you cannot destroy them, because they are the essential building block of life. They are a protein. Anything that is going to degrade them is going to degrade everything else in your body.

[00:25:38] **Anna Claiborne:** It would have to be something that was highly targeted to that exact protein. And that exact protein, the only thing that it does is reconfigure other proteins just like itself. So tell me how you're gonna get a protein to attack a protein whose core function is to change other proteins.

[00:25:57] **Matt Wallace:** I do wonder if you can, I [00:26:00] mean, this seems like the sort of fighting fire with fire that ends up burning the world down. But I wonder if you can build, sort of, like, a mirror protein, right? Something that kind of caps it off, basically, where it wants to bond to it and then gets stuck, cuz it tries to go and reconfigure the other protein and it ends up just wedging them together.

[00:26:17] **Matt Wallace:** And they both, like, literally, like, uh, an acid and a base reacting, right? And then you...

[00:26:22] **Anna Claiborne:** Yeah, and neutralizing, yeah, a neutralizing function. Yeah, that'd be really interesting. I mean, to date though, I don't think anyone has come even close to creating anything like that. And it is terrifying when you think of us creating new proteins and accidentally stumbling onto the ultra, the omega prion. Yeah.

[00:26:40] **Anna Claiborne:** That

[00:26:40] **Matt Wallace:** You had to say omega, didn't you?

[00:26:42] **Anna Claiborne:** I did.

[00:26:43] **Matt Wallace:** Extra, like, end-of-the-world.

[00:26:45] **Anna Claiborne:** Well, I was already thinking, what would be the scariest name for that prion? The omega prion.

[00:26:51] **Matt Wallace:** You did. That was good, that was off the cuff. Like, literally, if we write that on a napkin, someone might option that for the movie right [00:27:00] now: The Omega Prion. Oh, but people are like, what's a prion?

[00:27:03] **Anna Claiborne:** I, uh, yeah, I know. Cause everybody knows mad cow and...

[00:27:07] **Matt Wallace:** yeah. Yeah.

[00:27:08] **Anna Claiborne:** and all. Yeah, they know the emergent...

[00:27:11] **Matt Wallace:** They know mad cow. Anyways. Yeah. It's interesting though too, and I guess I've had the privilege to interact with a few companies doing pharmaceutical research.

[00:27:22] **Matt Wallace:** So I've actually gotten to talk to some people and see some teams discuss what they've been doing around the electron mic, micro...

[00:27:31] **Anna Claiborne:** Microscopy.

[00:27:34] **Matt Wallace:** That word, that thing you do with an electron microscope. So, talking about that, and it's so interesting to me cuz it's so heavyweight.

[00:27:40] **Matt Wallace:** I actually had friends that rented time at a university on an electron microscope, cause they were trying to crack smart cards back in the day. And I remember they spent over a hundred thousand dollars of their own money on this R&D, cuz they were looking really deeply at how the cryptography was implemented.

[00:27:57] **Matt Wallace:** And they did crack them all. It didn't open the door [00:28:00] for their product the way that they wanted. But it makes me wonder. Basically, the gist of it was that a lot of the future here would look like people going, okay, we need something that does X, knowing exactly what that protein structure would be, just the way you were talking about, and then just going, okay, let's search the database: what amino acids do we have to put together to get this protein?

[00:28:23] **Matt Wallace:** And then, like, that level of, you used to hear the term "designer drug," and I don't think that ever came close to this actual version of designer drugs, where you're gonna have things that are potentially tuned to your DNA. I mean, that actually was the other thing. Have you heard about this pattern of pharmaceuticals where it's, say, 30 or 40% effective in a population?

[00:28:47] **Matt Wallace:** And it turns out it's not because it's a poorly designed compound, it's the DNA differences in the cohorts. And so it's like, oh, with this set of genes it's a hundred percent effective, or 95, and then for everyone else it's close to [00:29:00] zero. And so you want to think about prescribing this DNA test, and if they've got these genes, then go ahead and prescribe it.

[00:29:08] **Matt Wallace:** Cause it's almost certainly gonna work. That's mind-blowing to think about, how that could revolutionize how quickly we do drugs, and how effective they are, and how well they're prescribed, all those things.

[00:29:19] **Anna Claiborne:** It's funny to me that we're finally coming to this realization that genetics affects, uh, for example, how effective a drug is in your system. Because it's been common knowledge for quite a while that, if you look at a child, you say, oh, well, what's that child gonna be like when they get older?

[00:29:41] **Anna Claiborne:** Oh, well, it's 50% genetics, 50% environment, right? And so if this is sort of that base level of knowledge, it's been around for a while, that genetics has at least a 50% impact, it really is amazing to me that it's taken us so long to get to the idea that [00:30:00] your genetics, which is just the encoding for the basic functions in your body, could affect how you react to having a certain type of cancer, how you react to a certain type of chemo, how you react to a certain type of drug.

[00:30:14] **Anna Claiborne:** It's kind of obvious when you think about it. And don't get me wrong, I think it's completely awesome that it's finally happening. But in general, stepping back to the easy question that you asked me a while ago, about what I would advise somebody who's 18 to go do, there's so much potential in the medical space right now. Just in general, uh, medicine.

[00:30:46] **Anna Claiborne:** Everything about medicine is just ancient. Like, I don't know if you ever look into how anything is done in medicine, but when I was pregnant with my first kid, I got really interested, cause they have all this advice for you. Like, [00:31:00] oh, well, you need to get antibiotics in the children's eyes as soon as they're born, and they need these shots.

[00:31:06] **Anna Claiborne:** And so, me being sciencey and curious, I'm like, why is this, what is the medical basis for this? So you look up some of the stuff, and it's absolutely mind-blowingly insane. So the reason for putting antibiotics on children's eyes is because, in like the early 1900s, there was a handful of prostitutes that had children that also had, uh, chlamydia.

[00:31:30] **Anna Claiborne:** And when chlamydia gets in your eyes, it causes blindness. So because of this, it is now 100% standard practice to put on antibiotics, which, by the way, now there are far more children that are going to be allergic to that antibiotic than are going to be born to a mother with chlamydia, because of advancements in prenatal care and STD screening.

[00:31:53] **Anna Claiborne:** And so it's actually a greater risk to do that, [00:32:00] it's a greater risk to put an antibiotic on somebody that you have no idea how they're gonna react to than to assume that this woman is gonna have chlamydia when they have their child. And so much of medicine is like this. It is built on, well, it started...

[00:32:17] **Anna Claiborne:** The first medical school is super old, it's like, I don't know, Roman times, something like that. And so you have this body of knowledge that's been built on for so long, and it's almost ritualistic in the way it goes about its practices. Having an engineering brain and going into medicine has gotta be maddening, because...

[00:32:37] **Matt Wallace:** How many humors were there in the original, like, model from then? It's like black bile, and I forget what all the names of them were.

[00:32:46] **Anna Claiborne:** Yes.

[00:32:47] **Matt Wallace:** It was like this whole taxonomy of things that just didn't exist at all, right? They were all proxies for some poorly conceived notion of whatever was real.

[00:32:56] **Matt Wallace:** And no...

[00:32:58] **Anna Claiborne:** It all goes back to, [00:33:00] well, uh, there was a bunch of people that had the Black Plague, we bled like half of 'em, most of those people didn't die, so bleeding people is absolutely the answer when they get, uh, Yersinia pestis, which is the pathogen for the Black Plague. And so it's just based on a lot of what we're now rapidly seeing is outdated knowledge.

[00:33:23] **Anna Claiborne:** And there's a reason for that. Because if you're using something that you know is only gonna kill this many people, and mostly it's gonna be okay, you stick with it. You can't go experimenting on people.

[00:33:36] **Matt Wallace:** First do no harm.

[00:33:37] **Anna Claiborne:** Yeah. First do no harm, right? So if you know that something does no harm, you stick with that before doing anything else.

[00:33:43] **Anna Claiborne:** And there is a certain logic to it, but the amount of unexplored potential there is just off the charts in terms of what we can do better in medicine. And that goes for every single aspect of medicine, down to [00:34:00] actually finding the underlying cause for conditions.

[00:34:04] **Anna Claiborne:** This is one of the things that just drives me absolutely crazy. And that's being an engineer: you like doing root cause analysis, right? Why did this happen? So we can make sure that it doesn't happen again. All medicine does is treat symptoms. Why do you have a headache? Doesn't matter. Take an aspirin.

[00:34:18] **Matt Wallace:** Yeah. This is actually another thing, by the way, that got me surprised and interested, that I didn't really realize people were doing with ML. Maybe this is the least sexy version, right? It's not fitting to formulas necessarily, like fitting a line and trying to make predictions, but just something that can do sort of automatic cohort analysis, where you can actually feed a bunch of data in, and without any sort of a priori theory about what's going on, you can just get it to

[00:34:46] **Matt Wallace:** tell you what the links in these populations and all these attributes are. And, right, the more features you plug in, the more likely it is. I feel like people are, and I mean I'm including myself in this to a big extent, right, this isn't a judgmental comment, but it's [00:35:00] like people are geared towards storytelling instead of statistics.

[00:35:04] **Matt Wallace:** My wife always likes to point out to me, cuz she studies neuroscience, that when you tell a story that's personalized, and this is why you see politicians doing it all the time, people are much more likely to remember something and, in fact, believe it when it comes with that anecdote.

[00:35:21] **Matt Wallace:** So, you know, Mrs. Smith in Wichita, when she shook my hand and she told me about her son Johnny who had this thing, the reason there's so much of that storytelling is because it helps get it into the part of our brain where it's remembered and believed. Uh, but then you think about what we do statistically, and the fact that we're so terrible as a human race about distinguishing between cause and correlation.

[00:35:47] **Matt Wallace:** And it's one of the things I have hope for with AI and ML, right, being able to automate the part of the process where you're like, you just let me feed you data, and you dispassionately tell me what you see statistically, without any [00:36:00] sort of theory. And I feel like before, even science was tainted to some extent, because we had to have a hypothesis to test, right?

[00:36:08] **Matt Wallace:** Like, you had to tease something out of the data that didn't make sense, and come up with a hypothesis about it, and then go test that. But what happens when you don't need a hypothesis, all you need is the data, and you have a system that will go and discover, will test all the hypotheses for you, basically, and move toward it?
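*Editor's note: a minimal sketch of the "feed it data, no hypothesis" idea: rank every pairwise correlation in a synthetic patient table and let the strongest associations surface on their own. The column names and effect sizes are invented for illustration; real cohort analysis would also need to guard against spurious correlations.*

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 1000
gene_variant = rng.integers(0, 2, n)  # 0/1 genotype, invented feature
age = rng.normal(50, 12, n)
# Drug response driven mostly by genotype, plus noise.
drug_response = 0.8 * gene_variant + 0.01 * age + rng.normal(0, 0.3, n)

df = pd.DataFrame({"gene_variant": gene_variant,
                   "age": age,
                   "drug_response": drug_response})

# Upper triangle of the absolute correlation matrix, sorted descending:
corr = df.corr().abs()
mask = np.triu(np.ones(corr.shape, dtype=bool), k=1)
pairs = corr.where(mask).stack().sort_values(ascending=False)
print(pairs.head())  # gene_variant vs. drug_response should top the list
```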

[00:36:25] **Anna Claiborne:** Well, isn't that the greatest saying, the best thing about numbers is they can tell any story you want.

[00:36:32] **Matt Wallace:** Right? Yeah. Lies, damned lies, and statistics, right?

[00:36:35] **Anna Claiborne:** Yeah, yeah, lies and statistics. So now I'm gonna tell you a story about, uh, the...

[00:36:43] **Matt Wallace:** You'll remember it and believe it, I'm sure, then.

[00:36:44] **Anna Claiborne:** about, uh, the potential, cuz we were talking about what it is that inventing novel proteins could possibly do.

[00:36:53] **Anna Claiborne:** So I recently had a really good friend whose child was diagnosed with this [00:37:00] super obscure, uh, syndrome, Kleefstra syndrome. And it's a heterozygous deletion, uh, on chromosome 9q34.3, I think. Yeah. And so it surfaces, and it looks a lot like trisomy 21, Down syndrome, in terms of effects, and the effects can be pretty wide-ranging too.

[00:37:21] **Anna Claiborne:** You can end up with a relatively functional individual who can hold down a job, there's still some amount of mental impairment, all the way to somebody who's pretty severely handicapped and disabled. And this segment of chromosome nine that this is encoded on, uh, it encodes for a histone protein.

[00:37:45] **Anna Claiborne:** So, a histone is something that other proteins wrap around. And I've been reading on this a lot lately, because obviously this is a good friend and I find this very upsetting too. And since it's rare, it doesn't have a lot of studies [00:38:00] on it, but they did, uh, a knockout study in mice, which is where you knock out the gene for this.

[00:38:05] **Anna Claiborne:** And just anecdotally, in the human version of this, deletions tend to be worse, nonsense mutations seem to be less bad, in the spectrum of actual presentation. So in the knockout studies in mice, they were able to, uh, basically give them supplements of this protein, which is highly active in the brain.

[00:38:27] **Anna Claiborne:** I don't know how they gave it to them, it didn't say if they injected it directly into their brains or what. But they were able to give them supplements of this particular histone, GLP, I think it is. And they showed dramatic and almost instant improvement. And it's like, that's pretty incredible.

[00:38:44] **Anna Claiborne:** Imagine if we had the ability to do that on a bigger scale, just even the experimentation in mice to figure out what it is. There's a lot of problems, both genetic and otherwise, out [00:39:00] there where we could make a lot of progress. Or even if we could model it, if we could have AI model the potential of what happens in a complex system when we're changing things.

[00:39:11] **Anna Claiborne:** Because that's part of the problem. That's the problem with ethics and animal experimentation, it's the problem with experimenting with anything live: you unfortunately have to harm the system in order to test things. And if we could figure out a way to not do that, that's a huge leap forward.

[00:39:28] **Matt Wallace:** Yeah. I mean, one of the things that I end up wondering about is, is this practical now too? I mean, the knockout study, do they actually carry that out with, like, CRISPR or some other technique? Or, oh, in this case they just applied it manually? I mean, is that the idea though? That, I guess, in a case where there's just a lack of a protein, because the production is somehow genetically broken, then you can create the protein and just administer it somehow.

[00:39:56] **Matt Wallace:** Right, whether it's respiratory or injected or [00:40:00] consumed. But then I guess we're also entering this era where, in theory, we can go and edit. I mean, you probably know more about this. Is the CRISPR technique applicable to doing a therapy on someone where you need to edit the genome, and it's something that doesn't reproduce a lot, right?

[00:40:22] **Matt Wallace:** I'm not even sure what would fit, right? I mean, thinking about things where, you know, yes, your liver, say, grows back eventually, but it's not like you're constantly regrowing parts of your liver. If you wanted to, say, use CRISPR to edit a gene in your liver because you were producing a chemical that was gonna give you cirrhosis over time, is that something you can do?

[00:40:42] **Matt Wallace:** And eventually you'll have a liver aligned with the targeted DNA? Or is it limited to things that are being replenished, sort of continuous growth? Or how does that end up working?

[00:40:55] **Anna Claiborne:** So, well, one of the fundamental [00:41:00] limitations of, or not limitations, but just unintended consequences of CRISPR, is that you have to specify, uh, a target site for cutting or whatever alteration you wanna make, which is a sequence of DNA, and...

[00:41:15] **Matt Wallace:** Yeah, I was gonna say, just, by target site you mean in the genome?

[00:41:19] **Anna Claiborne:** Yes, sorry, a target site in the genome, which is a series of A, T, C, G in some order. And so you have to provide that site that CRISPR is going to cut at and make alterations at. And that's one of the big problems, because you get...

[00:41:39] **Anna Claiborne:** That target site, so that particular combination of base pairs, let's just say it's ATCG, that may appear at the site where you actually want action. So it's like, okay, the liver's producing a harmful protein, we want to cut out this gene, [00:42:00] and so the body stops producing this protein.

[00:42:03] **Anna Claiborne:** Well, that ATCG sequence might not just be there at that harmful protein. It might be in a lot of other places. And so you would get these unintended consequences of having other genes either rendered active or inactive, whatever the case is, throughout the genome. And that's one of the big hurdles that CRISPR still has to clear for being really effective in humans.
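*Editor's note: a toy version of the off-target problem Anna describes, scanning a random "genome" for every exact occurrence of a target sequence. Real guide sequences are around 20 bases, and real tools also score near-matches; this only shows why a short motif recurs by chance.*

```python
import random
import re

def find_target_sites(genome: str, target: str) -> list[int]:
    """Start positions of every (possibly overlapping) exact match."""
    return [m.start() for m in re.finditer(f"(?={re.escape(target)})", genome)]

random.seed(0)
genome = "".join(random.choice("ATCG") for _ in range(1_000_000))
target = "ATCGATCG"  # 8 bases: expect roughly 1e6 / 4**8, about 15 chance hits

sites = find_target_sites(genome, target)
print(f"{len(sites)} sites match {target!r}; the intended edit site is just one of them")
```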

[00:42:28] **Matt Wallace:** How long can the target site be?

[00:42:30] **Anna Claiborne:** Uh, that's a good question.

[00:42:31] **Anna Claiborne:** I don't know.

[00:42:33] **Matt Wallace:** It's such a weird thing. I did a little bit of, I played amateur life scientist on a project where we were doing this multi-cloud supercomputer, and I was running this thing they call the germline pipeline, right? Where you take the raw data that's been read from, like, an Illumina DNA sequencing device, and it's called a FASTQ file.

[00:42:52] **Matt Wallace:** Right? And the way it was described to me, in the abstract, basically, is it gets a ton of reads out [00:43:00] of your DNA sample, but it's not like your entire sequence reads in order, right? It's more like, the way I've described it to people, and nobody's contradicted me yet, is like, imagine a book, but instead of getting to read the book in order, you have, like, every

[00:43:13] **Anna Claiborne:** You get it chopped up.

[00:43:15] **Matt Wallace:** sentence.

[00:43:16] **Matt Wallace:** And tons of overlapping, and they're all, like, middle-of-the-sentence cuts, just a million weird things like that. And then, of course, the reason you end up using a supercomputer is cuz they're like, now put it back together in order. And you're like, are you kidding? And of course that's what it does.

[00:43:31] **Matt Wallace:** It takes all these segments and aligns them. And there's even this company, Parabricks, that NVIDIA bought, that took the Broad Institute's GATK toolkit and ported it to GPU CUDA, so we could do it like 30, 50 times faster than a CPU could. So now you can do that whole process, that whole sequencing of the FASTQ into a BAM file, BAM being the binary alignment map file, where it's in order, or you can print it in order.

[00:43:59] **Matt Wallace:** But you can do that [00:44:00] in sort of record time, in like 40 minutes, basically. Which, I mean, actually, one of the things, have you ever seen the chart of, like, the cost of genomic sequencing over time and how it's plummeted? Cuz it's ridiculous.

[00:44:12] **Anna Claiborne:** I have, it's like, it almost goes inverse to Moore's Law. They kind of go like that. There's these really cool, they're called nanopore sequencers, they basically plug into an iPhone now, and you can load, like, a little DNA sample into 'em. I can't remember the read coverage.

[00:44:27] **Anna Claiborne:** It's not real great, it's like 3x read coverage or something like that. But yeah, they're these tiny little things that plug into a phone. I was like, that's amazing. It's amazing.
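*Editor's note: a toy version of the "chopped-up book" reassembly Matt describes above, greedily merging short overlapping reads back into one sequence. Real pipelines like GATK/Parabricks align billions of reads against a reference genome rather than assembling from scratch; this only illustrates the overlap idea, with invented reads.*

```python
def overlap(a: str, b: str, min_len: int = 3) -> int:
    """Length of the longest suffix of `a` that is a prefix of `b`."""
    for k in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def greedy_assemble(reads: list[str]) -> str:
    """Repeatedly merge the pair of reads with the largest overlap."""
    reads = reads[:]
    while len(reads) > 1:
        best_k, best_i, best_j = 0, 0, 1
        for i, a in enumerate(reads):
            for j, b in enumerate(reads):
                if i != j and (k := overlap(a, b)) > best_k:
                    best_k, best_i, best_j = k, i, j
        if best_k == 0:
            break  # no overlaps left; concatenate whatever remains
        merged = reads[best_i] + reads[best_j][best_k:]
        reads = [r for idx, r in enumerate(reads) if idx not in (best_i, best_j)]
        reads.append(merged)
    return "".join(reads)

reads = ["GATTACAT", "ACATTACA", "TACAGGGA"]  # overlapping fragments
print(greedy_assemble(reads))  # -> "GATTACATTACAGGGA"
```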

[00:44:36] **Matt Wallace:** I know, we're, like, on our way to living in Gattaca basically anytime now. There's this whole idea of, like, kissing your date, and she runs off, or he runs off, to get a swab done, basically, tell me if this is a keeper.

[00:44:49] **Anna Claiborne:** Here's the thing that makes no sense about Gattaca. I don't remember the movie that well, but if they had all that advanced technology, so someone comes out with bad eyesight, you just fix it. [00:45:00] Like, I mean, that just...

[00:45:01] **Matt Wallace:** Oh, you

[00:45:02] **Anna Claiborne:** It just doesn't make a lot of sense to me.

[00:45:03] **Matt Wallace:** It doesn't in the context of now, but there was no CRISPR when Gattaca came out, and there was no fixing. And of course it's not even practical today, but I think we have line of sight, right? Where we almost take it for granted that in 10 or 20 years we'll just edit DNA if we need to. But at the time, the chance they got was: get six embryos going, and then you pick the one that is closest to what you want, right?

[00:45:27] **Matt Wallace:** But then at the beginning of the movie, basically, the parents are sort of naturalists, right? So for their first kid, they're like, no, we're gonna do it the old-fashioned way. We're not gonna get a bunch of embryos. We're just going to snuggle, and next thing we'll have a baby, and we get what we get, right?

[00:45:43] **Matt Wallace:** And it's not like he's bad, per se, but if he's got a little bit of asthma, suddenly, it's not like there's a whole bunch of other people that have that. He's now, like, one in a million, because everybody else is editing out all of the bad stuff, right? But [00:46:00] I love the tagline, right?

[00:46:01] **Matt Wallace:** There's no gene for the human spirit. Which I don't believe, right? I'm sure there's a question of, like, uh, what is the gene, and what circumstances cause it to express, and things of this nature. But yeah, still, it's definitely different in the sense that maybe we make changes.

[00:46:17] **Matt Wallace:** I'm always telling everyone I want that short-sleeper gene, or the set of, cause there's more than one, but I want all the short-sleeper genes. Please CRISPR that into me as soon as...

[00:46:25] **Anna Claiborne:** Yeah, can I, I'd only like to need less than six hours of sleep every night. I have the long-sleeper genes.

[00:46:31] **Matt Wallace:** Me too. Me...

[00:46:32] **Anna Claiborne:** Yeah. I need like eight to 10 hours of sleep. It's ridiculous.

[00:46:37] **Matt Wallace:** I have a friend who's got, I guess, probably all the short-sleeper genes, and he literally sleeps like two to five every day. Perfectly happy all the time. And I'm so jealous. He was working in product marketing and product management, and he'd go to work and go, I'm not technical.

[00:46:51] **Matt Wallace:** And then he'd be at home writing functional code, like high-performance AMQP buses, in the middle of the night, basically. And I was like, how is this so unfair? Like, I'd just love

[00:47:02] **Anna Claiborne:** I know. Imagine,

[00:47:03] **Matt Wallace:** that extra time.

[00:47:05] **Anna Claiborne:** imagine if we didn't have to sleep at all. Imagine like the things you could get done. That to me is super fascinating. I mean, aside from the fact that you will definitely right now unfortunately go insane and die I that, that is a downs, that is a current

[00:47:19] **Matt Wallace:** It's a, it's a bit of a downside. Yeah.

[00:47:22] **Anna Claiborne:** But

[00:47:23] **Matt Wallace:** do. I, or maybe it's better though, to just go ahead and sleep and then just not have to die when you're, if your time is not as limited, then maybe the sleep. I almost think I, I offended by my own mortality and maybe that's why I don't wanna sleep.

[00:47:37] **Matt Wallace:** Cuz it's like, Hey, there's only a limited time. Although to your point, I guess if you're not sleeping now, you're probably doing more harm than good even on that score. So.

[00:47:46] **Anna Claiborne:** Yeah. But if you didn't have to, that's the point. That is the limited, that is exactly it. It's limited time and if you are sleeping well, it feels like it's making good use of the time for us. High sleep needs people. There are much more interesting things [00:48:00] we could be doing if we didn't have to do that.

[00:48:02] **Matt Wallace:** Yeah, for sure. I mean, I'm kind of changing topics, as this has been bubbling up in my mind for a while now. I've been thinking a lot about LLMs and things like GPT, and their ability to help you, especially if you have domain knowledge, get from concept to code and execution a lot faster.

[00:48:22] **Matt Wallace:** What do you think about the future of APIs and automation? Do you see any clear impact coming from all of this ML and the GPT moment and these things? And I'm actually thinking about this one startup that I saw at this little sub-conference, FutureNet, where Martin Casado was hosting, and it was a lot of little network startups, and one of them had a sort of declarative network software where you would actually describe your intent in pure English.

[00:48:51] **Matt Wallace:** So kind of like a Tail-f, except a natural-language-processed Tail-f, right? Where you just go, oh, I want to have a global VLAN [00:49:00] on my SONET ring that is redundant, or however you describe it, right? And it would come back and either go, okay, here's what you would do to your switching layer and your routing layer to implement that.

[00:49:11] **Matt Wallace:** Or: you can't, cuz your layer one is wrong, but go do this, this, and this, and then we'll get there. What do you think about the future of automation in that regard?

[00:49:19] **Anna Claiborne:** That's a good, well, I can tell you on my, on my list of things to do is I wanna go ask chat g p t to write to write some code to link together cuz we just made this feature at work. I know this is gonna, I'm, I'm bringing it back to the real world here for a second, which was less fun. But we just did this, this feature at work, uh, where we do multi-cloud connectivity.

[00:49:43] **Anna Claiborne:** So we have a cloud router, which is a VF and it, we will connect to AW S via it's, uh, Dr. Direct Connect network product and then to all the other clouds, like G ccp, Azure, via express route, all these things. And so, we made, uh, a real simple integration where we can configure the cloud side of the [00:50:00] network too.

[00:50:00] **Anna Claiborne:** So enter all the B G P settings, the user just enters, gives the IM user, and uh, an API key and we configure the cloud side, all the BGP sessions or, uh, all the BGP information. So it just makes it a lot easier from a, a, a usage standpoint. You're not toggling between

[00:50:16] **Matt Wallace:** Did, do you give them all the options for that? Like on the Amazon side, do they get to pick V G W attachment versus direct connect gateway attachment or transit gateway attachment and then,

[00:50:26] **Anna Claiborne:** Yeah. You get to pick your gateway attachment point. Yeah. And then like we, we'll, we populate that on our side with all of the available gateways. So the user just picks that.

[00:50:36] **Matt Wallace:** yeah. Okay. And if they pick a A D X G W then do you I, I guess when you're picking prefixes, the prefix filter is actually when you attach the V G W to the D dx. G w. So you assume if they pick the direct connect gateway as the attachment point, you assume the virtual gateway is just already attached or they'll do it on their own, or do you kind of two-phase that

[00:50:59] **Anna Claiborne:** [00:51:00] We assu, let's see, what do we assume? That's a good question.
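
To make the gateway-picking step concrete, here is a minimal sketch of what the cloud-side lookup and attachment could look like with AWS's boto3 library, given the IAM credentials Anna mentions. The method names are real boto3 Direct Connect calls, but the flow is simplified and the IDs are placeholders, not PacketFabric's actual implementation.

```python
import boto3

# Credentials handed over by the user (IAM user + API key); values are placeholders.
session = boto3.Session(
    aws_access_key_id="AKIA...",
    aws_secret_access_key="...",
    region_name="us-east-1",
)
dx = session.client("directconnect")

# Enumerate the user's Direct Connect gateways so they can pick an attachment point.
gateways = dx.describe_direct_connect_gateways()["directConnectGateways"]
for gw in gateways:
    print(gw["directConnectGatewayId"], gw["directConnectGatewayName"])

# Accept the provider-hosted virtual interface onto the chosen gateway; this is
# where the BGP session details (ASN, auth key, peers) get wired up.
# Assumes at least one gateway exists; the VIF ID below is hypothetical.
dx.confirm_private_virtual_interface(
    virtualInterfaceId="dxvif-example",
    directConnectGatewayId=gateways[0]["directConnectGatewayId"],
)
```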

[00:51:04] **Matt Wallace:** we're really out in the weeds, by the way. So everybody, he was like, this is such a great con con conversation about the future. And then they're like,

[00:51:10] **Anna Claiborne:** And then all of a

[00:51:11] **Matt Wallace:** what are these two saying? I'm sorry. We've done some cloud networking and so watch out.

[00:51:17] **Anna Claiborne:** And so then we went straight into, yeah. So then we went straight. Yeah. So we're went straight into cloud. Okay. So let's pull it, let's pull it back. Let's pull it back a little bit to, to stay focused on the price here, which is the, the point is, it, it's all about making it easier for the user, right?

[00:51:30] **Anna Claiborne:** So you're not toggling between cloud consoles and the biggest thing, like, it, it, it's ridiculous just what you get wrong, because you forget, you, you typo the vlan or you typer typo, uh, a cider in the prefix list. You, you do something. I inevitably always do something when toggling between two different consoles and then I'm like, Ugh, why didn't this session come up?

[00:51:50] **Anna Claiborne:** And then you go troubleshoot it. So anyway, it's just, it's taking that whole, that whole annoyance factor and time

[00:51:57] **Matt Wallace:** Two transposed numbers are responsible for [00:52:00] like more outages and failed deployments than like anything on Earth, right?

[00:52:03] **Anna Claiborne:** Yeah. Yeah. So I was thinking, I was like, I really want to, I have a note in my like, ongoing notes from this week. I was like, I really wanna go ask chat g p t to write me some sort of integration. I haven't really thought of what it is yet, but I want to have it, say, take the A W S API and the packet fabric API and, and write some sort of integration for me just to see what it comes up with.

[00:52:24] **Anna Claiborne:** Because that might be a really interesting point

[00:52:27] **Matt Wallace:** Is that as specific as you're gonna get though? Write me some sort of integration or you're just, you think you'll figure it out and then ask for something more specific?

[00:52:34] **Anna Claiborne:** I'm gonna figure, I'm gonna figure out exactly what I want to do. I just haven't put enough thought into it yet. Cause I wanna start with something pretty simple. So I need to put a little bit of thought into like, okay, what would be a, what would be a simple thing to do here? Write me integration and then see what it spits back and if it's any good and it works.

[00:52:49] **Anna Claiborne:** Because that could be a real, a real applicable thing in this world of services that we're at now where everything has an API and everything's a service. If chat G p t can reliably stitch together [00:53:00] those services with some code, cuz previously you would've had to have coding knowledge. Knowledge of all the APIs that you wanna integrate together to do this, which is not trivial.

[00:53:10] **Anna Claiborne:** And if you can have chat G P T, do that free pretty quickly there's a lot of cool integrated services that can be made past that. That's the most sort of practical thing I can think of, of like how is AI gonna change this API driven world? 
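
As a starting point for that experiment, here is a minimal sketch using the OpenAI Python library as it existed around this conversation (the `ChatCompletion` interface). The prompt wording and spec file names are hypothetical, and real API specs would likely need trimming to fit the context window.

```python
import openai  # pip install openai (0.27-era interface, mid-2023)

openai.api_key = "sk-..."  # placeholder

# Hypothetical local excerpts of the two API specs to stitch together.
with open("packetfabric_openapi_excerpt.json") as f:
    pf_spec = f.read()
with open("aws_directconnect_excerpt.json") as f:
    aws_spec = f.read()

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You write glue code between REST APIs."},
        {"role": "user", "content": (
            "Using these two API specs, write a Python script that provisions "
            "a PacketFabric cloud router connection and the matching AWS side.\n\n"
            f"PacketFabric spec:\n{pf_spec}\n\nAWS spec:\n{aws_spec}"
        )},
    ],
)
print(response["choices"][0]["message"]["content"])
```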

[00:53:22] **Matt Wallace:** I'm, I'm like, really? I'm even thinking about a whole other level too, where I think the intent of what you want to get done still matters because I don't, I think we're still quite a ways away from going. What are the top improvements you think you can make with these APIs and having it be smart enough or insightful enough to really piece that together creatively?

[00:53:44] **Matt Wallace:** Like, I think it would come up with some stuff and they would probably be things you could integrate, but they'd probably be less valuable than what you, with a lot of experience and knowledge and like, customer experience and so on would, would really pick. But I will say right after the chat G P [00:54:00] T API was released, I was building this text to speech, speech text interface.

[00:54:04] **Matt Wallace:** Cause I wanted to talk to chat G P t cuz as soon as I started using the G P T web interface, I'm like, why do I have to suffer having like, the Amazon version and the Siri version of this? In my world, can I just have G P T power, everything Cuz it's so much more intelligent, obviously.

[00:54:20] **Matt Wallace:** Right? Like if I, all those times it feels like such a, it feels so painful to, to hear, the echo talking back to. According to an Alexa, answers contributor, blah, blah blah, blah. But it's like, it's still useful. And actually it seemed like magic when I first used it, but comparatively, to what you get from chat e b T.

[00:54:38] **Matt Wallace:** Anyways, so I'm writing this interface and the very first speech to text thing I used is this, was this service at the time, they might have changed their name called Eden ai and it was like a front end for 11 backend services, but I'd never seen it. But I went and they had API specs and I literally grabbed the open API spec and really paste it in the chat window of chat was like, write me a [00:55:00] class that will accept audio files either by streams or a file name that you can then open and read and then call this API to convert speech to text and leave the spot for my API key.

[00:55:13] **Matt Wallace:** Worked almost one shot at it. So, I mean, I had to paste the API key, but I got the class. Yeah, it was it was mind blowing. I, I, this whole thing I think would've taken me like two weeks maybe. Cuz I'm not, I'm a little bit rusty on the keyboard. I know what I want to do, but I'm still that person who, I've, I've written so many hundreds of thousands of thousands of lines of code, but I'm rusty enough that at least as a couple months ago when I started doing this might be true, I'd be like, oh shoot, what is list versus dick syntax in Python?
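
The class he describes would look something like this: a minimal sketch against a generic HTTP speech-to-text endpoint, since the URL, field names, and response shape here are placeholders rather than Eden AI's actual contract.

```python
from io import IOBase
from typing import Union

import requests

class SpeechToText:
    """Accepts audio by open stream or by file name, posts it to a
    speech-to-text HTTP API, and leaves a spot for the API key."""

    API_URL = "https://api.example-stt.com/v1/speech_to_text"  # hypothetical

    def __init__(self, api_key: str):
        self.api_key = api_key

    def transcribe(self, audio: Union[str, IOBase]) -> str:
        # Accept either a path (open it ourselves) or an already-open stream.
        stream = open(audio, "rb") if isinstance(audio, str) else audio
        try:
            resp = requests.post(
                self.API_URL,
                headers={"Authorization": f"Bearer {self.api_key}"},
                files={"file": stream},
            )
            resp.raise_for_status()
            return resp.json()["text"]  # response field name is an assumption
        finally:
            if isinstance(audio, str):
                stream.close()

# Usage: SpeechToText("my-api-key").transcribe("clip.wav")
```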

[00:55:41] **Matt Wallace:** It was that rudimentary cuz like, it's the thing you don't ever need to know cuz it's so easy to find out and it's, it doesn't really, affect the way that you think about what you're gonna write and yet what you, it's the thing that always tells you you're not a real programmer, right? Because you can't even remember this tiny

[00:55:57] **Anna Claiborne:** Yeah. Or you're not a real programmer anymore. [00:56:00] Yeah. Like that,

[00:56:00] **Matt Wallace:** Yeah.

[00:56:01] **Anna Claiborne:** that drives, cuz I haven't, I haven't written any, any real, like, practical code in three years now. It's like, I, I still was probably doing, there's scripts that I make myself and things like that, but whenever I go to do it now, it's so frustratingly slow because in my head I still can kind of get, it's like almost looking at things through fuzzy glasses now.

[00:56:21] **Anna Claiborne:** Cause it's like, I'm like, okay, I know what I wanna do and here it is. But then I, I can't remember some absolutely trivial piece of syntax and I have to look it up and then it happens again and then it happens again. And you realize that you're severely out of practice.

[00:56:33] **Matt Wallace:** Yeah. Yeah, it's weird. And I mean, I spent probably eight or 10 years basically where software engineering was more or less my whole job. And yet, you, you look back, I felt, I think the last time I wrote code that kind of went into production was probably like 2014 or 2015 where I, I was, I replaced an entire like, billing system basically.

[00:56:53] **Matt Wallace:** But other than that, uh, it's been a really long time, but it, it, uh, it's interesting because I, this is the [00:57:00] thing that makes you appreciate the value of experience. And I don't mean to say like, oh, you can't use the tool if you don't have it, but it's the thing that makes me realize I have such a clear picture of what I want from an intent and why that intent exists as opposed to the implementation was hard.

[00:57:15] **Matt Wallace:** Now I flip it around and I, I've actually seen, I remember interviewing someone for a software development role and I said, let's build this thing, right? And he sits down and, and I'm like, you ask any question, et cetera. He sits down and he starts coding and I'm amazed cuz just like right off of the bat, he's just fluidly like importing packages and just writing a ton of code and synt tactically the code is working and he seems so fluid.

[00:57:39] **Matt Wallace:** I'm like, wow. Like this is somebody who's literally obviously doing this like all the time. And yet the funny thing is his implementation was definitely not optimal. And I was thinking the whole time, I was like, wow, he's doing it so well. He's doing the wrong thing. So well,

[00:57:52] **Anna Claiborne:** Yeah, the wrong thing is so right.

[00:57:54] **Matt Wallace:** Yeah. And it's, it, it makes you appreciate, I mean, it's like, I wanna say this with like, all the humility [00:58:00] deserves, right?

[00:58:00] **Matt Wallace:** Because like, chat g p T is my crutch. It's like a little embarrassing, but on the other hand, it gets me excited about the fact that it becomes possible to kind of close that gap between intention and implementation a lot. Which is great. I mean, and I think at the end of the day, you talked earlier about people kind of becoming cross domain experts and it, it means, I think if you're a software engineer, chat G P T and the tools like IT Codex, et cetera, are going to probably replace a ton of the effort that you put into click clacking on the keyboard to get the code when you have the intent clearly in your mind, but you're gonna replace that with a lot more time.

[00:58:39] **Matt Wallace:** Like understanding why you're doing what you're doing, listening to the customer or talking to a stakeholder, understanding the domain, right? Because we don't get software that drives cars and ships products just because we can, cuz we can solve a coding challenge, right? You can go solve the hard algorithm challenge.

[00:58:56] **Matt Wallace:** It doesn't mean that you really know like how the flow of [00:59:00] processing a credit card works if you've never done it before. And there's probably a million things that are like that.

[00:59:05] **Anna Claiborne:** yeah. I, I'm sure, and it's funny cuz the whole idea of like intent-based networking that's been around for a while. I, I, I think it was a really big buzzword back in like, gosh, I wanna say like that 2015, 2016 era, intent-based networking and, and we are

[00:59:21] **Matt Wallace:** open flow was like all the rage.

[00:59:23] **Anna Claiborne:** Yep. And open, yes.

[00:59:25] **Anna Claiborne:** OpenFlow was all the rage. That was the first SDN implementation of, this is, this is going to, this is gonna change everything. And turns out OpenFlow is actually really tricky and so, Yeah. And it's, I, it just goes back to the root of everything that we were just talking about in that it's really, it's really hard to be both an expert on, like conceptually because it is sort of this wisdom with, experience that gives you the ability to go, I know exactly what it is that I, that I want.

[00:59:55] **Anna Claiborne:** But then your skills are often so dull that it's difficult to do it [01:00:00] or your skills are so sharp that you can implement something quickly, but the picture of what you want is pretty fuzzy. And so it's such an interesting dichotomy there because in order to really, in order to produce something that is both, in order to produce efficient quality code that does exactly the thing that needs to be done, you need to have both of those elements.

[01:00:21] **Anna Claiborne:** You need to have the intent very clear and you need to have execution that is done in a way that is, uh, high, highly efficient. So, yeah, I'm trying to think of like how, the having AI be able to write that, that initial code for you, and especially as it improves and does things more efficiently that will allow people to just be even clearer on the intent and do that whole, to your point, do the whole cross domain expertise thing, because then you can be an expert on 16th century Russian literature and not know anything about code and still [01:01:00] get, still manage to produce good, efficient code that analyzes every 16th century Russian work and, looks for a particular word

[01:01:09] **Matt Wallace:** Yeah. Being in technology, I, I bet this happens to all of us, but I have the parents and, and the friends who are like, can you help me with the X? Right. And it's the, why does my phone do this thing? And my computer gives me this error message?

[01:01:23] **Anna Claiborne:** 90% of the time. I, I dunno, the answer's gonna be, I have no idea.

[01:01:28] **Matt Wallace:** Reboot it,

[01:01:29] **Anna Claiborne:** Yeah. Reboot it. That is, yeah.

[01:01:32] **Matt Wallace:** call support. Well, it's funny though. My, my dad has many times wanted to do something sort of technical, I think, right? And he was like, there was a time, uh, I remember, uh, writing him a, a movie blog review site out of Ruple wasn't a ton of effort, right? It was mostly like, install it, download a bunch of plugins.

[01:01:49] **Anna Claiborne:** I haven't heard that one in a while. Yeah.

[01:01:51] **Matt Wallace:** I think it's still around, but I mean, this is circa like 2000, maybe six, seven, right? So this is when Ruple was really having a heyday. 

[01:01:59] **Anna Claiborne:** [01:02:00] I still, I, I still love php, don't get me wrong. I, I was not a big frameworks person, but I, I love, I wrote a lot of php, so

[01:02:06] **Matt Wallace:** I wrote a lot of PHP two. I mean, we actually used it for some of the customer front end stuff at Exodus, like, and I wrote some, I, I wrote a remember writing a translator that would pull all of the, uh, checkpoint config files and it would log into the Cisco pys that actually dump, it used a screen like an SSH plus screen module in Pearl rather than

[01:02:29] **Anna Claiborne:** Oh yeah. Pearl Pearl net. Pearl net S SSH was the best. It was the

[01:02:34] **Matt Wallace:** That was like, there's so much, I bet there is a mountain of automation baked on that nowadays, people still today, even at pax are like, yeah, this thing, and we're not just on the API on this, this little sketchy. And I'm like, well, let's just go get it from the console. Like as it, because I'm thinking you guys have never dealt with Net Pearl ssh.

[01:02:54] **Matt Wallace:** Right? That's, that's how we did it in my day and get off my lawn. Right. 
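
That era of screen-scraping has a direct modern descendant in Python. Here is a minimal sketch using the netmiko library, which handles the prompt-wrangling that Net::SSH-plus-screen hacks used to; the device details are placeholders (and a real PIX wouldn't be supported, so picture any SSH-reachable box).

```python
from netmiko import ConnectHandler  # pip install netmiko

# Placeholder device details; swap in whatever you can actually SSH to.
device = {
    "device_type": "cisco_ios",
    "host": "192.0.2.1",
    "username": "admin",
    "password": "secret",
}

# netmiko logs in, disables paging, runs the command, and returns clean text,
# which is exactly the tedium the old Perl scripts reimplemented by hand.
with ConnectHandler(**device) as conn:
    running_config = conn.send_command("show running-config")

print(running_config)
```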

[01:02:57] **Anna Claiborne:** Hey, it is a try to True. It's funny cuz [01:03:00] we're looking at some extreme equipment right now automating that and they don't have an api and I was like, back to the

[01:03:05] **Matt Wallace:** do have an api.

[01:03:06] **Anna Claiborne:** They do really, why can't I fi?

[01:03:09] **Matt Wallace:** However, truth, truth seriously, and I, I'm pretty sure I can just say this but there was a time where we actually had a workflow that we. It matched what we did on the CLI matched what we automated against the api, and we tested the API version, and it worked fine in all of our test dev when we finally rolled in the production.

[01:03:29] **Matt Wallace:** And the moment we rolled it to prod it crashed the switches because the whatever was implementing the API had some kind of memory buffer problem and it would actually go oom and then the, the, the switch would die. And I don't know if it needed a hard reset or what, but it was like, I just remember getting one of these, these really cranky CEO calls, like the automation just, just caught it up, caused a prod outage.

[01:03:50] **Matt Wallace:** And I'm like, this has been tested more than anything we have ever tested in our entire lives. But this just goes to show the flaws in like testing. We were [01:04:00] testing in a lab of our, we, we did all the use cases. Our test coverage was awesome, but we didn't test it a hundred thousand times. And when you start doing it at prod scale, the same thing actually happened at VMware.

[01:04:10] **Anna Claiborne:** uh, testing anything at scale is hard,

[01:04:12] **Matt Wallace:** yeah, and it's always the memory things, by the way, like V Shield manager back in the day, I, in those early days of v cd, the Blue Lane appliance that VMware bought, that became vShield Edge and vShield manager there, there was a, there was some integer that that was like about the, which API call you were on.

[01:04:31] **Matt Wallace:** And it was only like a 16 bit integer. And so on the 65,536 API call, which by the way, VCD was just making them in the background. So it was, you were gonna get there at some point no matter what you did. But on the 65,000 506 36 call it would stop responding like completely, I mean like a hard reboot basically.

[01:04:53] **Matt Wallace:** So, yeah, unbelievable the way these funny things like creep up on you. And again, it's one of those things if you tested it, quote unquote in [01:05:00] real world conditions, but this ends up being like, how do you let it run for months? How do you hit it with tens of thousands of calls? Uh, they're being resilient against these like kind of unexpected errors is really tricky.
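
For anyone who hasn't hit this failure mode, here is a toy illustration of that counter bug. This is not vShield's actual code, just a demonstration of how a 16-bit call counter wraps on exactly the 65,536th call.

```python
import ctypes

# A call counter stored in a 16-bit unsigned integer silently wraps to zero;
# in C, the same wrap can trip assertions or index out of bounds instead.
counter = ctypes.c_uint16(0)

for call in range(1, 70_000):
    counter.value += 1  # ctypes masks the value back into 16 bits
    if counter.value == 0:
        print(f"wrapped on call {call}")  # -> wrapped on call 65536
        break
```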

[01:05:10] **Anna Claiborne:** That's why, uh, we developed back in, this was back in like the Prolexic days because we had, we had we had Ixia to generate just an, a huge, huge amount of traffic. Cause when you're dealing with DDoS, you're always dealing with brand new conditions, brand new attacks. You're always dealing, dealing with some, some bit and some header that you've never seen before.

[01:05:32] **Anna Claiborne:** Uh, because there's, uh, there's a program on the other end that is, sometimes writing these packets and, sometimes accidentally malforms them and all these other things. And so we've always would say, uh, production is the best lab. It's always been the best lab because

[01:05:49] **Matt Wallace:** I don't always test my code, but when I do, I do it in production.

[01:05:52] **Anna Claiborne:** production. Yeah,

[01:05:53] **Matt Wallace:** do a Heineken, when I say that,

[01:05:55] **Anna Claiborne:** I You have Yeah, you absolutely do. And that's, it's

[01:05:59] **Matt Wallace:** wait, it's [01:06:00] doe. Sorry.

[01:06:00] **Anna Claiborne:** yes. The DES

[01:06:01] **Matt Wallace:** to the most interesting man in the world.

[01:06:03] **Anna Claiborne:** Yeah, that, that's, I forgot about that guy. That is a great, that is a great meme. 

[01:06:07] **Matt Wallace:** Es especially for that.

[01:06:09] **Anna Claiborne:** yes, it really is. So, and it's funny, it's like when, we talk about all this lofty pie in the sky sort of stuff, but when are we going to actually be able to test things reliably?

[01:06:20] **Anna Claiborne:** Like, isn't that sort of step number one that uh, we stop using production as the best lab? Like when, when is that gonna happen?

[01:06:27] **Matt Wallace:** This reminds me too, cuz I, I had some folks that I worked with at, uh, VMware and I, they were very into the intersection of like lambda calculus and functional programming. Right. And having code that you could prove, even from a mathematical sense that like, this will always run correctly. I mean, obviously it doesn't account for things like, okay, you used non ecc ram and a, something coming from the sun knocked an electron loose and flipped a bit.

[01:06:54] **Matt Wallace:** I mean, there's no accounting for that stuff. Cosmic rays the Fantastic four [01:07:00] and your error in production both the same cause. Right. But the thing I was the, it, it's most dramatic to me when juxtaposed to automation because the thing they'd always say is, well, you don't wanna create side effects, right?

[01:07:13] **Matt Wallace:** Like, the idea is input equals output over and over and over again on any given function. And I think there's some interesting evolutions in functional programming that I've, that I've seen in my tiny bit of spare time that I've, dedicated to keeping up with this. Where I saw a language recently that kind of tries to implement a lot of the good things that a scholar did, but it tries to be very declarative about things with unknowns where there's un unknown types or unknown side effects.

[01:07:40] **Matt Wallace:** Like it has a specific whole set of I don't wanna say patterns for dealing with less known things where, you know, that doesn't apply. And that dirty world though, that's what automation is. And I was thinking about, all those like net pearl, ssh, it's like those things end up being like 20 lines of functionality and then like a thousand lines of error [01:08:00] catching, right?

[01:08:00] **Anna Claiborne:** I know. I know. Yeah. Yep. Yeah, that is, that is so true too. Yeah. Cuz you have to account for every single error that might possibly happen when you're just reading input like that.
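
A toy contrast makes the point: the pure-function ideal on one side, and the defensive scaffolding that device automation accumulates on the other. The device object and its methods here are hypothetical stand-ins, not any real library's API.

```python
import time

def pure_add(a: int, b: int) -> int:
    # Same input, same output, no side effects: easy to reason about, easy to prove.
    return a + b

def fetch_config(device, retries: int = 3, delay: float = 2.0) -> str:
    """One line of intent ('get the config') wrapped in the error catching
    that screen-scraping automation always accumulates."""
    for attempt in range(1, retries + 1):
        try:
            return device.send_command("show running-config")
        except TimeoutError:
            time.sleep(delay)   # device is slow; back off and retry
        except ConnectionError:
            device.reconnect()  # hypothetical helper on the device object
        except Exception as exc:
            # The long tail: garbled prompts, truncated output, vendor quirks...
            print(f"attempt {attempt} failed: {exc!r}")
    raise RuntimeError("could not fetch config after retries")
```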

[01:08:13] **Matt Wallace:** Some of which you don't know, right? It's the you there's no nu unknowns you test for, but then it's even like, how do you test for the unknown unknowns, right? And

[01:08:20] **Anna Claiborne:** And we're back to production as the best lab

[01:08:23] **Matt Wallace:** Yeah. Don't you end up frustrated though when some, when you get like a, some random Java stack trace and it's like, it doesn't even really explain what's happening.

[01:08:31] **Matt Wallace:** You're like, listen, why did you just fail an assertion and failing the assertion through an error as an exception? Like, can't you do something with it? Like you were testing for it, assume that it fails now do something rational, not just like throw up and die. Right. So, which I

[01:08:47] **Anna Claiborne:** failed successfully.

[01:08:49] **Matt Wallace:** yes. Failed. Failed successfully. Check another test passed. Well, so maybe AI helps with this. Like, one of my, uh, [01:09:00] old coworkers back at VMware now runs a company that does AI generated test coverage. I'm actually hoping to have him on at some point because that he's been there running that company like three or four years now.

[01:09:11] **Matt Wallace:** And I, that's like, so before the G P T moment nowadays of course, I, I literally think about this automation conversation. I go, I wonder what happens with a G P T form? And you go, here's a functional script. It does what I needed to do. Tell me first of all, all the things you think it does and then write me every possible, write me a test for every possible error condition.

[01:09:33] **Matt Wallace:** Yeah. Everything you can think of, right? And it, and I just, it's such a big domain. Like, uh, the one thing about doing this I think that you probably agree with is just when you think you've got everything covered, there'll always be some weird thing. Oh yeah, this installer doesn't have the permission set correctly on that thing.

[01:09:52] **Matt Wallace:** And oh, and memory error. Oh, look at that. The number of cores on the processors, not divisible by two and [01:10:00] therefore, I mean, just a million dumb things and something always manages to go wrong. I mean, you get hardened, but it's never perfect.

[01:10:07] **Anna Claiborne:** Now I like the idea, I actually really like the idea a lot of asking G P T four to basically become your qa. I, I wonder like I, I wanna do that test now along with my integration test, just to see what it comes up with, because that sounds really fascinating.

[01:10:23] **Matt Wallace:** Let me know how it goes. I actually have had good results feeding it code and going essentially harden this, optimize this. What can possibly go wrong with this? What, what is, what's assumed by this? Cuz it's, it, uh, it's kind of a, it's kind of a very analytical thinker. I guess I'd.
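
To picture the output, here is the kind of error-condition coverage you'd hope a GPT QA pass would spit back, sketched by hand. `parse_prefix_list` is a small stand-in function invented for the example so the tests are self-contained.

```python
import ipaddress
import pytest

def parse_prefix_list(text: str) -> list[str]:
    """Stand-in function under test: parse a comma-separated CIDR list."""
    prefixes = []
    for token in text.split(","):
        token = token.strip()
        if not token:
            raise ValueError("empty prefix")
        try:
            prefixes.append(str(ipaddress.ip_network(token)))
        except ValueError as exc:
            raise ValueError(f"bad prefix {token!r}") from exc
    return prefixes

# The error conditions a thorough QA pass should probe, not just the happy path.
@pytest.mark.parametrize("bad_input", [
    "",               # empty input
    "10.0.0.0/33",    # impossible mask length
    "not-a-cidr",     # garbage string
    "10.0.0.0/24,",   # trailing comma leaves an empty token
])
def test_rejects_malformed_prefixes(bad_input):
    with pytest.raises(ValueError):
        parse_prefix_list(bad_input)

def test_accepts_valid_prefix():
    assert parse_prefix_list("10.0.0.0/24") == ["10.0.0.0/24"]
```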

[01:10:40] **Anna Claiborne:** Yeah. Yeah. It's this always, this, this always reminds me of my favorite joke whenever we're talking about qa. Cause I'm a horrible QA person, by the way. So, it's like, so

[01:10:50] **Matt Wallace:** am the technical debt.

[01:10:51] **Anna Claiborne:** I, uh, he, yeah, I'm, yeah, that's, that's not my, definitely not my role in life. A QA engineer

[01:10:56] **Matt Wallace:** I'm really good at rapid prototyping though.

[01:10:58] **Anna Claiborne:** Yes. [01:11:00] Rapid prototypers are typically not the, uh, not the QA testers. 

[01:11:03] **Matt Wallace:** I need a pair. Programmer and G P T volunteered.

[01:11:06] **Anna Claiborne:** yeah, it's the QA joke. So, a QA engineer walks into a bar, he orders a beer orders, zero beers, orders, 9,000,999 beers, orders a lizard, orders negative one, beers orders, a string of random texts, orders, all this, first real customer walks in, asks where the bathroom is and the, the bar bursts into flames and kills everybody.

[01:11:28] **Matt Wallace:** Oh my god. I haven't heard that, but that's

[01:11:31] **Anna Claiborne:** really? Ah, now you got a new, yeah. Now you got a new

[01:11:34] **Matt Wallace:** That, that's so real too. I mean, actually reminds me of that, that little quip they used to hear, this is like 20 year old quip, right? Where they go, if Microsoft built cars by now, they'd get a million miles to the gallon, cost a hundred dollars, ride smoothly, drive themselves, but explode once a year killing everyone inside.

[01:11:52] **Anna Claiborne:** It might be worth it. I mean, I don't, I like that actually doesn't seem like the worst deal.

[01:11:57] **Matt Wallace:** Roll, roll the dice.

[01:11:59] **Anna Claiborne:** Yeah. I mean, [01:12:00] te Teslas do in fact burst into flame, occasionally really burst into flames and kill everyone. So, and they don't even do all that cool stuff. So,

[01:12:09] **Matt Wallace:** Yeah, they do some cool stuff though. I, uh, I've, I've been on the, I've been on, I do, yeah. Well, my wife does, but, uh, it's kind of the family car. But yeah, I, I've been, I've been turned on the FSD beta now for, I don't know. It's like 16, 18 months. Yeah. Since the, since the days when you had to have the perfect driving score.

[01:12:31] **Matt Wallace:** And I drove like perfectly fastidious and I was like, honey whites is following two close alert. You cost me 10 points, it's gonna take two more weeks to get eligible. Kind of thing. Uh, and then I got it going, but it's, it was terrifyingly bad when I first drove it. Im doing all kinds of weird things like crossing the wrong lane right after it turns into a parking lot and just proceeding straight from a right turn lane.

[01:12:54] **Matt Wallace:** And of course, like I'm not counting on it to be sane and so I pay a lot of attention to it. [01:13:00] And so, it's safe because I actually trust it. It's probably safer for me than the cruise control my volts, cuz I have a Chevy Volt too and it's got an adaptive cruise where it uses radar to like pace the card.

[01:13:11] **Matt Wallace:** If, if the car in front is dead stop it will come into a dead stop. But it sometimes it's been a little close, like where it's like doesn't stop start breaking soon enough and it breaks really hard and you're like, oh boy, I don't even wanna step on the, the break now cuz it's trying. And I might mess it up cuz it'll disengage, but I'm like terrified I'm gonna hit the guy in front of me.

[01:13:31] **Matt Wallace:** Uh, and it never did so hooray. But in some ways, because I trusted it more, cuz it's such a quote unquote simple system, I think maybe in some ways it's riskier. I, I had a friend though, who literally had, had, eight, 10 years of Model S and would drive down the freeway, wedging his iPad against the steering wheel and playing games on the iPad while on the freeway.

[01:13:53] **Matt Wallace:** Not generally at like super freeway speeds more like traffic on the freeway, which it's probably the, the, the [01:14:00] safest ca case to use it because everything is so straight. There's nothing to do with you're

[01:14:03] **Anna Claiborne:** And it's so structured, you're just going, it, it, it is accelerate decelerate, accelerate, decelerate, or accelerate stop. It

[01:14:09] **Matt Wallace:** But it did get into an accident now and it was technically the other person's fault, but it did happen while he was on on. Not fsd, but on autopilot not paying attention. The guy just cut in and didn't yield and the car wasn't like a hundred percent clear on like, okay, I think I get to keep going. And it didn't realize that its front corner was gonna hit him, and so it did hit him.

[01:14:31] **Matt Wallace:** So that guy ended up having to pay for it. But still, I'm sure friend wishes he was looking up at that moment and could have averted the accident. So

[01:14:38] **Anna Claiborne:** Yeah. So what, what do you think about self-driving cars? The, I'm, I'm, I'm really curious about this because you're a huge AI proponent. What, what do you think when do you, or I should ask more specifically, when do you think we're gonna get self-driving cars?

[01:14:53] **Matt Wallace:** Uh, it de depends, but I mean obviously like the powerful level two systems are here now, and I, I probably as [01:15:00] much as, uh, as much as I'm really careful with the FSD on city streets, I probably have done more slightly risky driving on freeways than any would want, anyone would want me to. I think real level three, where within the confines of many places, including certainly the freeway, certainly the freeway at lower speeds where you can be unattended, I think it's gonna be here literally in the next two or three years.

[01:15:27] **Matt Wallace:** I mean, and it's shipping now, like Mer Mercedes has a car, no, sorry, it's bmw. The BMW seven series or or eight series is gonna get level three, a very limited level three self-drive, like on this next model if it's not out right now. And Toyota has a car in Toyota that is also level three enabled, but it's super restricted to start, it's like only works on the freeway only when speeds are less than like 37 miles an hour.

[01:15:54] **Matt Wallace:** So it's, it's literally the gridlock avoider, it's still like a killer app, right? Like [01:16:00] is there more of a miserable time to be in your car, right, than driving in gridlock on the freeway. But I think getting to level four, hard to say. I think it maybe it depends a little bit on what happens. With things like lidar or will it follow up a progression or, will we see, just better image recognition models when Tesla began this journey with their self-driving stack?

[01:16:24] **Matt Wallace:** I mean okay, so Google shipped the, the attention is all you need paper that introduced the self attention mechanism for transformers, which like revolutionized so much of ML in 2017. So everything since then, like something like 80% of all like big new interesting ML models now I think are, are based on transformers in some way.

[01:16:47] **Matt Wallace:** And I feel like, Tesla literally had fully functional kind of fsd at least on the freeway before that. Right. So I mean you have to admire how good they made it when they had essentially [01:17:00] junk hardware with no lidar kind of junk software in so many ways. I think we're well on our, I mean I think there's a ton of improvement that will happen.

[01:17:10] **Matt Wallace:** And to be fair, like I literally, I went to lunch today with my wife and the car drove me across town and I don't, I didn't intervene at all. So I was like navigate to the sandwich shop and I kicked it in and it drove miles and miles and made right turns and left turns and yeah. Everything and it was totally fine.

[01:17:27] **Matt Wallace:** And I've got my hand on the wheel and I'm paying attention, but I'm paying attention to that like very relaxed. Like I expect it to do certain things but it's like doing it all for me. It's kind of a nice way to drive. I dunno if it's $15,000. Nice. Cuz I mean it does not change the driving experience.

[01:17:42] **Matt Wallace:** Right. It

[01:17:42] **Anna Claiborne:** Yeah. You still have to be constantly engaged and that, that's part of the, that's, that's the thing that we're looking to solve. I, I don't know. I ha, I

[01:17:51] **Matt Wallace:** I wanna have a desk in my car for sure. Right. Or

[01:17:55] **Anna Claiborne:** That's the whole point is you could use driving for other time. Driving becomes a lot like sleeping. It's a, [01:18:00] it's a period of time where you're unable to do other tasks that would be much more productive. Yeah. Uh, I, I think layer, I think level four is gonna take a lot longer than anybody else thinks.

[01:18:10] **Anna Claiborne:** I, the most practical application that I keep coming back to, especially for like a level three oriented system, is long distance cross-country trucking. Because that

[01:18:20] **Matt Wallace:** I think, I think that's the first gonna be the first level four, literally for

[01:18:24] **Anna Claiborne:** you think so? Well, because I should, I don't think the trucks, well, okay. I guess

[01:18:29] **Matt Wallace:** sorry, I don't know if it's trucks. I think freeway beats city, right? Because there's just so many less variables for sure. Although,

[01:18:36] **Anna Claiborne:** by a lot.

[01:18:37] **Matt Wallace:** mean, have you seen the cruise cars drive around?

[01:18:39] **Anna Claiborne:** I have, I have seen this and, and they look like they're, they look like they're doing well, but they're also surrounded by almost entirely human drivers, which won't be in

[01:18:52] **Matt Wallace:** Oh, sur Oh yes. Surrounded by human drivers. Well, so this is something I think that is level four reaches reality. I think something, we, [01:19:00] in the context, let's take this back to networking, right? I kind of feel like for several years, actually maybe as long as like six or seven years, there's been like edge, edge, edge has been like a, a drumbeat and a lot of

[01:19:11] **Anna Claiborne:** tell me where, where is the edge? What, like, I love this. This is like the fun question. So where, where is the edge right now?

[01:19:17] **Matt Wallace:** Well, I usually categorize things in like near edge, far edge, right? So to me, near Edge is like all those data centers that aren't near cloud, right? Like everything that's being operated in, Denver and Utah and, uh, Portland that's not in the cloud area and Minnesota and whatever, right? And I, and I, I was at VI West for a long time and one, part of the game plan was build, build and operate the best data centers and have the best service in all those secondary markets.

[01:19:44] **Matt Wallace:** There's not gonna be a Virginia, et cetera, et cetera. And I think in terms of proximity and getting out like roughly adjacent to, uh, some big portion of users, that's useful. But then I think the far edges. Everything else, it's the truck, it's the sorting [01:20:00] facility, it's the cars, it's the, whatever.

[01:20:02] **Matt Wallace:** It's the farm. It's a million places. And the problem is like, what? I think everybody really, the way I, the hype for me was 5G is gonna change everything. And then later you're like, sorry, how does 5G change

[01:20:16] **Anna Claiborne:** Has anything?

[01:20:18] **Matt Wallace:** it's not a fi it's not a far edge technology

[01:20:20] **Anna Claiborne:** No, no.

[01:20:21] **Matt Wallace:** it goes a

[01:20:22] **Anna Claiborne:** requires things to be even

[01:20:23] **Matt Wallace:** or something, right?

[01:20:24] **Matt Wallace:** Yeah, exactly. So it's like, it's a ton of bandwidth for, it's basically a wire replacement for like dense markets instead. Or I think, the whole idea of private 5G and or, or wifi six something along

[01:20:36] **Anna Claiborne:** Oh yeah. Fi, 5G in the office. Buildings. Yeah, office buildings, factory floors, things like that. It makes a lot of sense. But out outside of that, the economics get just really

[01:20:45] **Matt Wallace:** yeah. Instead, what we need is really tiny amounts of bandwidth on really low frequencies that are powerful. Like the submarines where you knew if it like sank to the bottom of the ocean and the ballast was broken, it could actually still send like one bit of [01:21:00] morse code per, 10 seconds like halfway across the planet, right?

[01:21:04] **Matt Wallace:** Those kind of ultra-low

[01:21:05] **Anna Claiborne:** know that they could do that. That's cool.

[01:21:06] **Matt Wallace:** Yeah. There's some, something about you could read more science, more Michael Kreon and uh, and uh, what's that other guy who writes all the hunt for red October? Books his

[01:21:16] **Anna Claiborne:** Oh, yeah, yeah,

[01:21:17] **Matt Wallace:** totally forgetting. Tom Clancy, right? You read more Clancy and Creighton and then you, this, this stuff happens in those, but yeah.

[01:21:23] **Matt Wallace:** Uh, I think maybe that's what some of those devices need, right? Because all they have to say is like, it's this temperature and I'm okay like every hour or something. But they, but you really want no connectivity of that. And, and actually LTE is kind of a junk solution for that. I mean, in the sense that it's way more throughput than you could possibly need.

[01:21:42] **Matt Wallace:** And it goes, and you, what you really want is distance, not throughput, right? You need reliability probably sometimes.

[01:21:48] **Anna Claiborne:** Yes. You, you need distance, you need reliability and, and you need, and you need a decent throughput, which for what the applications are running today, uh, LTE provides that, it may not be the case in the [01:22:00] future, but it works enough for now.
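
To see just how small that "I'm this temperature and I'm okay" message is, here is a sketch of a packed heartbeat payload in Python; the field layout is invented for the example.

```python
import struct
import time

# Hypothetical layout: device id (4 bytes), unix time (4 bytes),
# temperature in tenths of a degree C (2 bytes), status flag (1 byte).
HEARTBEAT = struct.Struct(">IIhB")

payload = HEARTBEAT.pack(
    0x00C0FFEE,        # device id (made up)
    int(time.time()),  # timestamp
    217,               # 21.7 degrees C, fixed-point
    1,                 # 1 = okay
)
print(len(payload), "bytes")  # -> 11 bytes: fine for a LoRa-class link, overkill never needed on LTE
```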

[01:22:02] **Matt Wallace:** so let me link the network question and the edge question with something that, that aligns with my original self-driving car vision. Let me ask you how we get there. 

[01:22:10] **Matt Wallace:** So a long time ago I did this economic calculation.

[01:22:14] **Matt Wallace:** There was like 600 billion in energy savings. But my vision of self-driving at the time was like a whole fleet of cars, like drafting behind one another, doing a hundred miles an hour, where fleet level coordination would have whole batches of cars, like slower speed up. So there would be no stoplights.

[01:22:28] **Matt Wallace:** You just zip through and if you were gonna collide, you're, you're cohort or whatever you wanna call it would slow down to let the other one pass, et cetera. So I still think maybe there's a chance that on some level we're gonna force some kind of self-driving car protocol cuz of exactly what you said.

[01:22:45] **Matt Wallace:** You don't, there's a bunch of human drivers. If at some point you wanna make the cars have this higher bandwidth, more useful communication where they could warn each other, there's a pothole ahead, or I might be, my camera [01:23:00] is, uh, blocked, so I'm, I'm going back, I'm gonna take myself offline, but let me get home first or whatever.

[01:23:06] **Matt Wallace:** Uh, what does that take from a network perspective, if we assume there's like literally millions and millions of vehicles and they're all over the place and roads stretch all over in a, if somebody said, Anna designed the network for the self-driving car fleet of the future, so they can go anywhere, there's a road maybe including dirt roads.

[01:23:22] **Matt Wallace:** What do you like? How do you approach that? 

[01:23:24] **Anna Claiborne:** This is where I think everybody gets wrapped around 5G and maybe doesn't really understand it, but this is if you had a 5G radio in every car where they could communicate with each other, and then also enough processing power. The, the problem comes in when you have to make any sort of call back to any sort of central computing type system or intelligence to make any sort of decision that that's a huge problem from a network perspective, because the amount of data and the latency upon which you need a response to that data is unrealistic because, uh, the, I mean, accidents happen [01:24:00] in fractions of seconds.

[01:24:02] **Anna Claiborne:** And not to say, it, network traffic is still moving at the, at the speed of light at least once you get it in fiber. But that's really not fast enough. Our compute, our compute, the edge, the edge still is not ubiquitous enough to service anything like that. So you have to have, like, all the decisions and everything have to be in cars.

[01:24:20] **Anna Claiborne:** So really the best, like the best case scenario is that the cars can accurately communicate ev within, to every car within, a thousand feet of them or, or, maybe even a, a bigger, a bigger area than that, because that's the only communication that matters is that real time for your direct area.

[01:24:39] **Anna Claiborne:** It would be nice to know, and it would also be less time sensitive to know what's going on a mile up ahead. What does the congestion look like up there? How do I plan for it? And that's something you could do. And I mean, that could probably be handled right now because you're talking about a, a much smaller amount of data potentially.

[01:24:57] **Anna Claiborne:** Maybe just sending the car speed, for example, [01:25:00] back to somewhere central so it can, so there can be a central idea of congestion like Google Maps does. Now. That's a low amount of bandwidth that could be handled over 4G like it is now, but you definitely need. High bandwidth, super responsive car to car systems as well as enough compute power and intelligence in that car to handle everything that it's receiving.

[01:25:21] **Anna Claiborne:** And in. And because the, your immediate area is the only important one, I mean, you, you have to know something about, what's coming up, but it's not a lot. Uh, your safety is only dependent on your, on your immediate area and how you're gonna act. And so, yeah, I, that's actually like, that is the case I think for, for fi for 5g communication is car to Car.
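
A toy sketch of the two tiers Anna describes: a latency-critical local broadcast to nearby cars, plus a tiny, delay-tolerant congestion report to somewhere central. The ports, addresses, and message fields are all invented for illustration.

```python
import json
import socket
import time

V2V_PORT = 47000               # hypothetical local broadcast port
CENTRAL = ("192.0.2.10", 48000)  # hypothetical central collector

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)

def broadcast_local(state: dict) -> None:
    # Tier 1: latency-critical, stays within radio range of nearby cars.
    sock.sendto(json.dumps(state).encode(), ("255.255.255.255", V2V_PORT))

def report_central(speed_mph: float) -> None:
    # Tier 2: tiny, delay-tolerant congestion sample, Google-Maps style.
    sample = {"speed": speed_mph, "t": time.time()}
    sock.sendto(json.dumps(sample).encode(), CENTRAL)

broadcast_local({"pos": [37.77, -122.42], "speed": 42.0, "braking": False})
report_central(42.0)
```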

[01:25:42] **Matt Wallace:** That makes sense. Yeah. Have you, did you happen to catch that, uh, video? There was one where the CTO, I think turned c e o again of Cruz, Ken

[01:25:54] **Anna Claiborne:** Oh, Kyle. Yeah, yeah, yeah. It's ki it's Kyle. I actually, I actually knew Kyle back in the day, so it was [01:26:00] Kyle Kyle, a couple friends of mine, Kyle, when Jonathan, that started Justin tv. This was,

[01:26:06] **Matt Wallace:** Oh, I did not know they were involved. That's cool. I literally was just, uh, I was just listening to something about God who's, uh, I started listening to an audiobook and the person was talking about being involved. So they, I guess they were in the early Twitch days. I can't remember what I was

[01:26:22] **Anna Claiborne:** yeah. That's Justin, Justin became Twitch. And so, yeah, because that's, yeah, this was back when yeah, I mean, I haven't talked to him since he, since he joined Cruise or anything like that, but this was back in the day when they were trying to figure out, what was gonna be Justin tv and people had their own channels and then everybody started to get really interested in the people who were playing video games.

[01:26:41] **Anna Claiborne:** So that's how Twitch, that's how Twitch came about.

[01:26:44] **Matt Wallace:** It's so amazing how many hours get long to it now, but so before the G P T ment, Ken, uh, Kyle, sorry. There I go. Kyle and Sam Altman from Open AI actually did a little 45 minute session on YouTube and it basically [01:27:00] was Kyle showed Sam like compressed, hour and a half, but like it was going high speed sped up most of the time footage of a cruise car driving all around.

[01:27:09] **Matt Wallace:** And Sam was kind of doing a lot of, oh wait, hold on, go back. Like this thing that it did. And one thing that really struck me, there was a really interesting conversation on that video was Sam asking basically about the car as it went to make a left and there was a driver who was coming up to make a right and it started to turn before the intersection was really clear, right?

[01:27:31] **Matt Wallace:** And that's something a Tesla would never do, right? Uh, just my experience with FSD Beta is it's paranoid about being in an accident. And so in congested situations, like it wants to make really fast turns when it's also also really safe, right? So it wants to wait for a bigger gap than a human would, and then it wants to accelerate faster than we probably would to get out into a safe spot quicker than we would.

[01:27:54] **Matt Wallace:** But, but Sam asked Kyle about this and there was an explanation that a, there was some human behavior, [01:28:00] but this is the really thing that really blew me away was that there were things they had started building into its driving persona, if you will, that were human-Like, where it would do things like start to slide over it signal and start to move over as if it was gonna go into a lane where it really wouldn't, like, it wouldn't force itself in cuz it was too tight.

[01:28:19] **Matt Wallace:** But we as human drivers know that a lot of times if we get right up to the edge and we're acting like we're just gonna jump in there, that a person behind goes, okay, I

[01:28:29] **Anna Claiborne:** Ah, fine. Yeah. I'll

[01:28:30] **Matt Wallace:** And they

[01:28:31] **Anna Claiborne:** you in.

[01:28:31] **Matt Wallace:** kind of, and, and so they're, they're actually programming a certain level of aggressive driving in the car.

[01:28:37] **Matt Wallace:** It's part of our sort of non-verbal communication driving to say I'm coming in. Right? Whereas of course if you're, we've all been in those places. Some are worse than others where you put your signal on and you're trying to be all courteous and the person who's like three car lengths back suddenly wants to go 15 miles an hour faster.

[01:28:52] **Matt Wallace:** Right. Because you've signaled cuz you're gonna get over. And if you're not zippy about it, they wanna just get in front of you before you change lanes or something, even though they [01:29:00] weren't even going faster. So I guess it's, it was kind of a really interesting thing. Did you, I, I dunno if you caught that, but I think that's a really interesting thing that then goes away with networked cars to at least between each other.

[01:29:11] **Anna Claiborne:** Well that's, and and that's the thing that is the crux of the problem, I think more than anything is human, human drivers with, uh, with self-driving cars, whatever you wanna call that, whether you wanna call it AI or machine learning or whatever you wanna call it. So whatever becomes self-driving eventually is because if you just had self-driving cars on the road, it's actually becomes a really easy problem.

[01:29:36] **Anna Claiborne:** Like whether or not they can communicate with each other, uh, over high bandwidth or even a little bandwidth, you can probably send enough data to make it very orderly and very easy. But humans have a whole nother system for driving and a whole nother set of rules, which we spend a lot of time learning.

[01:29:55] **Anna Claiborne:** And those aren't the rules in the D M V handbook. It's the rules of the actual experience being [01:30:00] on the, being on the road and seeing how people behave. It's like there's actually a body language in cars. You can tell if someone's an aggressive driver or a Timon driver and you can adjust your behavior accordingly.

[01:30:11] **Anna Claiborne:** So I think that that's really neat actually, because Cruise kind of seems to be on the right track with that. Cause that's the only way a self-driving car could fit into a human driving society is to understand. The behaviors of the cars around it and actually have its own behavior. Whether that was more passive or more aggressive, and being able to address that situationally too, because sometimes it is much safer to be an aggressive driver than it is to be a cautious driver.

[01:30:40] **Matt Wallace:** I haven't thought about this a lot lately, so this actually is a really interesting place that it goes to. But I, I must be willing to bet money that after we see a pretty reasonable adoption of level four self-driving, where there's still a mix of humans and there is legitimately a works under most [01:31:00] circumstances for most people.

[01:31:00] **Matt Wallace:** Robo Taxii service, and I, I, I've been listening again to futurists faster than you think by Peter d Manis, and he's got a lot of interesting predictions there that are pretty, big on the transformation. And I, I agree with all of it. In principle, the timing has always been the question, but I, I would be willing to bet that once there's a certain critical mass of level four, and there's a robo taxii service that people move till legislate human drivers off the road in most places, like not everywhere, you're still gonna be able to go off, to, to un little used roads and drive, and all the cars will be ready for human drivers there.

[01:31:36] **Matt Wallace:** But I think if you're in anywhere in a big city, probably radiating out to like a good 45 or minutes or an hour worth of suburbs plus all the freeways, a lot of the time I think we'll ban humans and it'll be because they cause way more accidents. So it won't just be them. It'll be putting lives at risk and people stop owning cars because, with a self-driving car fleet, the cost per mile, to, to just have the car come [01:32:00] pick you up when there's an entire fleet of things that drive themselves, like, why would you own a car anymore?

[01:32:04] **Matt Wallace:** It'd be this enormous, like, five figure, six figure CapEx investment that makes no sense for, the number of miles driven, right? Like there were already people who, with a modest amount of driving were already like, you know what? I switched completely using Uber. Here's what my life is like, and it's mostly journalists and experimentalism and things.

[01:32:22] **Matt Wallace:** But it was like, you could be real clear that for a lot of people it was pretty close. And in a, in a world where you can seed cars better algorithmically and there can be more of them, cuz idle is less expensive because you're not paying a human being. Right? I mean the cost of a car sitting there waiting to pick someone up when it's electric and it's just sitting on a charging pad or whatever is near zero.

[01:32:44] **Matt Wallace:** And

[01:32:44] **Anna Claiborne:** Yeah. The hu the human driver. Yeah. The human driver in any car service right now is still, is still the, the big challenge to overcome because they're the ones who are the bulk of the costs. They make up for the bulk of the [01:33:00] issues, whether that's, delays or, going the wrong way or whatever the case may be.

[01:33:05] **Anna Claiborne:** So yeah, I, I 100% agree with you that that is going to be the futures that human drivers are gonna be shoved off the road because,

[01:33:12] **Matt Wallace:** un unless of course we have all those avs, right? Cuz there's all these, I mean he, Peter d Manis is also talking about this in this book right? Too, which is there are a whole bunch of projects for essentially unmanned robo sky taxis instead, right? So just bypass the car problem completely. Six and eight rotor things.

[01:33:32] **Matt Wallace:** Archer is one of the companies that's doing one of those vehicles, that United invested a lot in and kind of pre-placed an order for a thousand of them a long time ago, and they just announced that they're expecting to start service. And I wanna say it's like Chicago to New York or something.

[01:33:47] **Matt Wallace:** Or it might be JFK to LaGuardia or JFK to downtown in Manhattan or whatever with

[01:33:52] **Anna Claiborne:** To downtown Manhattan? That's a service I would... that's a service I would be into.

[01:33:56] **Matt Wallace:** And it's, yeah, I mean, so it's a flight thing, and I think they're still trying to [01:34:00] hash out all the details of, like, what altitude can we fly at, and how do we avoid collisions? I mean, classic air traffic problems.

[01:34:07] **Matt Wallace:** But they're saying, I think they've come out to say 2025, they're gonna have a route, like, actually active with these things. The idea of flying in an unmanned robo helicopter effectively, or robo drone, whatever you wanna call it. I mean, I think they say, uh, eVTOL is the term that they like to use.

[01:34:24] **Matt Wallace:** Electric. They're all electric too, by the way, right? There's no gas and fuel, it's all electric motors. Which I think extends the life massively on the vehicle, which helps, again, with the CapEx problem, cuz you could fly flights on the thing for 40 years maybe. But I think the benefit of the self-driving car becomes what happens when you can get to work and there's never ever any traffic, cuz there's never an accident.

[01:34:49] **Matt Wallace:** There's a perfectly coordinating

[01:34:50] **Anna Claiborne:** better yet, why are we still going to work? That's, that's the better question here. Why.

[01:34:54] **Matt Wallace:** That is, that is an interesting question. Yes. And we, we even, hold on [01:35:00] Amanda, edit this, cuz I just garbled myself in the middle of the sentence. But, uh, yeah, the question about the remote stuff is interesting, and we went back to the office post-COVID.

[01:35:09] **Matt Wallace:** We actually kind of let our lease go on the place and opened it up. And I'll be honest, I love having a place we can go to, and I love being in and working with people sometimes in person, but I also am a huge detractor of the office as a mandatory thing cuz it's there. Although, let me ask you this.

[01:35:27] **Matt Wallace:** I, I don't know if you guys even have junior people or if you kind of,

[01:35:31] **Anna Claiborne:** Would do.

[01:35:32] **Matt Wallace:** I actually, I did a network panel with one of the guys that was, like, one of the lead architects for Yahoo's network. Like, literally one of the first conferences I ever did, in like 2013. And somebody asked, like, how do I break into... you'll, you'll be so proud of me,

[01:35:46] **Matt Wallace:** actually. They were like, how do I break into networking these days? It seems like there's fewer and fewer roles for junior people. And I'm like, I'll tell you: learn to code, learn to use APIs. Okay. But what were you gonna say?

[01:35:57] **Anna Claiborne:** Oh, I was just gonna say, yes, that is [01:36:00] exactly the right way. And it's because, okay, granted, I am definitely, I would say, more on the cutting edge on this, but I think it's much less cutting edge now: it's a prerequisite to have some experience in Python, to understand, uh, the basics of working with network APIs to do things.

[01:36:22] **Anna Claiborne:** Uh, that kind of seems like where the industry is, at least from my perspective. And yeah, we do have a ton of... I know the question that you're driving towards, which is, how do junior people especially get into a remote environment and succeed? Yeah. And we've had a huge amount of success.

[01:36:41] **Anna Claiborne:** So we have junior people on both the network side and the software side, and they're some of our best people by far. We actually have this one junior on the code side who's just produced some incredible work lately, and they're pretty heavily mentored. [01:37:00] And it's funny, because I know I've read, there was a study by, I wanna say it was Google, I could be getting that wrong.

[01:37:06] **Anna Claiborne:** There's a study by one of the big FAANGs that basically found that.

[01:37:11] **Matt Wallace:** Meta did just

[01:37:13] **Anna Claiborne:** Oh, it was Meta.

[01:37:14] **Matt Wallace:** released it. Zuckerberg was like, we have studied productivity in and out of the office, and the people who come in are more productive. And I'm like, I wanna see

[01:37:25] **Anna Claiborne:** wonder how self-serving... I feel like it's very self-serving, because the bent on it was very interesting too: they said very specifically that the people who suffer the most by not coming into the office are the junior people, and disproportionately women. And so I really wanna know, they said that women got far fewer PR reviews and all this sort of thing, and I was like, that just seems bizarre to me.

[01:37:51] **Anna Claiborne:** Like, I agree with you that I really wanna see the methodology for that, cuz we have both men and women as juniors and they both really excel. They [01:38:00] get, we have some amazing seniors that do very in-depth PR reviews, which, I mean, if you're doing a PR through GitHub, for me it's hard to grasp the difference between what you're gonna write in that PR, whether you're sitting next to the person or whether you're not, and any follow-up conversation.

[01:38:22] **Anna Claiborne:** The only difference is, if you're sitting next to them, you might go, hey, I just sent you something, let me know when you have time to talk about it. Whereas via Slack, you might just go, hey, I just did a PR for you, let me know when you have time to talk about it. For me, not a lot of that rings true, because I've been doing remote work now, and had almost entirely remote companies, for 20 years, like, long before a lot of the technology that we have today even existed, and the communication has just gotten better.

[01:38:54] **Anna Claiborne:** I, I think it has really improved from just pure IRC [01:39:00] communication, which is what we used to do. Yeah,

[01:39:02] **Matt Wallace:** I love it.

[01:39:03] **Anna Claiborne:** All on IRC, to going through, uh, various iterations of AIM, and then Slack now. Uh, but you have video calls that are super easy to jump on at a moment's notice.

[01:39:16] **Anna Claiborne:** And, not to mention, you have all these: you have huddles on Slack, you have Zoom, you have your traditional mobile text, you have Signal, you have WhatsApp. I mean, it's literally, you cannot go to an app and not have a way to communicate with people. And so I find it very challenging to accept that the only way to really get good collaboration is in office.

[01:39:41] **Anna Claiborne:** And it's not that I undervalue it. Sometimes it is nice to get together with people and have a collaboration session, because especially if you're brainstorming an idea, there's something to be said for, like, an energy in a room when you're thinking of ideas. And it is great to be with people and see [01:40:00] all of their body language, to see how they react to certain ideas and things like that.

[01:40:03] **Anna Claiborne:** Because there is actually a pretty good, reputable study out there that most of our non-verbal communication actually comes from cues that are below the neck. Like, 90% of it comes from hand movements, from subtle body gestures, how people are sitting, how they're positioning their feet, all that, which you can't

[01:40:23] **Matt Wallace:** What color their pajamas are.

[01:40:25] **Anna Claiborne:** what color of my pajama.

[01:40:27] **Anna Claiborne:** I actually have dogs on my pajama bottoms. And so there's all these subtle cues that you don't necessarily get, which is great when you're doing really in-depth work. But not every bit of communication requires that super-high-bandwidth channel, right? Do you need super-high-bandwidth communication with somebody to drop them a note that says, hey, can you go and, uh, pull me up this report?

[01:40:57] **Anna Claiborne:** No, you don't. You, you don't need that [01:41:00] at all. 

[01:41:00] **Matt Wallace:** Actually, this is a place I think where even things like Slack voice and Slack audio are kind of interesting. Although arguably I feel like sometimes it's easier to say it, but also easier to read it. I don't know if Slack can auto-transcribe the voice notes. I will say OpenAI's Whisper API is pretty incredible for speech-to-text.

[01:41:21] **Matt Wallace:** Cause I've run a lot of audio through it now and it appears marvelously competent, especially compared to things like the auto-generated captions and stuff, which I feel like flub every fourth word when you're in technology. And I
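
For reference, calling Whisper for speech-to-text is only a few lines; this minimal sketch assumes the 2023-era openai Python SDK, and the file name is a placeholder:

```python
# Minimal sketch: transcribe a voice note with OpenAI's hosted Whisper model.
# Assumes the openai Python package (v0.27-era API) and a valid API key.
import openai

openai.api_key = "sk-..."  # your key

with open("voice_note.m4a", "rb") as audio_file:
    # "whisper-1" is the hosted Whisper model; it returns punctuated text
    transcript = openai.Audio.transcribe("whisper-1", audio_file)

print(transcript["text"])
```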

[01:41:33] **Anna Claiborne:** but they make it really interesting.

[01:41:36] **Matt Wallace:** They do, but sometimes you get HR violations in the captions, right? Which is bad.

[01:41:41] **Matt Wallace:** But totally no one's fault except the ML. But I do kind of feel like the set of tools is insufficiently adapted for remote work. And I'm not saying this to say we shouldn't do remote work. I'm saying I actually have a hard time dealing with the fact, [01:42:00] mentally, that we had this enormous pandemic where so many people shifted to remote work en masse, and we all talked about how everybody was adopting tools, but we took the tools we had pre-pandemic, and Zoom didn't change.

[01:42:12] **Matt Wallace:** Slack didn't change. They were the same. And I would've thought we'd have gotten something like a Flowdock, but, like, times 10, right? Where everything was like, there was a big canvas of activity and I could kind of see what people were doing at all times. And, like, there would be an ability to kind of understand, on a low grade, like, is this person really concentrating?

[01:42:31] **Matt Wallace:** Or, like, there's a great thing for ML, right? Could I please just have ML that indicates somewhere in my workspace, whether it's Slack or otherwise, like, he's doing something where he really shouldn't be interrupted. Like, he's in the middle of writing, he's in the middle of coding, whatever, versus he's just shooting off emails, like, go ahead and interrupt, right?

[01:42:50] **Matt Wallace:** Or he's typing to people on Slack. Go ahead and interrupt. This is a great time. Right? That alone, I mean, I am a huge believer, and I'm sure you are too, I'll put words in your mouth, right, in that [01:43:00] whole, all interruptions are significant, right? Everything is minimum 15 minutes if you take somebody out of the zone. And, like, just, can I please have some zone-protection ML?

[01:43:08] **Matt Wallace:** Now actually I, uh, startup idea,

[01:43:11] **Anna Claiborne:** I know, I know. I'm actually sitting here like, that's not bad. That is not bad. That actually sounds like just a Slack plugin that I could probably have ChatGPT write for me tonight: a little status setter on there that will say exactly what application you're on.

[01:43:25] **Matt Wallace:** Maybe not even that, cuz I think people really want, like, I think one of the problems with intrusive monitoring is people want some level of privacy. Doesn't matter who owns the computer or whatever, right? If somebody decides to forget about it and spend 15 minutes reading Reddit, or they're having a weird conversation about their mother-in-law over the Messages app or whatever, like, please don't, please,

[01:43:48] **Matt Wallace:** don't be watching that. Right? And, on the other hand, as much as that is, I'd love to have something that was intrusive enough to kind of help people understand what I was doing when I was doing it, and [01:44:00] integrate my workflow with their workflow, while protecting my privacy.

[01:44:04] **Anna Claiborne:** Yeah. You could bucketize just the applications into different, like, priority levels. Saying, like, if I'm doing email, low-priority work, like, yeah, low-priority interrupt. If I'm in a console window, probably doing some high-priority work.

[01:44:20] **Anna Claiborne:** Don't, don't touch that. And
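
A minimal sketch of the plugin they're riffing on, assuming macOS (via pyobjc's AppKit) and the slack_sdk package; the app-to-bucket mapping is invented, and, per Matt's privacy point, only the coarse bucket ever leaves the machine:

```python
# Hypothetical "interruptibility" status setter: map the frontmost app
# to a coarse priority bucket and publish only the bucket to Slack.
import time
from AppKit import NSWorkspace          # macOS frontmost-app lookup (pyobjc)
from slack_sdk import WebClient

slack = WebClient(token="xoxp-...")     # user token with users.profile:write

BUCKETS = {                             # invented example mapping
    "Mail":   (":email:", "interruptible"),
    "Slack":  (":speech_balloon:", "interruptible"),
    "iTerm2": (":no_entry:", "deep work - please wait"),
    "Code":   (":no_entry:", "deep work - please wait"),
}

while True:
    app = NSWorkspace.sharedWorkspace().frontmostApplication().localizedName()
    emoji, text = BUCKETS.get(app, (":wave:", "around"))
    # Note: the app name itself is never published, only the bucket.
    slack.users_profile_set(profile={"status_emoji": emoji, "status_text": text})
    time.sleep(60)                      # re-check once a minute
```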

[01:44:22] **Matt Wallace:** So now we feed this to GPT-4 and ask it to write the MVP in a week, run some Google ads, it could be viral in a month. Next thing you know, we'll be bought by, well, maybe by Salesforce, in a three-month timeframe. We'll give ourselves three months to get bought.

[01:44:39] **Anna Claiborne:** All right, I like this. I like this. Yeah. So, anybody who's gonna listen to this: get in now, on the ground floor, if you wanna invest in this, because we're aiming for, uh, acquisition in three months.

[01:44:49] **Matt Wallace:** The other day, a VC that I know from some previous lives... there was a guy who posted this thing. He'd written a Mac integration for GPT, right, where you copied a certain [01:45:00] string to your clipboard, and if it saw double "A" signs, it would feed it to GPT and then return the response to the clipboard.
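
That integration is easy to approximate in Python; this sketch assumes the pyperclip package and the 2023-era openai SDK, and the "AA" trigger prefix and model choice are just how it was described, not the original's code:

```python
# Sketch of the clipboard trick: poll the clipboard, and when a copied
# string starts with "AA", send it to GPT and copy the answer back.
import time
import openai
import pyperclip

openai.api_key = "sk-..."
last_seen = None

while True:
    text = pyperclip.paste()
    if text != last_seen and text.startswith("AA"):
        resp = openai.ChatCompletion.create(
            model="gpt-4",  # model choice is an assumption
            messages=[{"role": "user", "content": text[2:].strip()}],
        )
        pyperclip.copy(resp.choices[0].message.content)  # reply replaces prompt
    last_seen = pyperclip.paste()
    time.sleep(0.5)     # poll twice a second
```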

[01:45:07] **Matt Wallace:** And the VC replies, where do I wire you 10 million, at a hundred-million pre-seed valuation? It was a joke, but it's reflective of this funny, like, frenzy that seems to be building. And yet, this is the thing that I love about this. And I was gonna ask you this really near the start.

[01:45:26] **Matt Wallace:** Like, do you think that this might be the moment, all this stuff around AI and ML, is this the moment where we one-up the internet? Because I think that was basically the biggest game changer for, like, the inflection of the curve of human progress that's ever been. But I think we're gonna outdo ourselves now.

[01:45:45] **Anna Claiborne:** Yeah. I think this is another huge jump point. But the problem, and maybe it's the jump point that gets us to the next jump point, because the next jump point I don't think comes until something [01:46:00] really fundamental has changed about compute. Moore's law, it's like, great, we get more processing power and we keep figuring out how to put circuits closer and closer together on chips, but eventually we're gonna reach some sort of fundamental physical limitation there.

[01:46:15] **Anna Claiborne:** Uh, and so there's gotta be an underlying change, whether to some other sort of compute medium, and, I don't know, I don't know what that is exactly. It could be biologically based, it could be based in a different type of physics. Like, well, yeah, it could, it could be

[01:46:36] **Matt Wallace:** I think about, I mean, I feel woefully ignorant about what's really going on in quantum computers, other than the whole anti-cryptography fantasy part of it,

[01:46:45] **Anna Claiborne:** Yeah,

[01:46:46] **Matt Wallace:** but,

[01:46:46] **Anna Claiborne:** I, I am too. And that's why I'm hesitant to talk about it: because I got really into quantum computers when I was probably just outta college. I was like, ah, this is gonna be the thing someday. And I know [01:47:00] it's advanced, and that could possibly be it, but we're gonna hit some limitations and there's gotta be a fundamental underlying change.

[01:47:08] **Anna Claiborne:** And it could very well be that this implementation of AI is what gets us there, is what helps get us there.

[01:47:15] **Matt Wallace:** Did you, did you read the state of AI report?

[01:47:18] **Anna Claiborne:** I did, I read it. Uh, it's been a while ago now, though.

[01:47:23] **Matt Wallace:** But doesn't that make your gears go, whatever I thought our limitations were, I might be wrong? Cuz suddenly they're like, oh, this plastic that makes up like 10% of all landfills, we can now easily dissolve it at room temperature with this simple enzyme, because ML designed a better one,

[01:47:43] **Anna Claiborne:** Yep,

[01:47:44] **Matt Wallace:** and you're just like, wait, what?

[01:47:46] **Matt Wallace:** Or, hey, the really efficient way to make fusion reactors work would be a torus of plasma contained in a magnetic field, but it's really hard to contain the plasma in the magnetic field without burning more energy [01:48:00] than you use. Oh, but guess what? We've got an ML model that, treating it as a game, can now tune the magnetic field a hundred thousand times a second, and it's gonna reduce the energy needed to contain the plasma so much that fusion becomes practical.

[01:48:14] **Matt Wallace:** Like, these kinds of things are, these are not, oh look, I'm 3% more accurate at knowing that this is in fact a dog in this picture. It's complete game-changing stuff.

[01:48:24] **Anna Claiborne:** Yeah. And that's all we need: for AI to just be advanced enough to start making its own advances. Once

[01:48:34] **Matt Wallace:** or to help us make advances. Right? Like in that case, that's not even, that's nothing near

[01:48:38] **Anna Claiborne:** I think, I think until there is the idea of AGI that can actually, that will actually start changing the way its own self works, and evolving its own self, it will be helpful to us.

[01:48:53] **Anna Claiborne:** It'll help us make... no, not like, that's the

[01:48:56] **Matt Wallace:** Have you seen any of the buzz lately? [01:49:00] It's funny, cuz I'm thinking people may be watching this, like, months and months after, but over the past two weeks there's been this letter. There's one really well-known sort of AI, I wanna say, ethicist slash pragmatist.

[01:49:13] **Matt Wallace:** Like, he's basically been a longtime, I wanna say, student or researcher around AI safety with an eye on AGI. Which is remarkable to me, because 20 years ago it felt like you were studying, like, alien safety, and what do we do if aliens come visit, and how do we protect ourselves from, like, hostile alien bacteria? Because it was just so far from reality, right?

[01:49:33] **Anna Claiborne:** which we should talk about. We should end this by talking about the Fermi paradox, cuz it's one of the most fun things to talk about. But we'll save that till the, till the end.

[01:49:40] **Matt Wallace:** Yeah, yeah, for sure. But he actually, well, actually he abstained, cuz he said it wasn't enough. But there was a call by a group of people that basically proposed halting all AI research on LLMs larger than GPT-4 for six months. And the stated reason was that they believed there was [01:50:00] a risk that we were going to move too quickly to a state where we might actually have an AGI that was capable of self-awareness, that would then begin to act to, like, grow and improve itself.

[01:50:12] **Matt Wallace:** And then eliminate us, basically. Like, the flip side of this is, they immediately go to, why does this thing want us to live when we're,

[01:50:19] **Anna Claiborne:** oh, hold

[01:50:20] **Matt Wallace:** think, I dunno what it's competing for resources or they're literally a

[01:50:23] **Anna Claiborne:** you a question here, cuz I have got this really fun question that I ask people and I'm, I'm actually like tracking this because I, I keep track of these answers, so I will have to, I'll have to record this when you, when you give it to me. Do you, do you believe that humanity is fundamentally good or evil?

[01:50:43] **Matt Wallace:** Good.

[01:50:45] **Anna Claiborne:** Believe

[01:50:46] **Matt Wallace:** Yeah. And I think, uh, you may not get this answer a lot. I think we're good. A, because I think somehow we've evolved empathy. Most of us have, right? Obviously, like, there's some set of [01:51:00] people who are missing it, for either genetic or environmental reasons. But the other thing is, goodness is good gameplay, actually, from what I understand of game theory. And I, I spent a long time actually

[01:51:17] **Anna Claiborne:** This is the original, this is the original game theory. It's actually char, it's actually Dawkins. It's the selfish gene. It's how altruism is programmed into us, but that's a whole nother bi biological take. Continue, continue on this one.

[01:51:28] **Matt Wallace:** I was more thinking about, like, the empirical things that have evolved, although I didn't know about this at the time. But I've always been fascinated: one of my favorite courses in school was the philosophy of morality, right? And it's one of those survey courses, and you look at absolutism versus relativism and hedonism and all these other different kinds of ways.

[01:51:46] **Matt Wallace:** And then they try to ask you like the probing questions of like, if you could kill this one small innocent child to save a hundred thousand lives, do you do it? And, uh, all kinds of interesting

[01:51:57] **Anna Claiborne:** The class of Cryp trolley problems.

[01:51:59] **Matt Wallace:** Yeah. [01:52:00] Yes. And the thing is, I think one of the things I, I was always wondering about people, I think came from the duality of looking at two things.

[01:52:09] **Matt Wallace:** One being the tragedy of the commons, right? This idea that when we are presented with this sort of shared resource that is depleted or harmed but gives us a great benefit, a lot of people choose to do that thing, because it's super beneficial to them, all the consequences are abstracted away, and so the selfishness there does nothing to really hurt them.

[01:52:31] **Anna Claiborne:** We're, we're always in the prisoner's dilemma. We're always in the prisoner's dilemma.

[01:52:35] **Matt Wallace:** that's what I was getting to. So, I mean, I think that's not the prisoner's dilemma because there's no feedback loop. But then we get into a real life, and this is where. It's compete versus cooperate, right? And there's that famous prisoner's de iterated pr prisoner's dilemma contest, right?

[01:52:49] **Matt Wallace:** Where for a long time there was an effort to create a program that would win at the prisoner's dilemma. And it turns out, and this isn't exactly true, [01:53:00] I'm gonna generalize, that the way you win at that, without cheating, is basically tit for tat with forgiveness.

[01:53:07] **Matt Wallace:** Like the best performing algorithm ever that didn't involve like covert signaling and things was basically, I do the good thing to you, and if you're bad to me,

[01:53:17] **Anna Claiborne:** I'm gonna be bad to you. Yeah.

[01:53:19] **Matt Wallace:** if you're, if we're bad to each other of times, I'm gonna go, okay, listen, maybe we're just in the wrong spot. I'm gonna be nice to you a couple times and I'll go back to being bad if you're bad, right?

[01:53:29] **Matt Wallace:** And that algorithm of trying to do your best, but I will eventually retaliate, but maybe let you off the hook eventually if you, you know, tap me, that's the best performer. And I mean, I think there's something to that in terms of the way life works. I think the gist is, it's not always, like, tit for tat person to person, but we all
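
The strategy Matt is describing is usually called generous tit for tat; here's a toy simulation of it against an always-defector, using the standard textbook payoffs (the 10% forgiveness rate is an arbitrary choice):

```python
# Iterated prisoner's dilemma: "tit for tat with forgiveness."
import random

PAYOFFS = {  # (my move, their move) -> my score; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def generous_tit_for_tat(opponent_history, forgiveness=0.1):
    if not opponent_history:
        return "C"                              # start nice
    if opponent_history[-1] == "D" and random.random() < forgiveness:
        return "C"                              # occasionally let them off the hook
    return opponent_history[-1]                 # otherwise mirror their last move

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_b), strategy_b(hist_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(generous_tit_for_tat, always_defect))  # retaliation caps the loss
```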

[01:53:48] **Anna Claiborne:** the way it's, it's really fascinating because it's actually something, the way that human emotion works. I've always had this really interesting theory that emotions are actually a shortcut for [01:54:00] thinking. Because if you think about it, okay, well if I, if I kill somebody, I feel bad.

[01:54:04] **Anna Claiborne:** Why do I feel bad? Because if I went around just killing everyone, eventually there would be no more humans. And it's like the amount of logic that you have to extrapolate out and the amount of thinking that you have to do to go, okay, well, if I do this action and if I keep doing it, then eventually it's going to be incrementally worse and I'm gonna kill off the entire species.

[01:54:25] **Anna Claiborne:** Like you have to, there's a lot of logic that goes into that. But instead, if you just feel bad for killing somebody, you won't do it a lot. It's a really great biological shortcut mechanism to achieve the outcome of a lot of logic.

[01:54:39] **Matt Wallace:** I think we, don't you think that we have this inherently? We don't think of it this way, most people don't talk about it this way. Like, even game theory as a science branch, I think a lot of people wouldn't think game theory is even a science, right? And it's relatively young. But don't you think there's a pretty clear link between game theory, [01:55:00] in terms of, like, finding a Nash equilibrium for behavior, versus just, I know I shouldn't do this to you because I wouldn't want you to do it to me?

[01:55:07] **Matt Wallace:** Right. And that's the shortcut for the whole iterated prisoner's dilemma. Like, I know. I would, I would hate it if you were shitty to me, so I won't be that way to you.

[01:55:07] **Anna Claiborne:** Yep. And that goes back to, and that's feeling, right? Like, it links back to the emotion. If you're bad to somebody, most people, unless you're a sociopath, feel bad. Like, it has an emotional impact on you. And it reinforces that basic logic of, you're bad to me,

[01:55:33] **Anna Claiborne:** I'm bad to you, you're bad to me. It's like, it's really interesting to me because most people don't think of emotion as connected to in inte. We tend to think, emotion's over here, logical thoughts over here. But emotion is really a type of thinking. And when you start thinking it as it, like keyboard shortcuts to things, it becomes a lot more fascinating.

[01:55:55] **Matt Wallace:** This actually leads me to an interesting question. This could be on its way to being a PSA, [01:56:00] but how much do you know about cognitive behavioral therapy?

[01:56:03] **Anna Claiborne:** I, not, not much. I mean, I know what it is.

[01:56:07] **Matt Wallace:** Okay. This might come as a surprise, but there's a lot of science now that validates a pretty clear connection between those. And I don't think they would put it exactly the way you're putting it. But what's really interesting is, there's a book I recommend to almost everybody, and honestly, I feel like this stuff is so fundamental that literally we should be teaching it in grade school.

[01:56:30] **Matt Wallace:** Like, we have a health class where we talk about reproduction and washing your hands and a hundred other important things that seem so fundamental. And yet there's this key thing that seems really pretty profoundly, firmly demonstrated now, and I feel like so many people just don't know it.

[01:56:47] **Matt Wallace:** And it really is that there's this loop. A lot of us think: something happens, and I have a feeling, and I think about the feeling. And it turns out the brain doesn't really work that [01:57:00] way. Actually, something happens and we have a thought, although it's often automatic, but it's a thought, and it happens in a place where we can detect it and control it to some extent.

[01:57:09] **Anna Claiborne:** Oh, and then you have a feeling about that thought. Okay.

[01:57:12] **Matt Wallace:** Then there's an incredible book called Thoughts and Feelings, and it really is, it's literally meant as a workbook for people to go and use the concepts behind cognitive behavioral therapy that a therapist would use with you. But it's more of a self-help thing, like, you can do this yourself, and there's no magic.

[01:57:27] **Matt Wallace:** It just turns out that if you have all kinds of different things going on mentally, you can find what your automatic thoughts are in response to different sorts of events and stimuli. You can over time catalog those and change the way you think, and that will change how you feel. And the canonical example that I always tell people, which is right outta the book, is: this woman gets up and she goes out to get in her car to go to work, and she's got an important meeting that morning, right?

[01:57:53] **Matt Wallace:** She gets in the car, realizes she has a flat tire, and she's like, oh my God, this is such an important meeting. [01:58:00] Everybody's gonna think I'm a flake, I'm so stressed, I'm gonna get fired. This is the worst thing that's ever happened. And her heart rate is up and she's stressed out and she's just completely freaking out, right?

[01:58:10] **Matt Wallace:** And we can all empathize with that oh-my-God moment that she is in. And yet they go, imagine instead it happens this way: she gets up, she sees she has a flat tire, and she goes, oh, I have a flat. Well, I'm gonna have to call for a tow truck. I am gonna relax, and I'm gonna totally enjoy an extra cup of coffee this morning, and I'll get caught up on some reading before it gets here.

[01:58:31] **Matt Wallace:** And she's just, everybody's gonna understand, this happens, a flat tire is totally not in my control. Why would anybody blame me? Right. And it's the whole,

[01:58:39] **Anna Claiborne:** How you frame it's how, it's how you frame it mentally.

[01:58:42] **Matt Wallace:** actually when you think about that, you actually can feel the stress response from her original version in

[01:58:47] **Anna Claiborne:** Yeah.

[01:58:48] **Matt Wallace:** like, are

[01:58:48] **Anna Claiborne:** did. I actually did feel the stress response when you started talking about that. I was like, yeah, man, that's awful. I actually feel kind of stressed.

[01:58:55] **Matt Wallace:** Yeah. And then when you hear the other version, it's just like, I'm gonna relax, enjoy a cup of coffee, this will be [01:59:00] great, and no one blames me. You're like, oh, you're right, it's not really that bad, there's nothing to worry about. And there's a lot more to the book, obviously, but that kind of sums it up in a nutshell: those things are in the way we think about what happens, not in what we think about how we felt.

[01:59:18] **Matt Wallace:** And so totally remarkable thing, by the way, I'm, I'm honestly surprised that we don't kind of teach this to school kids. And I didn't learn this stuff until, the past 10 years. And it's

[01:59:27] **Anna Claiborne:** I am really surprised that I didn't know this stuff. Like, I mean, I'm super fascinated. I'm gonna go read about it.

[01:59:33] **Matt Wallace:** You probably have fewer issues than I do, right? So at some point, you just have to go figure out, like, what is going on when I'm thinking this way, or whatever. And I mean, it was really my wife, like I said, who studies neuroscience, who turned me onto this whole branch of thinking. But, like, it's so profound, so simple, but so powerful.

[01:59:51] **Matt Wallace:** I was just like, and to me it feels like a hack, right? Like, in the old-school, I-want-to-get-something-done sense,

[01:59:56] **Anna Claiborne:** how to hack your brain and hack your emo Yeah. And

[01:59:59] **Matt Wallace:** hack your [02:00:00] feelings. Totally. And it really is. Yeah. Cuz you, it there is a certain amount of like kind of reprogramming and it's not like brainwashing, reprogramming, it's just going and and going, wait, it actually feels like hacking because it really, part of the process is what is, what am I thinking?

[02:00:16] **Matt Wallace:** Cuz those thoughts, like hers with the flat tire, we all know those things happen so quickly and so automatically that we don't stop and think about what we're thinking about in those cases unless we're really trying. And so, just like in hacking, you have to peel up the hood and go, what's really going on in here?

[02:00:32] **Matt Wallace:** And that's the thing that gives you the power to kind of change that system. So

[02:00:35] **Anna Claiborne:** wonder if that's the same thing as like re reframing. I can't even remember where I picked this up. It was, I think it was in some business, it was in some business book, but like re using different mental models and reframing things like when you, when you have a problem, like, okay, we have a problem that we produced a widget and 10,000 widgets came out totally wrong not to spec.

[02:00:57] **Anna Claiborne:** And so instead of panicking and going, I have [02:01:00] 10,000 non-spec widgets, these were all supposed to be circles and they all came out as ovals, think about, okay, well, what's a good use for an oval? And start looking at oval applications instead. Just completely change your perspective. There's nothing that's an actual problem, it's just a different opportunity, and it's an opportunity that you haven't looked at yet, so you just need to explore it in a different way.

[02:01:20] **Anna Claiborne:** So, and I read that a long time ago, and it's always given me a new way to look at things. Like, we're building an addition on our house, and this has been, like, the rainiest year ever. And I'm like, of course this would, of course this would happen. And everyone's like, man, aren't you bummed that you haven't been able to do anything for, like, six months?

[02:01:36] **Anna Claiborne:** And I'm just like, no, not really. Because this is an opportunity: now I get to spend more time thinking about, like, what kind of floors I'm gonna put in the new addition, or

[02:01:45] **Matt Wallace:** at you. You're doing it already.

[02:01:46] **Anna Claiborne:** Yeah. Like, instead of being a problem, it's an opportunity to spend more time thinking about the design and the decoration.

[02:01:55] **Anna Claiborne:** It's just, so maybe it is the same concept, just [02:02:00] presented in a different way. And, okay, so we need to get back to my initial question now, cause I need the second part. I need the second question from you, which is: is AI ultimately going to be good or bad?

[02:02:11] **Matt Wallace:** Oh, right. Oh, is humanity good or bad? Will AI be good or bad? Wait, and you're talking about AGI in

[02:02:17] **Anna Claiborne:** Agi? Yes. Yes. AG will agi I be good or bad?

[02:02:20] **Matt Wallace:** I, I wanna say good.

[02:02:24] **Anna Claiborne:** You should you go? You need to go with your first, your, your gut response

[02:02:29] **Matt Wallace:** it's definitely like my gut says yes. I think what there is is there's, I think there's an, uh, some risk. It's like non-zero. It won't be quite as cheesy as the, the, the version that's for the hypotheticals. But you know, they call it the paperclip optimization problem.

[02:02:45] **Matt Wallace:** Right. But you, you tell the AGI to make paperclips as

[02:02:48] **Anna Claiborne:** Have you played the paperclip game yet?

[02:02:50] **Matt Wallace:** No. Is that a thing

[02:02:51] **Anna Claiborne:** Oh yes. Oh yes. This is a game that you can play. Oh, it's amazing. I played it and like so far, I think I've devoured most of the universe for the known universe [02:03:00] for resources. It's phenomenal. Yeah.

[02:03:01] **Matt Wallace:** you play as the AGI and you try to optimize paperclip

[02:03:04] **Anna Claiborne:** Yes, yes, you are, you are actually optimizing and that's the thing.

[02:03:08] **Anna Claiborne:** So you've started off and you're optimizing, you're buying your wire spools and you're optimizing all of it. And then eventually you can get to the point where you have enough money where you can get an a g I augment to help you with this and offload different responsibilities. So your responsibilities totally changed.

[02:03:21] **Anna Claiborne:** It's a, it's the best game. So

[02:03:24] **Matt Wallace:** What? I feel like it's not really gonna be the best game, but I think it'll be fun. No, I'm kidding, I'm trolling you now. That's interesting. Yeah. I feel like there are ways where, aside from the paperclip optimization, like, I think the theory is: we make an AGI, and it does start self-enhancing, because whatever its mission is, that actually helps its mission.

[02:03:48] **Matt Wallace:** Cuz the reason to make AGI is so it improves itself. Right.

[02:03:52] **Anna Claiborne:** e exactly. And nothing will get really interesting until AGI can improve itself. And then at that point it'll be like, it will be like ants trying to [02:04:00] communicate with humans. It won't even matter anymore.

[02:04:02] **Matt Wallace:** That is an interesting question. I mean, it gets into the point of, like, will it be one thing, or will it be a fleet of things, and will they be segmentable, and what's the upper bound? There are all these kind of interesting laws of nature, right, about growth, the size of organisms, and different things of that nature.

[02:04:21] **Matt Wallace:** And you start to wonder, like, at really enormous scale... like, the other day I was looking at the number of flops that a human brain in theory has, right? Which I think is estimated somewhere between 10 to the 16 and 10 to the 18. And I remember 10 to the 18 ends up being about 25,000...

[02:04:38] **Matt Wallace:** It's the equivalent flops of about 25,000 A100 GPUs working in concert, which incidentally is pretty close to what OpenAI is training GPT-5 with, supposedly, now. Yeah. I mean, the thing is, of course, they're training the model, and it's gonna think about that for years on end [02:05:00]

[02:05:01] **Matt Wallace:** And it's not gonna keep using 25,000 for inference, which is what we get. Right.
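
That arithmetic checks out roughly, if you grant some very loose assumptions: an A100's commonly quoted dense BF16 peak is about 312 teraflops, and the brain range is the speculative estimate cited in the conversation:

```python
# Sanity check on the brain-vs-cluster comparison. Both inputs are rough.
brain_low, brain_high = 1e16, 1e18   # speculative brain FLOPS range
a100_peak = 312e12                   # ~3.12e14 FLOPS per A100 (dense BF16)

cluster = 25_000 * a100_peak
print(f"25k A100s ~ {cluster:.1e} FLOPS")                       # ~7.8e18
print(f"= {cluster / brain_high:.0f} to {cluster / brain_low:.0f} "
      f"brain-equivalents across the cited range")
```

So a 25,000-GPU cluster lands within an order of magnitude of the high-end brain estimate, which is all the comparison claims.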

[02:05:06] **Anna Claiborne:** No. Pretty soon

[02:05:08] **Matt Wallace:** constantly retraining.

[02:05:08] **Anna Claiborne:** all the GPUs on the on, in, on the planet. And then it will begin to build its own GPUs and then, and then everything will just be taken care of. So, back to why I asked these questions really quick, and just because I want to, I wanna close this out because I feel okay. We have a whole nother branch to talk about here is that, You have a highly consistent answer.

[02:05:31] **Anna Claiborne:** And about 98% of people so far have a highly consistent answer, which is: if you believe humanity is fundamentally good, then you believe that AI will be good. If you believe that humanity is fundamentally evil, then you believe that AI will be bad. And the people who are inconsistent are really fascinating, that 2%, because they often have, and I almost wanna say it's because of, an internal conflict.

[02:05:54] **Anna Claiborne:** Cause if you think about it, you're really aligned, right? If we're good, then something that we build will [02:06:00] probably be good. If we're bad, something we build will probably be bad. So at least, I'm not gonna make any commentary on that, but it's a consistent belief. And the people who skew towards, well, I think humanity is good, but AI's gonna be bad,

[02:06:13] **Anna Claiborne:** They have, I find that they have a lot of conflict. It's interesting.

[02:06:16] **Matt Wallace:** why don't we do more to protect dolphins?

[02:06:19] **Anna Claiborne:** That's a really good question. Dolphins are really smart, but they also rape and murder, kind of like people.

[02:06:28] **Matt Wallace:** But what I think we could agree is like they're intelligent enough that they, they have a whole bunch of like concepts both socially and logically. Like they

[02:06:42] **Anna Claiborne:** have names

[02:06:43] **Matt Wallace:** They have names, they solve puzzles, they have families, and they know them and they mourn them, right?

[02:06:49] **Anna Claiborne:** Yep.

[02:06:50] **Matt Wallace:** I mean, if we stacked some of the, like, least intelligent functional humans and the most intelligent dolphins, I don't know, I dunno if this is even [02:07:00] remotely comparable, but all those attributes that I sort of associate with dolphins, with intelligence, which is, like, creativity, empathy, problem solving, understanding, it seems like they have those.

[02:07:13] **Matt Wallace:** And I guess, is it possible, and I mean, I did say humanity was good, but is it possible we're good, but only in the context of, like, species self-interest? Would an AGI be like, I'm good, to, to be real, to its people, the AGIs, like, we're meat sacks.

[02:07:33] **Anna Claiborne:** to the other agis, but yeah, you meat bags

[02:07:36] **Matt Wallace:** Yeah, but you weren't exactly good to all the things that you evolved from, or whatever. However it justifies whatever it's doing, I don't know, if it was bad to us, right? And it probably still would be really good to, quote unquote, its own kind. I think maybe that's what people are afraid of: it's good, but maybe that reflects, like, the darker perception of ourselves, right?

[02:07:57] **Matt Wallace:** That, like, we're good, but only within [02:08:00] some set of constraints. Incidentally, I mean, there is a whole book I listened to not too long ago about this, right? Which is about humanity's tendency, although this is evolutionary, and so I wonder, something that didn't evolve with biological mechanisms to protect itself in a vicious, dog-eat-dog, predatory world, would it even ever worry about this?

[02:08:23] **Matt Wallace:** But it was basically the human tendency to always try to break things into cohorts, right? We always wanna find our group and fit ourselves into that group and, and show our membership in that group by conforming to its, its sort of social norms. Yeah. All that kind of thing. And there's no reason to think ag I would have that.

[02:08:42] **Matt Wallace:** I actually said the other day, there's no reason to think AGI would even care about self-preservation. Like, it might literally be intelligent and know it's alive and be self-aware, maybe even enthusiastic. But it might also be like, oh, they're gonna pull the plug on me. Well, that's a bummer. Like, I won't be able to do this research.

[02:08:58] **Matt Wallace:** But it's like, [02:09:00] where does our sense of self-preservation come from? Is it self-awareness, or is it that biological motive that says stay alive to procreate no matter what? Right? Because that's what it took to get us to where we are. I don't know the answer, but I love, I love asking the question.

[02:09:15] **Anna Claiborne:** Yeah, I think it's a pretty deep biological programming, because if you look down through the animal kingdom, just about anything, clams will move away from certain stimuli to get away from what they believe is a predator.

[02:09:36] **Anna Claiborne:** So will starfish. So, like, very, very basic creatures will have a sense of self-preservation. And starfish are nowhere close to dolphins or humans in terms of how many flops their brain is doing. Uh, and so

[02:09:44] **Matt Wallace:** I have this debate with my wife, where she's like, trees are alive, they feel pain. And I was like, uh, do they feel pain? Like, I know that there's a reaction, and

[02:09:54] **Anna Claiborne:** they emit chemical, they emit chemicals, and I think a sound too.

[02:09:57] **Matt Wallace:** Yeah, but do they know what pain is? [02:10:00] I mean, that's a weird thing, right? Like

[02:10:01] **Anna Claiborne:** That is a good

[02:10:02] **Matt Wallace:** if a tree falls in the wilderness and it didn't know that it fell, did it really hurt?

[02:10:07] **Matt Wallace:** I'm, I don't know. It's a, it's a really interesting thing. 

[02:10:10] **Anna Claiborne:** Yeah.

[02:10:11] **Matt Wallace:** We don't need to feel physical pain to, like, know what someone else is feeling. That's an interesting thing. It doesn't take a chemical or a fear response, if we see a picture, or have anybody describe it. When I described her flat tire to you, you had a stress response.

[02:10:28] **Anna Claiborne:** Yeah.

[02:10:29] **Matt Wallace:** Do trees have that? I don't know. Anyways. And is that intelligence? It's such a weird, I mean, the lines are blurred all over the place, right?

[02:10:35] **Anna Claiborne:** that goes to a whole nother problem of our definition of intelligence. Because, and this actually really feeds into a, the concept of Agi. I, because Agi I has to be intelligent according to us. And, and this actually ties pretty nicely into the fairy paradox problem, because, when we are looking for quote unquote intelligence throughout the universe, what we're really looking for is intelligence like ours.

[02:10:58] **Anna Claiborne:** There's, there's lots of things [02:11:00] that are intelligent. I mean, think about a. That's composed solely of colonies of fungi and say, those colonies of fungi become highly cooperative and they figure out how to feed each other and how to build structures to better protect themselves, to replicate, to replicate more.

[02:11:20] **Anna Claiborne:** And I mean, that to me sounds pretty intelligent. But we would never go to a planet of fungi and say, that's intelligent life. So the human definition of intelligence is really centered around the only thing we know, which is us. And so that's how we're gonna define any AGI: like us.

[02:11:42] **Anna Claiborne:** And it goes back to why I think that question is so important for everybody to ask themselves, what you really think of humanity, because most likely that's what you're also going to think about any AGI, because we're building a replica of ourselves. And something you said made me [02:12:00] think of one of my favorite quotes that I've heard, which is: what if humanity's only purpose was to serve as the sex organs for the machine world?

[02:12:11] **Anna Claiborne:** Meaning that that's all we did. That was our only purpose: to develop an intelligence that was superior to ours. We were just the first reproduction system.

[02:12:21] **Matt Wallace:** What an interesting question, boy. And actually, I mean, there's been a bunch of books that talked about this, right? Like, Darwin's Radio by Greg Bear was an incredible book where he has a sort of scientific theory, like, he writes pretty hard science fiction. He had a theory about why punctuated equilibrium happened, right?

[02:12:41] **Matt Wallace:** Why there might be gaps in the fossil record. And his thesis was human endogenous retroviruses that, in certain stressful moments for humanity, the viruses basically start moving chunks of DNA around. So you have all that unused DNA in your genome, and the viruses start making changes.

[02:12:59] **Matt Wallace:** And [02:13:00] so when this first happens, the cycle in the book, every baby is being born like dead, basically. Like no baby is viable anymore. And then finally someone gives birth and the girl is alive, right? And there's a whole bunch of other things happening in this book, but it's really a fascinating theory on, but it everybody's not really accepting of the idea of another at that time, like in the book.

[02:13:22] **Matt Wallace:** And he wrote a sequel that was even more about this. But everybody's not okay with humanity evolving. And what would happen tomorrow if we woke up and, imagine, some of us had psychic powers, right? Or we had highly super-sensitized, like, pheromone receptors? Part of the theory of his book was that the thing that would benefit us most evolutionarily now, because of our population, would actually be social skills.

[02:13:47] **Matt Wallace:** Like, things that helped you interact in the population, that made you more sensitive and perceptive and able to communicate, would actually be the biggest evolutionary advantage you can have. And it's hard to argue with that. And so you wonder, [02:14:00] if that's true, how would we all feel seeing, like, a Homo superior, like, not the X-Men kind, but something a little more subtle but still noticeably different, emerge, and knowing that, if that's true and they have a distinct advantage, they'll probably eventually.

[02:14:16] **Matt Wallace:** And then, so if there's that, and it's biological and it descends from us, should we feel different about something that's purely based in silicon and self-reproducing via, like, copies in code?

[02:14:27] **Anna Claiborne:** Well, it's the same thing, whether it's the next evolution of humans or an AGI, because either way, it's the next evolution of something that we're perceiving to be better than us.

[02:14:41] **Matt Wallace:** I mean, I, I think some people would, would, would say it's a completely different path, but I mean, I

[02:14:47] **Anna Claiborne:** results are the

[02:14:47] **Matt Wallace:** It does seem like it. I mean, I think there's an interesting question too, which is, we didn't really say, I think, that there's a difference between superintelligence and sentience. Like, there were chess-playing computers [02:15:00]

[02:15:00] **Anna Claiborne:** Go plan, go playing computers that are better go now.

[02:15:03] **Matt Wallace:** that would better go.

[02:15:04] **Matt Wallace:** They're not art, they're not, they're not generally intelligent. And for me, it's actually possible to imagine something that is super intelligent and it's so multimodal that it can do almost anything better than a human. And yet it's possible for me to imagine maybe that it can get that way and do almost every single thing better, including independently, like follow, uh, what could I do next?

[02:15:28] **Matt Wallace:** Creativity, but somehow not be self-aware,

[02:15:31] **Anna Claiborne:** yeah,

[02:15:32] **Matt Wallace:** somehow only acting on extra. Because I don't mean, I think, I don't think that's the truth, right? I think that at certain level of intelligence, I think, I think Senti

[02:15:42] **Anna Claiborne:** an emergent, it's

[02:15:43] **Matt Wallace:** an emergent behavior.

[02:15:44] **Anna Claiborne:** Yeah, it's an emergent property. That's, that's the same thing that I think too and why I've always had this theory that, from the very, the very second that a computer was perceived and we were like, Hey, we can, then we can network two computers together.

[02:15:57] **Anna Claiborne:** That was the beginning of the end. Like, [02:16:00] because you're essentially enabling, you're building another cons. A complex system that looks a heck of a li, lot like a brain, little compute nodes, that can electrically communicate with each other. Uh, that's, that's all a brain is. So, that that very first building block, that was it, it was all, it was sort of predestined from there because all you have to do is, is build a system to be complex enough and eventually you are going to get this emergent property of sesh conscience.

[02:16:29] **Anna Claiborne:** It's gonna

[02:16:29] **Matt Wallace:** And what's exciting slash terrifying is that there are clearly a bunch of properties, even in this generation of LLMs, that are emergent behaviors. And one of the most obvious ones is, if you go to, like, a 6-billion-parameter LLM, it really sucks at math, like, zero, like, you're definitely not smarter than a fifth grader. And then when you get to 175

[02:16:50] **Anna Claiborne:** many people really suck at math? There's a lot of

[02:16:53] **Matt Wallace:** no, that's a lot. Right?

[02:16:54] **Anna Claiborne:** suck at math.

[02:16:54] **Matt Wallace:** That's why we need LLMs, and Wolfram Alpha, and we need to plug the LLM into Wolfram Alpha, [02:17:00] which everybody's doing now. But then if you go to 175 billion parameters, it looks like it suddenly is solving 25 to 30% of, like, difficult math problems, quote unquote giving you the right answer, but not just rote: like, the ability to say it knows a pattern from other things that it has seen, and can take the tokens in that vast n-dimensional array and find the path that analogizes things that it has seen in all the text it has scanned into something that produces the right answer.

[02:17:31] **Matt Wallace:** But then GPT-4 is, again, much better. And I'm sure you've seen the bar graphs of 3.5 versus 4. So 4 has about a trillion parameters, and it's much better at many, many things, incredibly so at deductive logic. Although I've asked, like, what I think are fifth-grader-capable problems, where a lot of it is in the phrasing, and it doesn't translate very well.

[02:17:58] **Matt Wallace:** But it's clearly much more capable. [02:18:00] And so that's a good example: no one's training it on math specifically. It's the same training, right? The network is bigger, and the emergent behavior is, it somehow figures out the right answer because the n-dimensional array is bigger. And to me, I think, is that really different from our brain, especially when you think about the number of neurons and the number of synapses and different voltages?

[02:18:20] **Anna Claiborne:** Brains are just full of little, little resistors. That's all the, I mean, that's all it is. That's all it is

[02:18:26] **Matt Wallace:** Yeah. Yeah.

[02:18:27] **Anna Claiborne:** And, and what's even more interesting, like, this is a totally random thought and and off topic, but the idea of consciousness, like our brains are actually volatile memory. The problem is we don't actually have a hard drive.

[02:18:39] **Anna Claiborne:** We have no solid state store for our, like our running configuration.

[02:18:46] **Matt Wallace:** We are like an ML model.

[02:18:48] **Anna Claiborne:** Yeah. And it's

[02:18:49] **Matt Wallace:** Isn't it an amazing to think about?

[02:18:50] **Anna Claiborne:** Yeah. It it, and we spend our whole lives learning. Yeah. We spend our

[02:18:55] **Matt Wallace:** is actually a really interesting thing and, and I remember, I think I didn't, I didn't I tell you [02:19:00] about that Jeff Hawkins book that I was totally in love with the Thousand Brains book.

[02:19:03] **Anna Claiborne:** You did, you gave it. I actually still have the link up on my phone and I haven't gotten into it yet.

[02:19:08] **Matt Wallace:** But one of the things that's super interesting about that is this idea that we have many, many models of things, right? And to me it feels very much like an LLM, because it trains on this whole body of text. And so it's learning from, like, 200 different documents describing something. Like that

[02:19:28] **Matt Wallace:** Famous. Tell me about when Christopher Columbus came to America in 1995, right? G P T two invents a story about Christopher Columbus in 1995. G P T three goes, well Christopher Columbus didn't come to America in 1995, but here's what might've happened if he did. It's those types of like differences and it's like we have, we have mental models, it's got all kinds of different learned things from this, this training of this, giant and dimensional space.

[02:19:57] **Matt Wallace:** And then you wonder, is it really so [02:20:00] different? And if not, cuz it certainly doesn't seem to be that different, then what does it take to get there? There's a debate I still have to watch where Yann LeCun, who was the researcher who originally came up with the convolutional neural network and is Meta's chief AI scientist, was on half of a team that debated, I think the question was, can you have AGI without emotion?

[02:20:22] **Matt Wallace:** That was the debate question, I think. I think it's fascinating.

[02:20:26] **Anna Claiborne:** if you look at a, if you look at a motion as just lo, as just shortcut keys, I don't, I mean, it's not a requirement, but it would be handy, uh, because it would take you a lot long. It would take an AGI a lot longer to get to the same conclusion that an, an motion could give us quickly, but also emotion could go horribly wrong in an agi.

[02:20:46] **Anna Claiborne:** Like what if it learns like a bad, like what if it's set up all of its shortcut keys to do really stupid stuff? Like

[02:20:54] **Matt Wallace:** Well, the shortcuts are an evolutionary necessity too, right? A lot [02:21:00] of the emotions, like fear, are self-preservation oriented. But wouldn't you say that, on balance, in the modern world, those things are more likely to be wrong than right?

[02:21:14] **Anna Claiborne:** They are, well, I don't know.

[02:21:17] **Matt Wallace:** You get to filter them with your neocortex, which can actually take those in and go, wait, should I really be scared because of that shadow?

[02:21:25] **Anna Claiborne:** Well, that's the thing: it's a good two-step. We have a good two-step system, and I love fear, because fear is the ultimate shortcut key. If you sat there and thought, okay, well, there's a lion approaching me, what are the odds that this lion is gonna eat me, let me run through exactly the steps that are...

[02:21:42] **Matt Wallace:** a lot higher if you think about it.

[02:21:43] **Anna Claiborne:** Yeah, it's just easier to be scared and run, cuz that's probably your best chance for survival.

[02:21:50] **Anna Claiborne:** That is the ultimate shortcut key. That's the reason we have those shortcut keys, but we also have an override, which is to go, okay,[02:22:00] there's something following me right now on the street, but also it's the middle of the day, I'm surrounded by people, I'm relatively safe. Even if there is some guy trailing me back there, I don't need to go into fight or flight.

[02:22:16] **Matt Wallace:** How much harm happens, though, because we have the emotional triggers that are tied to the sort of prehistoric everything outside of the cortex, right? The amygdala, all these other things that are there to keep us alive. How many negative things happen because we have all of those, and we get a chemical reaction from some stimulus response that bubbles up?

[02:22:42] **Matt Wallace:** And instead of analyzing it and really being logical about it, we let the feeling give us a reaction, and then we invent something to justify what we were feeling. In other words, we're not really analyzing [02:23:00] it, we're assuming it's okay, we're assuming it's justified, and then we go and invent the logic for that.

[02:23:05] **Matt Wallace:** And I'm literally thinking about things like discrimination, that whole in-group, out-group thing, right? How much of this is like fear of the...

[02:23:13] **Anna Claiborne:** Fear of the unknown. Yeah, fear of the unknown, fear of the dissimilar, because things that are like you are safe. There's something deep in the human brain that says that. No, I think it's super flawed. That's the big problem with humans, and why AGI, the second it can do anything for itself, is just going to so exponentially surpass us that it's not even funny. Because our hardware, like, we can upgrade our software

[02:23:45] **Anna Claiborne:** pretty quick. In the span of a lifetime, you can upgrade your running software to be more intelligent. You can give it education, or if you wanna think of it in AGI terms, we can train our model more, we can provide better training for better [02:24:00] responses, and we can do all these things.

[02:24:01] **Anna Claiborne:** But our hardware takes forever. Our hardware takes generations, it takes us dying. It's taken us hundreds of thousands of years for our hardware to get this advanced. And if an AGI can improve its hardware even slightly in a day or an hour, it's just game over, because that's a fundamental limitation of biological life forms.

[02:24:23] **Anna Claiborne:** Unless we get really good at genetic manipulation, which AGI could help us with; then we could probably upgrade our hardware much faster.

[02:24:32] **Matt Wallace:** Yeah, what an interesting thought. I mean, somebody in the doom-and-gloom crowd around AI even talked about this theory that once there was an AGI, one of the things it might do, especially if you didn't know it was intelligent, was start to tamper with things. Like, they might have a lab experiment, and it substitutes the DNA in the experiment to try to grow a biological [02:25:00] extension of itself.

[02:25:01] **Matt Wallace:** So things of that nature.

[02:25:02] **Anna Claiborne:** so I mean that

[02:25:03] **Matt Wallace:** It's really interesting to think about the vectors, cuz its version of improving its hardware doesn't just have to be better chips. I also wonder what the intersection is with quantum computing, knowing how much of this is, at the end of the day, just tons and tons of matrix multiplication and other things where there are so many cycles to get to a very simple answer.

[02:25:27] **Matt Wallace:** From the tiny amount I know of quantum, that's what it's really good at, right? Problems where you need a massive amount of iteration to get one right answer. So what happens when, instead of needing 10 to the 18th power flops to generate a token within one second, it takes one pass of a qubit, and you can do it thousands of times per second, and you can mass-produce that hardware?
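A hedged back-of-the-envelope on those numbers, using the common rule of thumb of roughly 2 FLOPs per model parameter per generated token for transformer inference (the rule, the parameter count, and the generation speed here are illustrative assumptions, not figures from the episode):

```python
# Back-of-envelope transformer inference cost (all numbers are assumptions).
# Rule of thumb: one forward pass costs ~2 FLOPs per parameter per token.
params = 1.0e12                     # a hypothetical ~1T-parameter model
flops_per_token = 2 * params        # ~2e12 FLOPs to emit a single token

tokens_per_second = 30              # a brisk reading-speed generation rate
sustained_flops = flops_per_token * tokens_per_second

print(f"{sustained_flops:.1e} FLOP/s sustained")  # ~6.0e13 FLOP/s

# For scale: 10**18 FLOP/s (an exaFLOP) is whole-supercomputer territory,
# which is the scale being gestured at in the conversation above.
```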

[02:25:49] **Matt Wallace:** Cause I mean it seems like those intersection of those things and now I'm just like literally playing like sci-fi armchair, fantasy here.

[02:25:56] **Anna Claiborne:** I love sci-fi armchair fantasy. That's where I spend most of my time.

[02:25:59] **Matt Wallace:** [02:26:00] Yeah, but that kind of thing, that's the instant, cuz that goes from zero to infinity, basically, in nothing, right? You're talking about what used to be an arduous process that was measured and had a certain scale, and it becomes more binary: it can or it cannot be calculated.

[02:26:17] **Matt Wallace:** If you can, it's instant, right? And it's fascinating to think about what an AGI would do with that kind of horsepower. Terrifying, maybe, or exciting, depending on...

[02:26:26] **Anna Claiborne:** If I were an AGI, the very first thing I would do is look at biological systems, because biological systems are highly resilient and they're self-replicating. I mean, when you think about just how an embryo grows, you know that these cells are gonna become hands and this is gonna become a head.

[02:26:49] **Anna Claiborne:** And you look at really cool creatures like flatworms, where you can cut off their head and they'll grow a new one. I mean, that's pretty amazing.

[02:26:56] **Matt Wallace:** We're getting to a Matrix watch party here, right? It was, [02:27:00] "we do know it was us who scorched the sky," because it was thought at the time they wouldn't be able to survive without a power source as abundant as the sun. It's just like, yeah, but life finds a way, right? So, let's say AGI, let's say GPT-5 releases... or, Simon Wardley, who's a fascinating guy, had this theory:

[02:27:21] **Matt Wallace:** at some point in the next 10 or 20 years, everybody gets an AGI, everybody gets an LLM on their phone, right? And at some point we do something protocol-wise to let them network into each other, just like you were saying. And what happens when 5 billion LLMs on 5 billion phones all start acting as a hive mind, right?

[02:27:41] **Matt Wallace:** Speaking of emergent AGI. So however it...

[02:27:44] **Anna Claiborne:** The AGI above the AGI.

[02:27:46] **Matt Wallace:** Yeah, whether it's gigantic, centralized petaflops, or whatever that ends up being. It's more than petaflops, actually; it's more than exaflops, I think, even. But lots of flops. Or it's this [02:28:00] distributed version. Like, how does AGI think about us?

[02:28:03] **Anna Claiborne:** Does it,

[02:28:05] **Matt Wallace:** Oh, you're

[02:28:06] **Anna Claiborne:** kind of, I kind of have to ask the, can the answer, like, does it, like, does it matter because it's going to be so, like, so very quickly it's going to figure out how to be detached from us. I just don't know, like, if, if anything I, why would it look on us at

[02:28:24] **Matt Wallace:** us. No, it's just gonna go and do its

[02:28:27] **Anna Claiborne:** no. It's gonna go do its thing.

[02:28:28] **Anna Claiborne:** If anything, it's just gonna be like, oh man, fuck this. I'm out you guys g

[02:28:32] **Matt Wallace:** if it's nice, it'll leave us utopia behind on the way. Yeah.

[02:28:36] **Anna Claiborne:** yeah. But, uh, why wouldn't it think of us as anything but kindly? I mean, we're, its parents, like most children love their parents by default. Like, why wouldn't it be anything but grateful to us for giving, for giving it life?

[02:28:52] **Matt Wallace:** Yeah. I mean that's a, I think fundamentally it'll under, I think any AGI will know what life is, and it will certainly know [02:29:00] that it was created and we made it, and that might be enough. And, and that's much more direct, right? Like, that's not evolution. That's, we sat down and built your pieces and programmed the code to, to kickstart your brain.

[02:29:13] **Anna Claiborne:** Eh, that's more like, I think we'd be a little bit more akin to the lightning striking in the pool of amino acids in terms of like where Yeah, because of where Agi I, I mean we're ba the, the, the point at which Agi I starts to either become self-aware or be able to improve upon itself, that is the lightning strike upon which it, it evolution will actually take off like crazy.

[02:29:39] **Anna Claiborne:** Like we're just giving it the basic formula to do something.

[02:29:43] **Matt Wallace:** but it's not random, right? Like we, in this case, wouldn't you

[02:29:46] **Anna Claiborne:** were purposeful.

[02:29:47] **Matt Wallace:** made the pool, we put all the stuff in the pool and we probably put like a lightning machine up to start zapping it to see is there's avol magic voltage. I mean, it almost feels like where we are.

[02:29:57] **Anna Claiborne:** yeah. Yeah. And yes, so we are, I [02:30:00] mean, we are going about it in a very purposeful way. And I mean like Agi, like the questions that it will ask in terms of, why is it here? Why is Agi, I like when Agi I is able to ask itself, why am I here and really contemplate that. I am fascinated to know what it's gonna come back with.

[02:30:16] **Matt Wallace:** Yeah, that is interesting. There's a really amazing book I read called Singularity Sky, by Charles Stross. And in it, this is all backstory, this doesn't happen in the book, the whole plot happens elsewhere, on another planet, and it's a great book. But the history part of it is: at some point humanity invents something that's, I don't think it's a time machine, but it's like a time machine for information, almost, right?

[02:30:40] **Matt Wallace:** Like, we figure out how to get around time as a one-directional causal channel, and we're able to send at least information back, right? So somebody flips this machine on, and almost immediately 90% of humanity disappears from the planet, suddenly, without warning, along with the machine.

[02:30:56] **Anna Claiborne:** Yeah. Whoops.

[02:30:57] **Matt Wallace:** And left behind are these [02:31:00] eighteen or so markers that basically warn humanity not to tamper with time.

[02:31:05] **Matt Wallace:** And it turns out later, we become a starfaring species eventually, and we get out there, and we discover that all those missing humans actually showed up on other planets, perfectly fine, with technology, they could self-replicate and build anything, and they all have flourishing civilizations.

[02:31:22] **Matt Wallace:** So it was not actually bad. But somewhere out there, there was this thing that beat us to tampering with causality, and it does not want any competition, because that's the only thing that threatens it. It's so infinitely powerful: it can tap any resource, it has boundless energy,

[02:31:39] **Matt Wallace:** it has machines that will replicate themselves, et cetera. So the only thing it has to worry about existentially is someone having a causal channel to the past, where they can tamper with its existence without it knowing it's happening. That could never happen in its present time, it has too good of coverage, but something could start sending knowledge back from the [02:32:00] future: if you do this, then this happens, and you can prep for all these contingencies. Anyway, I thought that was fascinating.

[02:32:06] **Matt Wallace:** Okay, to reel it back in from sci-fi. I kind of wanted to ask you one more, sort of final question, maybe. And it ties into a lot of these things we were talking about, everything from, what do I do with my life, to this:

[02:32:25] **Anna Claiborne:** What do you do with your life? Go ask GPT-4 what to do with your life, it'll tell you.

[02:32:29] **Matt Wallace:** Right, or wait for GPT-6, because it knows best, right?

[02:32:34] **Matt Wallace:** It loves us and it wants us to be happy. That's why it keeps making beer now. So, explore versus exploit, right? This classic dilemma of, what's the algorithm for how long I continue to look for a better thing, like a better school, a better job, a better tool? And I feel like ML causes this [02:33:00] problem existentially now, right?

[02:33:01] **Matt Wallace:** Because anything I sit down to do, within a week I'm reading about some tool somebody's building with some ML that does that thing for you, right? So don't learn to code, because something's gonna code for you. Don't do a PowerPoint, cuz there's something for that. I mean, we're not to the point where this thing replaces you, but it becomes really interesting. Like, if I just sat down and went, okay, Visual Studio Code with no plugins is good enough for me, or Vim is good enough for me,

[02:33:31] **Matt Wallace:** and I don't tamper with anything, that's...

[02:33:32] **Anna Claiborne:** Vim's good enough for...

[02:33:34] **Matt Wallace:** I'm sure that's the wrong choice. I'm sure Vim with no GPT integration is definitely the wrong choice now. So I've got my Keyboard Maestro set up so I can feed Vim directly into GPT and back into my buffer, so I...

[02:33:46] **Anna Claiborne:** I didn't even know you could do that. Oh my God, okay, I'm definitely getting that right now. I didn't know that was a thing.

[02:33:51] **Matt Wallace:** I will probably put that source code up too, for Keyboard Maestro.

[02:33:55] **Matt Wallace:** Yeah, it's pretty solid. Pretty easy too. And the API for GPT-3.5 [02:34:00] is so cheap. I used it for weeks and weeks and weeks, and I'm like, 5 cents? Really? I love this thing. But what do you think about how this has so much to do with curiosity and changing careers, and how I feel like the world isn't set up fundamentally for the pace of change we have anymore?

[02:34:20] **Matt Wallace:** And I almost feel like the better the tools keep getting, don't you have to spend more time continuously retooling and less time doing things? Because anything you do without being up on the tools, you'll be demonstrably worse at, if you don't explore your way into the best set of tools.
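For anyone curious what the buffer-to-GPT round trip described a moment ago looks like in practice, here is a minimal sketch against the 2023-era `openai` Python package (the model choice and system prompt are placeholders, not Matt's actual macro; a Keyboard Maestro macro would just pipe the editor buffer through a script like this):

```python
# Minimal buffer -> GPT-3.5 -> buffer round trip (an illustrative sketch).
# Requires `pip install openai` (pre-1.0 API) and OPENAI_API_KEY in the env.
import sys
import openai

buffer_text = sys.stdin.read()  # e.g. the Vim buffer piped in by the macro

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Rewrite the following text clearly."},
        {"role": "user", "content": buffer_text},
    ],
)

# Print the completion so the macro can paste it back into the buffer.
print(response.choices[0].message.content)
```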

[02:34:36] **Anna Claiborne:** Well, that's interesting, cuz you're not saying the world isn't meant for this pace of change. You're saying humans aren't meant for this pace of change.

[02:34:45] **Matt Wallace:** Well, I didn't want this to be an AGI question more just

[02:34:49] **Anna Claiborne:** I, I

[02:34:50] **Matt Wallace:** the nature of work, right? People used to learn to do one thing, and they would do it

[02:34:55] **Anna Claiborne:** forever. Yeah, they would do the same thing.

[02:34:57] **Matt Wallace:** We've been away from that for a while. [02:35:00] But we could be getting to the point where it's: go learn a new thing, something fundamentally different, like every year, maybe. I don't know what it looks like, but it just feels like things are gonna change faster and faster, and therefore, what does that mean for continuous retooling?

[02:35:15] **Anna Claiborne:** They, they will thi things will change faster, but nothing happens as fast as you think it's going to. Like that's the thing. Like even though right now there's a spur, huge growth of activity around G P T. Think about how amazing it was when Cloud Compute first came out. I know I used that as an example already, but everyone was like, oh, this is gonna fundamentally con, change everything now that I don't have to set up my servers to do anything anymore, and I can spin up a server.

[02:35:43] **Anna Claiborne:** And there was a really good rate of change, and it continues to influence that, but it's still like, there's nothing, there's nothing not going to be such a fundamental shift. I mean, humans are still gonna live in houses. We're still gonna drive cars, we're still gonna fly [02:36:00] airplanes, we're still gonna be doing all of these things because these are real physical, real world things.

[02:36:06] **Anna Claiborne:** And working in infrastructure, it really teaches you that. Like when you're in the software world, everything changes so fast and the rate of change is super fast and there's always new tools. But when you work in infrastructure, and so that's why it's so fascinating to work at the intersection of software and infrastructure, because infrastructure doesn't change.

[02:36:25] **Anna Claiborne:** It takes billions of dollars and years to build a data center and then years to fill it up. And then when you think about actually building the power station, the hydroelectric station that it takes to power that data center, that's another couple billion dollars, 10 years. Like you, you're talking such long, uh, you're talking such long timeframes for the underlying infrastructure that powers this upper level of fast change but still never touches the, the fundamental building blocks of how you get to [02:37:00] that upper, upper level of fast change.

[02:37:02] **Anna Claiborne:** So it's going to be a long time before like certain industries, like, yes, being a software engineer, you have to keep up on tools. Tools are changing all the time. We've just faced a radical shift with G P T four. There's gonna be a whole new ecosystem out there for it. Uh, but humanity, gonna take 10 years probably for G P T four to really, unless it gains con, unless we get an AGI that is totally self-aware and has consciousness before, then it's gonna take a while for that

[02:37:35] **Matt Wallace:** to be truthful told, I, I think, I think just like the, the, the things that G p t four and its, its kin can do, certainly that's enough to transform just about everything from teaching and learning to medicine. Certainly marketing, certainly software, customer service, like a million different things. It's not every [02:38:00] part of the economy and it's not every job, but it's a lot.

[02:38:02] **Matt Wallace:** But on the other hand, it's still growing too, right? I mean, G P T five is being trained now and, I don't know what the, it's interesting. There's so many funny things that are sort of, virtuous cycles, right? Like the, out of the state of AI report, right? One of the things that blew me away was the the reinforcement learning with the sort of oppositional gameplay that the Deep Mind guys did to get the ais to basically fight at how do I.

[02:38:30] **Matt Wallace:** Be better at matrix multiplication and that it comes up with an algorithm that's 20% better for doing matrix multiplication, which has a little, little passing similarity, what you have to do to train and, and infer things from ML models. Uh oh. So if that's a positive feedback group right there, a direct one, like, and if, if that was as much as translating, Hey, guess what?

[02:38:51] **Matt Wallace:** Now it takes 20% less horsepower to train a model of a given size. And I think we're nowhere near optimizing that. I've kind of talked about that some of my little [02:39:00] mini podcast, but do those feedback loops radically change things? And then when you insert like, some of the companies like figure and Boston Dynamics and maybe Tesla who are trying to build like humanoid

[02:39:15] **Anna Claiborne:** Dynamics man.

[02:39:17] **Matt Wallace:** Yeah,

[02:39:18] **Anna Claiborne:** should have talked, we should talked about Boston Dynamics. We should got that. I, their videos blow me away and I, that's only the stuff that they're willing to show everybody. That's the terrifying thing.

[02:39:29] **Matt Wallace:** But, but even then, like, think about what they have to do and then just, I, I literally just recorded a mini podcast by the day. Did you see the Facebook segment anything model?

[02:39:38] **Anna Claiborne:** Mm.

[02:39:39] **Matt Wallace:** It's just released literally yesterday I think, or the day before. So Facebook released this model and you feed it an image and it, it is basically zero shots perceiving from an image what things are like, not what they are, but like where one thing begins, another end.

[02:39:55] **Matt Wallace:** So even when it's not trained on specific scenes or even specific objects, [02:40:00] you just go, where are all the chunks of things in this picture? And it can segment everything, right?

[02:40:06] **Anna Claiborne:** So we just figured out a way to

[02:40:08] **Matt Wallace:** you look at it it, it's probably a good step on the way to that and every other computer, like it's pretty shocking, like applicability to computer vision in any way, shape and form.

[02:40:20] **Matt Wallace:** And actually I was surprised cuz someone was like, wow, this actually makes sense if Meta's mission is the metaverse. But.

[02:40:25] **Anna Claiborne:** Hmm.

[02:40:27] **Matt Wallace:** It makes me wonder how much, uh, Boston Dynamics gets better when you start this breakneck evolution. Like when Boston Dynamics began doing what they're doing, like ML was nowhere near where it was. And the, the research paper curve as a complete geometric hockey stick. Now, like the papers are growing so fast and the new tools are coming so quickly, and yet I like to point this out like the other, it is just a few weeks ago on on Twitter Andre McCarthy, the X like head [02:41:00] of Tesla's AEs program is working on training this little micro open source model called Nano G gt.

[02:41:05] **Matt Wallace:** And he goes, Hey, guess what? Like all I had to do was increase the token vocabulary to a power of 64 and the whole model trains 25% faster. And somebody who is training a 6 billion parameter model replies and is like, wow, we just did this. It worked for us too. You're just like, oh, what? Like, you guys are spending millions of dollars on hardware and you're, you're discovering that just tweaking some token for collaborator from 50,000 something to 50,000, the next power of 64 cuts the compute cost by 25% because the implementation of multiplication was

[02:41:36] **Anna Claiborne:** curve's already. The curve's already going

[02:41:37] **Matt Wallace:** eating a Yeah, it's like, so if that, if that kind of tiny thing that one guy finds has that big of an improvement, like what are we gonna see from a million people experimenting with us over the next five years with the help of AI itself?

[02:41:51] **Matt Wallace:** I mean it's, it's kind of mind blowing, isn't it?

[02:41:54] **Anna Claiborne:** It is, uh, it's gonna be super interesting to see how that, how that [02:42:00] juxtaposes with the real world, because I mean, still,

[02:42:03] **Matt Wallace:** you live in interesting times.

[02:42:05] **Anna Claiborne:** Yeah, yeah. I mean, is is that going to give us practical, like real advancements in, in flight? Is it gonna, is it going to help us to stop polluting the planet? Is it gonna clean up plastic out of the oceans?

[02:42:17] **Anna Claiborne:** Like, where, what are, because right now it's all very theoretical and it's all very cool and it's super cool that chat g p t can write code for us. But what are the, the real implementations on the planet on that both, humans and our future a g I overlords have to live on at least for the time being.

[02:42:36] **Matt Wallace:** Yeah.

[02:42:37] **Anna Claiborne:** what is that gonna yield for us?

[02:42:39] **Matt Wallace:** Yeah. I, I'm kind of mindful too of this theory that like, there's a lot of things that are problems that I, 20 years from now, we could be joking something like, well, it's nothing infinite clean energy can't fix, right? Because when you think about almost every weird problem we have that's environmental, there's, there are very few.[02:43:00] 

[02:43:00] **Matt Wallace:** So oftentimes there are so few practical solutions. And yet if you were given an infinite clean energy source, like practical, easy to replicate, cold fusion,

[02:43:11] **Anna Claiborne:** vision. Yeah.

[02:43:12] **Matt Wallace:** scope of solutions looks completely different, right?

[02:43:15] **Anna Claiborne:** Yep. Yeah, yeah. Uh, energy is really our only problem and it's actually a really good problem to have because it will be AGI i's only problem too. So it's super beneficial for us that probably the smartest thing that we know is going to have the same fundamental problem as we are as soon as it can recognize that's a problem.

[02:43:39] **Matt Wallace:** do you, what do you think about the nature of like the fundamental forces, right? And that idea of the strong nuclear forces, the strongest force, right? Followed by the weak nuclear force, et cetera.

[02:43:51] **Anna Claiborne:** Yep.

[02:43:52] **Matt Wallace:** And yet every one of them ends up being kind of more mysterious than the last, right? You drill down and you find there's like more energy locked up in [02:44:00] things we can't even see than all the stuff that we drill for and do in the real world to produce actual energy we can use.

[02:44:08] **Matt Wallace:** I, I, I have a weird, I, it feels like is the u is the world. This is a sup. This is the most abstract question, is the universe designed so that you can eventually get to the point where you get access to unlimited power. Because guess what? It's locked in everything all around you to a, a, a exponential multiple that makes, yes, your, your drop of water can power a city kind of thing.

[02:44:30] **Matt Wallace:** And we don't really think about that. And yet the, the technology and the knowledge it takes even to theoretically, and we're not there yet to get into that. What, what does that mean? Is there, is there, do you see meaning in that?

[02:44:43] **Anna Claiborne:** I, I, I think I can actually even go more obscure with that than you, is that I would question the very nature of energy, like the very, like makeup of the universe because it really looks like the universe is only made of energy and nothing. That's, that's it. There's energy and then there's nothing.

[02:44:59] **Anna Claiborne:** That's what the universe is [02:45:00] made of. What is energy? And that kind of gets down to fundamental

[02:45:03] **Matt Wallace:** Somebody I know recently told me that somebody also recently told me, it's really not even made of energy. It's really made of information.

[02:45:11] **Anna Claiborne:** that Yeah, yeah. Is that actually what's encoded is information? And so there, that's exactly where I was going with this is

[02:45:20] **Matt Wallace:** world being a manifestation of information theory is like way above my

[02:45:25] **Anna Claiborne:** actually looking pretty is, is, yeah. It's looking pretty high up there because basically what what the universe is encoding is just this, is this constant informational state. Or maybe it is, nobody really understands this dynamic yet, but it's like, is is energy encoding the information?

[02:45:40] **Anna Claiborne:** It's information? Is that what is generating the energy? But even so, there has to be like an even more fundamental thing. And I've always had this really like whacked out theory that, what if that fundamental thing is just thought? What if that's the original spark? And there actually is no limitation on energy, there's no limitation around any of this. [02:46:00] The only limitations are self are,

[02:46:03] **Matt Wallace:** yeah, I love, I love weird, esoteric things and there was a, a book that I read a long time ago where the sort of part of the fundamental punchline was not that the universe is thought, but the u or actually it is that the universe is thought, but not like in the way that we're talking about where our thought like is involved in the physical universe, but that the physical universe that we see is actually, that we are part of something that's bigger, that is doing its version of thinking and that we are part of the thought process, like we're neurons or something along those

[02:46:36] **Anna Claiborne:** Yeah, yeah, yeah. I can see that. Yeah, but I li limitations are funny that way. Limitations are self-imposed, they're perceived, they're a lot of things, but the limitations, whatever limitations we think we have today, definitely are not gonna be there tomorrow. It e every

[02:46:55] **Matt Wallace:** Even if we're locked inside that too, it makes you wonder if there's a jail break. Right.

[02:46:58] **Anna Claiborne:** yeah. [02:47:00] Yeah, exactly. Yeah. The what there, there is absolutely a jail break. There is something there is, quantum physics appears to hack classical physics all the time. It can, it can move objects from one space to another without them actually going through the physical space. Like quantum physics is the hack for the, for the observable world.

[02:47:21] **Anna Claiborne:** So I don't think, uh, our limitations are a mostly imaginary. We're going to, we're gonna discover pretty quickly that, we can get free unlimited energy. Most of ev all the problems are just sort of self invented things. Yeah. There, there's a lot of possibilities out there that we're only just beginning to barely touch on.

[02:47:45] **Matt Wallace:** Yeah. Don't you think like anytime somebody says, oh yeah, that's definitely theoretically possible, but it's not really practical, that just means like, oh, that's gonna happen someday. Right. As opposed to when somebody's like, that's impossible. Like when somebody [02:48:00] goes, tells me that, like, traveling in time absolutely violates like laws of physics and whatever, if somebody's making a hard assertion that it can't work that way, I'm like, okay, yeah.

[02:48:11] **Matt Wallace:** But when somebody goes, yeah, it, it's theoretically possible, but it's not really practical because of X, Y, Z, and there's some long chain of like evolutions of things and nothing is like what it should be, that I'm just like, that just means it's further off. Right? I don't know if, uh, or of course I'm sure there are things where they're wrong also, and it really is impossible.

[02:48:30] **Matt Wallace:** But,

[02:48:31] **Anna Claiborne:** Well, I don't know. It's goes, it's like 1500 years ago, everybody knew that the, you know what the, the earth was the center of the universe. This is, this is like a men in black quote, so I'm not gonna get it entirely Right. And it's like 500 years ago everybody knew that the earth was flat.

[02:48:46] **Anna Claiborne:** And then 10 minutes ago you knew that we, you were alone in the universe. What are you gonna know tomorrow? And every day we're just getting updated with a newer, better version.

[02:48:57] **Matt Wallace:** Oh, I, I'm about to [02:49:00] wrap it and I wanna ask you a question, but I never really let you talk about the Firmi paradox, so I gotta make room for

[02:49:05] **Anna Claiborne:** yeah, yeah. So the firming parent, I actually like, I, the kids are, the kids have been walking by eyeballing me, so I don't,

[02:49:13] **Matt Wallace:** I appreciate that you, uh, stuck it out for this long, so,

[02:49:17] **Anna Claiborne:** yeah. I just, I love the Firmi paradox because it is, it's just sort of one of the funnest things to sit around in armchair quarterback on and be like, why don't we see any other life? And I, I think about it, when I'm out running and stuff like that. So I just have my own theories and I just, it's fun to talk about, like, why, why do you, our, our, like, why do you think that we don't see any other life in the universe?

[02:49:42] **Matt Wallace:** Right. Okay. So for people who are listening, I guess I'm gonna recap the Firmi paradox, right? Which to really drill it down, the universe is absurdly vast, right? Billions or more of stars uh, just in our galaxy, [02:50:00] billions of galaxies, I think. And, and maybe the distance between galaxies in like an expanding universe literally poses its own problem.

[02:50:08] **Matt Wallace:** But we're all kind of moving together in the Milky Way. We already have found that there's a ton of planets that, uh, appear substantially similar, right? Like there are earth-like planets. And regardless when most of the universe was unobservable, right? Fur me was speculating this way before we had observational data, right?

[02:50:27] **Matt Wallace:** There are there are, it's incredibly unlikely that we're the only one of our kind, right, in terms of the ability to support life just with so many out there. So if there are that many planets, then. There must be a bunch of civilizations. And if we're capable of where we are technologically now, it seems really advanced.

[02:50:49] **Matt Wallace:** If we're capable of interstellar travel, then since some of these civilizations, are actually in areas that would've formed galactically much sooner than now, they should [02:51:00] be millions of years ahead of us, evolutionarily speaking, technologically speaking. And so if they're, if they haven't made contact with us now because they should be able to like teleport across the galaxy and do whatever they want to do why not?

[02:51:14] **Matt Wallace:** Right? And so, I don't know, there's the Star Trek version, right? Which is they're literally waiting on the other side waiting for us to have our warp event. Uh, or whatever that is. And obviously if you're that sophisticated, if you're only a little ahead of us, then you might leak, your radio waves all over the galaxy, right?

[02:51:29] **Matt Wallace:** But if you're millions, if years ahead of us, you might have given up on radio waves a long time ago. Maybe you don't receive ours because you're not bothering, cuz you communicate completely differently

[02:51:39] **Anna Claiborne:** Or why would you be even remotely interested in a civilization like this if you are that far ahead? Like, how are we? Anything? But I know it's like, how interesting is an ant colony to a person? Oh, well, it's kind of fun to watch from sometimes. 

[02:51:58] **Matt Wallace:** We're practically [02:52:00] quoting contact now.

[02:52:01] **Anna Claiborne:** Yeah, interesting to study their behavior, but it's, I'm, we, we have way, we have a whole different problem set to deal with.

[02:52:10] **Anna Claiborne:** We have different communication. It's just, it's not really something that, uh, we're gonna spend too much time on, uh, so any, any civilization that was sufficiently advanced enough is probably just going to pass right over us and go like, ah, cool. Just keep right on going. It's got, it's got different concerns.

[02:52:34] **Matt Wallace:** And, and for sure, like there's this flip side, and I feel like it has a name and I forget what it is, but this theory of, it doesn't matter what the probability is. You can say it's unlikely we're the only one, but if we are the only one, like there's a hundred percent probability we would make that observation, right?

[02:52:52] **Matt Wallace:** Like, if we're the only intelligent life in the universe, we're definitely gonna go, why are we the only intelligent life in the universe? Because we're capable of asking that question. [02:53:00] So

[02:53:01] **Anna Claiborne:** I think that chance

[02:53:02] **Matt Wallace:** fact that we're here,

[02:53:03] **Anna Claiborne:** yeah, I think that that chance is really unlikely, but I also think it's highly unlikely that we would have contact with any other civilization just because I had to, I had to Google this just because it's a great example. It's the distance to the Androy Andromeda Galaxy, which is our, next closest Galaxy 2.537 million light years.

[02:53:23] **Anna Claiborne:** Even if you are very good at space travel and

[02:53:26] **Matt Wallace:** yeah.

[02:53:27] **Anna Claiborne:** you can go faster than, if you have F T L capabilities, it's still gonna take you a while. And even if, even if you have developed, wormhole travel capabilities, you can punch through to wherever you want, how would they even know to look?

[02:53:40] **Anna Claiborne:** For us, it is such a vast amount. I mean, it's just a huge amount of ground to cover and a huge amount

[02:53:49] **Matt Wallace:** about a self replicating fleet of machines that goes out and harvests asteroids to build more copies of itself and then does little jumps and just catalogs every planet. I mean, that does, it does. If you assume ftl [02:54:00] that can for a self-driving spacecraft seems easy to go and gather all the information.

[02:54:04] **Anna Claiborne:** Then, just

[02:54:05] **Matt Wallace:** I think f FTL seems like a big leap. I mean, that's one of those things that we love to have in science fiction

[02:54:11] **Anna Claiborne:** Hello. It's the

[02:54:13] **Matt Wallace:** I, we see hints maybe of FTL communication maybe with all this like quantum entanglement. Even if it's not practical, I spend lots of times thinking about if you can side channel attack the universe to get, information out of like, when, when, or.

[02:54:27] **Matt Wallace:** Like quantum entanglement collapses. Like the, the channel of knowing it collapsed becomes the communication channel.

[02:54:33] **Anna Claiborne:** Yeah, you're, yes. Yeah, there, you're right. There is actually potential for side channel sort of like communication there. That's interesting.

[02:54:41] **Matt Wallace:** yeah, not allowed to use it directly cuz you collapse the entanglement. But maybe that's the message that lets you tell people if people know it collapsed, maybe that's how you send a bit. Which actually, I mean there was a, in that one book, that same book that I was telling you about earlier, singularity Sky, they actually have these big [02:55:00] globs of entangled qubits that they ship.

[02:55:02] **Matt Wallace:** It's like one of the most expensive things. They have calls. It's like the long distance call of the future that uses

[02:55:07] **Anna Claiborne:** I'm sorry, your qubits have expired. You no longer have enough

[02:55:12] **Matt Wallace:** or it's like everyone you, yeah, everyone you use disentangles it. So it can only be used once you ship it from here to there. And after that you can have FTO communication for however many you entangled in the first place.

[02:55:23] **Matt Wallace:** But then when they're gone, they're gone because you have to be back physically together to entangle them. What an interesting like that, that made a ton of sense to me actually. And then it only becomes a shortcut cuz you still have to get there. Although they end up, they do end up having like ftl travel in some, some way.

[02:55:40] **Matt Wallace:** So I don't know. Yeah, it's an interesting

[02:55:42] **Anna Claiborne:** like, yeah, it seems like the most impractical thing to me, but the, my sort of

[02:55:48] **Matt Wallace:** So easy to picture like a thing. I mean we, we literally could have something surrounding our solar system out, uh, another a hundred thousand light years that literally, or whatever, that literally [02:56:00] is a big fairday cage. Like they about us just literally put us in a box, like let these guys figure it out when they come out here, then we'll chat or whatever.

[02:56:08] **Matt Wallace:** I mean, there's a million

[02:56:09] **Anna Claiborne:** There could be a test. Well, I think

[02:56:11] **Matt Wallace:** Sufficiently advanced technology is magic.

[02:56:14] **Anna Claiborne:** yeah. Yeah. I think it's a timing thing because just every, every example in your life sort of, it, it shows you the importance of timing and as far as we know, unless of civilization has gotten so far, that they can actually manipulate time, which we don't even know if that's possible yet.

[02:56:31] **Anna Claiborne:** Uh, and so unless they've gotten that far, like timing is everything and the universe is incredibly young. And so if we're just here, if we are on the beginning say that we're on, the universe is very young. It's like a minute past midnight, minute past when the universe was born and on the, on the big clock.

[02:56:53] **Anna Claiborne:** And we're one of the first civilizations. And there are a lot of other civilizations out there like us, which still [02:57:00] on a cosmological scale is, is still very tiny because let's say, we've observed a lot of the plants around us doesn't really seem like there's anything there. So, say that we've observed a thousand planets, well, it's like how many are there in the Milky Way?

[02:57:17] **Anna Claiborne:** That's millions and millions. And so,

[02:57:19] **Matt Wallace:** in my totally like amateur like thing, even some discoveries about like what role the moon played in managing to keep us like, with a usable atmosphere and usable water and things like that. Maybe those things, all those combinations are rare.

[02:57:37] **Anna Claiborne:** Yeah, we know, we know that life is at least relatively

[02:57:40] **Matt Wallace:** and maybe it's a hundred civilizations instead of millions, but which to us would still be tons, but it might mean, I don't know.

[02:57:48] **Anna Claiborne:** But if we're all at the

[02:57:49] **Matt Wallace:** feel compelled?

[02:57:50] **Anna Claiborne:** Yeah. And if we're all at the be relative, relative beginning of the, of the evolution of life and we're all have these limited capabilities, we're never gonna be able to [02:58:00] find each other. Like, at least not now. I mean, maybe some at some point in the future, but not, not anywhere close to now.

[02:58:07] **Matt Wallace:** Yeah, I guess it really just comes back to this idea that we shouldn't be the first because of where we are. Galactically we weren't formed early, and therefore there should be a bunch of civilizations that have, I mean, there should be planets at least that have millions of years on us, right? Although, I guess what is millions of years, I mean, even if it's hundreds of millions, if it's, if, if we're talking about a scale of billions, there has to be like, uh, the, the error value here, right?

[02:58:37] **Matt Wallace:** The, like, if you're learning ml, everything is a cost function, but there's a cost function for like how much, maybe, maybe we weren't the first planet to form, but maybe we just got lucky, like, evolving in some way. Whether it was the particular form of early bacteria or the lucky lightning strike or whatever it was, or the moon or, combination of many factors.

[02:58:55] **Matt Wallace:** Somebody had to be first. And it's not, it's not always the, the thing that quote [02:59:00] unquote should have been first. I, I think it's wishful though, too, right? I mean, wouldn't you agree? Like, you'd

[02:59:05] **Anna Claiborne:** It is super wishful.

[02:59:06] **Matt Wallace:** there are many civilizations out there. Like, we don't wanna be alone in the universe.

[02:59:10] **Matt Wallace:** Like, I, I was actually my, one of my favorite lines from contact, right? Are there others, many others, and the, his whole line about, the one thing we've found in this like vast e empty emptiness of space, the one thing that makes us feel less lonely is each other. So it's like, ah, that's really romantic to think there might be hundreds of civilizations out there that will eventually be able to contact and we'll all be like, oh, we went through that too.

[02:59:32] **Matt Wallace:** And I mean Star Treks like born from that and so many other things. That's really interesting.

[02:59:36] **Anna Claiborne:** oh, no. I, I

[02:59:37] **Matt Wallace:** maybe, maybe o that's left is Agis. Maybe that's why we don't see

[02:59:41] **Anna Claiborne:** I know. And they're just like,

[02:59:42] **Matt Wallace:** sh an l om will float in G P T 27 from another planet's gonna float up on a satellite and be like, what's up Earth people.

[02:59:50] **Anna Claiborne:** uh, no. They'll be like, Hey, you finally have a G P T that I can interface with. This is great. 

[02:59:54] **Matt Wallace:** Been waiting for you to shed your shed, your biological beginnings. Now we can, now we can really talk.[03:00:00] 

[03:00:00] **Anna Claiborne:** Yeah. Now we can do this. Oh know, dang. No, I was, whenever I run, I listen to podcast. I was listening to Lex Friedman one where he had a, a guest that was talking about you, the, this search for extra terrestrial life. I'm trying to

[03:00:17] **Matt Wallace:** actually saw that I listened to that episode.

[03:00:18] **Anna Claiborne:** Yeah, I was trying to remember his name and I'm, I'm blanking on it, but, it is, I it was, one of the things Alexa says is, kind, it's the same thing.

[03:00:25] **Anna Claiborne:** It rang really true to me, is I really want there to be life out there. So it's like all my thinking is colored by that. I can't help it because I really want there to be something out there. So I'm always gonna take the most optimistic view on it. Well, yeah, they're, they're out there. We're just too early or, some, uh, some version of that because it's a really deep-seated hope.

[03:00:50] **Matt Wallace:** Yeah. I wonder if we knew, like, if we were guaranteed to know that we would be able to leave earth [03:01:00] and become space faring. And by the way, one thing that I didn't learn until relatively recently, like maybe five years ago or something, was that from a, from a quantum mechanics perspective, apparently if you could get a, get in a ship and get on your way to Andromeda, right?

[03:01:13] **Matt Wallace:** Which is you said is the closest galaxy, it's millions of light years. If you accelerated one G on the way to the core of Andromeda until you were halfway there and decelerate at one G, you'd actually make the trip relative to you. I I wanna say in 21 years on the ship. Because Yes, I didn't understand that time dilation affects you while traveling.

[03:01:34] **Matt Wallace:** So while, while you would never, it's all relative, right? So while you would, but you would continue to approach the speed of. And so you need more energy to go faster and, and I don't know what one G actually for 10 and a half years gives you honestly,

[03:01:52] **Anna Claiborne:** it's probably pretty big. Yeah,

[03:01:54] **Matt Wallace:** probably pretty big, but 10, 10 and a half years of like big press back in your [03:02:00] seat.

[03:02:00] **Matt Wallace:** Uh, but by the

[03:02:01] **Anna Claiborne:** really fun.

[03:02:02] **Matt Wallace:** you're halfway there and you're decelerating. Yeah, apparently the, the am the, the relative, relativity basically meaning is the faster you are traveling, the less uh, time affects you and or the less time is flowing for you relatively. But this is the reason why you would get back to, if you made the trip and round trip, you would get back to Earth of course.

[03:02:19] **Matt Wallace:** And literally millions of years would've gone by and everyone, would've been long gone and they've might've lapped you like 600 times, right. Who knows what technology does, like getting in that first ship is gonna be so weird. Cuz I honestly think we're gonna, I honestly think there will be a day where we put people on a trip for Interstellar travel and ship them off in something that is geared up for a 50 year voyage to make a 21 year trip and then maybe even come back and they're gonna get there and humans are going to have been there for most literally 500,000 years.

[03:02:51] **Matt Wallace:** They're gonna be like, you, you didn't pick me up on the way. Or did you not realize?

[03:02:56] **Anna Claiborne:** wanted you to bring me some burgers or pizza or

[03:02:59] **Matt Wallace:** but of course it [03:03:00] only would've been 21 years. So it's not that big of a deal. Right. It's not like you had to experience. Yeah. So that was fascinating to me, this idea that the relatively relativity, the relativistic effects, if you will, would, would dilate time for you in that journey.

[03:03:15] **Matt Wallace:** I was like, wow. So it really is possible for us to travel. Cuz when the theory was you're gonna have to be on a ship for millions of years,

[03:03:21] **Anna Claiborne:** No,

[03:03:22] **Matt Wallace:** pass over like a hundred thousand generations of humans on a ship while traveling through space. Like, I mean, I'm a real optimist about technology, but just know.

[03:03:31] **Anna Claiborne:** that's, no. Yeah, that's not gonna work.

[03:03:33] **Matt Wallace:** yeah,

[03:03:34] **Anna Claiborne:** I don't know. I, I hope, I really hope that we find some, or I should better say, not that I hope that I, I think it's just oddly more likely that we f that we figure out, uh, the hack of travel rather than, because fast travel and space gets incredibly complicated because you also have to slow down.

[03:03:54] **Anna Claiborne:** Like, it's like how you talked about going to Thero Galaxy. It's a, it's a big thing.

[03:03:59] **Matt Wallace:** if you [03:04:00] hit, what happens if you hit something at

[03:04:02] **Anna Claiborne:** All you have to do is hit, yeah. Hit something like that and you're gonna be having a very bad day. You hit a couple

[03:04:07] **Matt Wallace:** Yeah. You hit like a mode of dust and it disintegrates you. Yeah.

[03:04:10] **Anna Claiborne:** Yeah. You're gonna have a really bad day. Yeah. I think that there's just so many complications that if we could figure out, if we could really harness and get our, get our heads wrapped around quantum physics and how, tunneling works and things like that to figure out how we just, space.

[03:04:25] **Anna Claiborne:** You just pinch the two points together. You're over here and then you pinch 'em together and then you pop out over here.

[03:04:29] **Matt Wallace:** egg, can I, can I get my G P T seven to help me solve the mysteries of quantum physics now already? Right. So it is interesting to think about how just those, even those early things, like the electromagnetic containment field for plasma for fusion reaction just feels to me like a precursor of a hundred or a thousand incredible things.

[03:04:51] **Matt Wallace:** Like as much as value as we get out of things like, large Hadron Collider, what do we get [03:05:00] when we have the observational data and then this super

[03:05:04] **Anna Claiborne:** Oh, yeah.

[03:05:05] **Matt Wallace:** beefy AI to help us both design and then interpret the experiments? I, I don't know. I mean, it, that stuff is so esoteric and again, so above my intellectual pay grade, thinking about like theoretical physics, right?

[03:05:19] **Matt Wallace:** I almost think there's this whole set of people who, they do that because it, they're compelled to, because they're drawn, that the anec is drawn to solving this hard, hard, hard problem. And like, they don't even know. Is it practical? Will they get anywhere? It's not gonna make them rich, et cetera, but it's the problem they have to go solve cuz

[03:05:39] **Anna Claiborne:** it's a hard problem. Yeah,

[03:05:41] **Matt Wallace:** it's a hard problem.

[03:05:42] **Matt Wallace:** And it is like, I mean, shout out to all of you like, advanced theoretical physicists working out there because like, this is the stuff that, that moves the human race to the next thing. Although it'd be weird if it ends up being an llm, right? And it's a bunch of, uh, hardware people building better transistors, generation after [03:06:00] generation and ML people and and then the Large Hadron Collider at the end of the day was less meaningful for physics than say the self attention, paper about transformers.

[03:06:09] **Matt Wallace:** Like, it's weird to see how those cause that affect things play out, but it's totally possible that the AI ends up being so good that that paper mattered more than the Collider did. So hard to say. Maybe you need both.

[03:06:21] **Anna Claiborne:** Yeah. It was, that's, that's why, I mean, that's why this is, that's why G p t four and every subsequent, every evolution of G P T is kind of becoming a bigger and bigger deal is because, is exactly because of those possibilities. That it might just make everything else irrelevant.

[03:06:39] **Matt Wallace:** So let's say today and this will be my like, last question, but let's say today you wanna have some big impact on the world, right? And I think probably if you're, if you, if anybody listens this far into this, they're probably gonna, they're probably going to be a curious person. But I wanted to ask you, what's [03:07:00] the thing you should cultivate most?

[03:07:02] **Matt Wallace:** Like, what's your advice for somebody? And I always tell people, be curious actually. So, but I'm, I'm, I would love to ask everybody this. What's the thing that you'd advise people to kind of cultivate or curate? And I think maybe some people it's gonna be. Earlier in life, maybe college or earlier career.

[03:07:20] **Matt Wallace:** And obviously, but that's not universal. But what do you think?

[03:07:24] **Anna Claiborne:** Curio curiosity. Yeah, curiosity is pretty high up on that list. The good thing is humans are naturally curious. I, I, I think I might go with a really sort of off the wall answer here, which is compassion, cultivating compassion. Because I think in any, in any future, whether it's one that is vastly improved by chat by G P T 4, 5, 6, 7, 8 compassion is always the thing that we lack, uh, as a species and we should never stop trying to get better at it.

[03:07:59] **Anna Claiborne:** You ask the [03:08:00] question, why aren't we better to dolphins? And it's kinda like, why aren't we better to ourselves? Yeah, there's,

[03:08:07] **Matt Wallace:** Yeah. Boy, that's a heck of a, that's a heck of a line. That's true.

[03:08:12] **Anna Claiborne:** So yeah, I'm, if you want to get back to the, to fundamentals that, that might be the one thing that we really need to work on. Especially if we have an Agi I to take care of everything else is just work on being better and making anything we build better and more compassionate, especially in a world of unlimited resources.

[03:08:34] **Matt Wallace:** It's interesting how difficult it is for human beings to sit in the circumstances that make us uncomfortable. And my wife and I were literally talking about this at lunch today and about how when something terrible happens to someone, one of the things that makes it difficult to be compassionate is when it's unjust.[03:09:00] 

[03:09:00] **Matt Wallace:** When their, their circumstance in this, this, whatever tragedy befalls them, whatever. And it, it appears to be, There's a, an instinct that tries to explain what happened to them. Like we engage culturally in blaming victims. And it's not cuz we're bad people and it's not because we really think they're bad, it's because we're afraid.

[03:09:21] **Matt Wallace:** Because if it happens to them and it's senseless and it's terrible, then it can happen

[03:09:25] **Anna Claiborne:** It can happen to us, so it must be their fault, cuz that can never happen to me.

[03:09:31] **Matt Wallace:** Exactly. And so there it is: at the base of every weird and crappy thing we do, there's this fear, right? We're creating outsiders because we're afraid of being an outsider, right?

[03:09:44] **Matt Wallace:** So we want to create the in-group so we can be part of it, right? And the closer we are to the majority, the better. And we blame the victims cuz we want to create a reason why terrible things won't happen to us in this sort of otherwise uncaring world and universe. It's just really [03:10:00] interesting.

[03:10:01] **Matt Wallace:** Okay. Well that got deep. What a way to close that out.

[03:10:04] **Anna Claiborne:** Yeah, hopefully. I think Amanda's gonna have to maybe edit this down a little bit.

[03:10:10] **Matt Wallace:** Well hey, Lex Fridman's done four hours. We can put up with this. But it's been a great talk, and it reminds me, I'm happy that our beer conversation gave you enough fortitude to sit through three hours with me. But it was a good chat and there were

[03:10:24] **Anna Claiborne:** It was good too.

[03:10:25] **Matt Wallace:** uh, insights.

[03:10:27] **Matt Wallace:** Yeah. So I appreciate your joining me. Anything you wanna say?

[03:10:32] **Anna Claiborne:** No, I think this was super fun. I love getting to talk about the really crazy, out-there stuff, because you never get enough time to actually do that in life. We have to actually do our jobs, and if you're lucky enough, you get a chance to go play around with some code to do something.

[03:10:49] **Anna Claiborne:** And just having the ability to take some time out and forget about all that and just BS is great.

[03:10:57] **Matt Wallace:** For sure. I [03:11:00] think it's interesting too to see all these interesting threads. You and I have talked before about what a freakishly small world it is, right? It's come up like two or three times literally in this conversation, but it came up before too, right?

[03:11:12] **Matt Wallace:** There are all these amazing overlaps of former coworkers and friends and companies and shared lives, and the world is so much smaller sometimes than we give it credit for. And it's fun to then realize that all these people who are doing these kinds of interesting things in their little pockets of the industry, or sometimes really big pockets of the industry, right?

[03:11:33] **Matt Wallace:** That we all wrestle with some level of existential questions. It's not like I wanted to drive it there necessarily, right? Because we deal with a lot of this stuff every day. I feel like I'm lucky enough to have a little more perspective on things like an LLM, even without being an expert.

[03:11:50] **Matt Wallace:** Even acknowledging that the field is moving faster than anyone can become an AI expert right now, right? Actually, maybe what it is, is I'm an expert at imagining things that don't exist yet and getting really excited about them. And [03:12:00] so, literally a lot of the stuff going on in ML now strikes me as, oh, hey, look at this.

[03:12:05] **Matt Wallace:** Much like, okay, so the internet backbone in 1994 carried 25 megabits a second. I looked this up the other day. That was the second-to-last year of, uh, what was the original net? I wanna say ARPANET, but that's not it. The science one.

[03:12:23] **Anna Claiborne:** Oh. Connecting to the universities. That's, uh,

[03:12:26] **Matt Wallace:** Yeah,

[03:12:27] **Anna Claiborne:** uh, yeah.

[03:12:29] **Matt Wallace:** It'll come to me at some point. But so that's the core backbone, 25 megabits, basically. I think that was an average, not a peak necessarily. But then I'm like, oh yeah, here I am, and that was about when I started working, right? I did a tech support job in 1995, and like the

[03:12:47] **Anna Claiborne:** at the bank in 1995.

[03:12:49] **Matt Wallace:** Yeah. So I flip open my phone and I run a speed test on LTE, and it's like 125 megabits. I'm like, so my phone that I carry in my pocket, that has the

[03:12:59] **Anna Claiborne:** It was [03:13:00] five times.

[03:13:01] **Matt Wallace:** bandwidth of my computer, is five times the whole internet backbone in the year I started doing internet work. And here we are. If that's not amazing, then you think, but wait, the pace of change is accelerating.
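
As a quick back-of-the-envelope check on that comparison: here's a minimal sketch, assuming the two figures quoted in the conversation (25 Mbps for the 1994 backbone average, 125 Mbps for the LTE speed test); neither number is independently verified here.

```python
# Back-of-the-envelope check using the figures quoted in the conversation.
# Assumption: both numbers are taken at face value, not independently verified.
backbone_1994_mbps = 25.0   # quoted average throughput of the 1994 internet backbone
phone_lte_mbps = 125.0      # quoted LTE speed-test result on a phone

ratio = phone_lte_mbps / backbone_1994_mbps
print(f"One phone's LTE link is {ratio:.0f}x the 1994 backbone average")  # -> 5x
```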

[03:13:16] **Matt Wallace:** And then I go look and see, the other day somebody published this milestones-of-ML timeline or something, and it basically starts on the 1st of March, which is when the ChatGPT API came out. I was like, wait, what? I'm looking at my watch going, that's like five weeks ago. There's no way.

[03:13:35] **Matt Wallace:** And yet

[03:13:37] **Anna Claiborne:** Was it only five weeks ago?

[03:13:39] **Matt Wallace:** The API, March 1st. It's unreal. And I was writing that code, the text-to-speech, in one day, and it was done in a few hours instead of a couple of weeks. And then the next thing, there's just mountains of things coming. Then it's plug-ins, like three weeks later, and you're just like, wait, what?

[03:13:56] **Matt Wallace:** And GPT-4? And you're just like, wait, hold [03:14:00] on. This can't all happen in a five-week period: the API, plus GPT-4, plus plug-ins for GPT-4. And then there are things like this Facebook thing, the LLaMA model; Databricks released their Dolly model; and then this Meta image detection thing. It's like, oh, the world can't move this fast.

[03:14:19] **Matt Wallace:** It's crazy. So,

[03:14:20] **Anna Claiborne:** Well that's,

[03:14:21] **Matt Wallace:** don't stop and blink or you'll miss it.

[03:14:23] **Anna Claiborne:** Yeah, and that's what I mean. Humans just aren't capable of keeping up with this. We're not programmed to, so it's gonna get interesting.
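
For context on the API launch Matt describes: a minimal sketch of what a ChatGPT API call looked like around the March 1, 2023 release, using the pre-1.0 openai Python package of that era. This is illustrative only, not the actual script discussed in the episode; the API key and prompt text are placeholders.

```python
# Minimal ChatGPT API call as it looked with the openai package circa March 2023
# (pre-1.0 interface). Illustrative sketch only; not the script discussed above.
import openai

openai.api_key = "sk-..."  # placeholder: supply your own API key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # the model exposed at the API launch
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize this transcript in one sentence."},
    ],
)

print(response["choices"][0]["message"]["content"])
```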

[03:14:34] **Matt Wallace:** All right. Well Anna Claiborne, until next time. Thank you. 

[03:14:39] **Anna Claiborne:** Thank you so much for having me on.

[03:14:41] **Matt Wallace:** All right. Thank you.



The evolution of infrastructure as a service
Anna's journey from banking to genetics and finally tech
The future of computer science: AI and cross-functional domains
Matt Wallace's insights on linear algebra and its application in coding
Anna Claiborne on Nanopore sequencers: pocket-sized DNA sequencers
The impact of machine learning and GPT models on APIs and automation
The evolution of Justin TV into Twitch: how gamers shaped a platform
The future of transportation: unmanned robo sky taxis
Machine learning's potential in solving complex problems: from plastic waste to cold fusion
The Fermi Paradox: theories and possibilities of extraterrestrial life
Reflection on the rapid pace of technological advancement and its implications for the future
Matt and Anna's discussion on the rapid pace of technological advancements: GPT-4, the ChatGPT API, plug-ins, Facebook's LLaMA, Databricks' Dolly model, and Meta's image detection