Curiouser & Curiouser

Making Sense of AI: Trust, Scale, and the Human Role

Season 1 Episode 1

AI is moving fast, but good judgment still takes time. In this premiere episode, Mo Sadek and Julie Tsai explore AI, security, and why curiosity and human perspective still matter as technology scales.

🔗 Podcast: https://alice.io/podcast

Follow the show so you don’t miss the next episode.
New episodes every two weeks. Stay curious.

00:00
I do believe that the real win about all of this is freeing up humans to do more of the deep human thinking that we do best in terms of synthesizing information, prioritizing really quickly sometimes around things that have not been explicitly defined. We need to set a

00:16
clear constraint, like work in this sandbox, not in this other one. That second area is like workflow processes, how organizations and humans work and getting a better picture on that. But in order for it to happen correctly, humans still need to be in the mix, closely inferring, hey, that is a good description of a process in our organization, or this is not, because this is making way too many assumptions. If AI has ever made you stop and think, wait, what is happening? You're not alone. I'm Mo, and I'm a security researcher asking the same questions.

00:45
On Curiouser and Curiouser, we're having open conversations with experts, researchers, and leaders working at the edge of this space, talking through how AI is taking shape, what's shifting, and how people inside the work are thinking about it as it happens. So join us and listen in as the conversation takes shape.

01:04
Welcome to Curiouser and Curiouser. My name is Mo, and this is our first episode. I'm a security guy. I don't know if I'm an engineer, a project manager, a product manager, or an incident responder, as most people know me, even though I've never formally been in incident response. I think it's just something all security people have had to do. And over 10 years, I think, unfortunately, my career has been a little too long and we've seen way too many...

01:34
Too many weird things. I remember the first wave of cool things happening in security. It was cloud, and we saw all these jobs get created and all these jobs disappear, and people taking on all this hype and all these new vulnerabilities. And then a couple of years later we've got all the zero trust stuff, right? A couple of years later we see platform consolidation and all these vendors buying up other vendors. We are in a very interesting world in terms of security, and

02:04
one thing that I've learned as a practitioner: instead of asking what the heck is going on, it's asking why things are going on and how things are happening, and becoming more curious about things as they occur in our world. Curiouser and curiouser. I would say so. So for our first episode, I thought it'd be really important to bring on someone who is,

02:27
in my opinion, the embodiment of what happens when you are too curious. And I'm really excited to introduce Julie Tsai. Julie has been a practitioner for at least 20 years. Yeah, that's right. At least. But I think you've had this really interesting background, and you've gone from being that super technical person, really low level in the weeds. Yep. And you

02:55
jumped around and developed this interesting skill set across a bunch of really interesting orgs. And by the end of it, you ended up not only leading a company through a couple of successful exits, I think, and now you're...

03:10
actually the tastemaker. I think it's a cool title. If you don't have a plaque with that, we'll make sure you get one. I like that. I like that a lot better than the concept of going to the dark side. Although maybe that's just part of the gig in security. We're always peeking around the corners at both the good and the bad. Yeah, exactly. I think you're a really great first person to have.

03:37
And I say that because there are so many things changing right now. But I think it's really important to take a step back and, instead of focusing on what is happening, focus on how we got here. And I think you are one of those people that has that big-picture vision. And we can talk about exactly how we got to the place where we're at. Which I've been avoiding saying this entire time.

04:05
AI. Yeah, that was admirable. I tried my hardest not to. It was going to be there. Yeah. So I guess we can start off with just one thing: why don't you tell us a little bit about your own experiences, rather than through my lens? I was definitely driven by curiosity through most of my career, just wanting to understand how things work. I came into this field

04:34
not expecting to be in security, not expecting to go as deep as I did into the technology. I found that when I was coming out of school, the most interesting things were happening on the internet, basically Web 1.0. So you can kind of clock where I was in terms of time. But it was changing everything around us. Originally, I had gone to school planning to be a journalist, where I could satisfy all those different

04:59
interests in a lot of different things, but it was really clear that there was this kind of tectonic shift happening in how people were both creating and distributing information. I knew, okay, well, I have to understand more of this. And also, coming from a technical and engineering family, I've always had a core belief that the how really does make a big difference, that the how actually will influence the range of

05:29
what is going to be, and also people's lens on the rest of it. So that worldview ends up getting shaped a lot by actually touching the stuff that you have to work on, or understanding how those things fit together. And so I spent a lot of my career as a sysadmin, about 13 years, and then a lot of years doing basic uptime and reliability, often working in teams that didn't have separate security teams. We were just doing security as part of the job.

05:58
Over time, I had found that a lot of the messiest problems I couldn't stay away from, that were also critical for the org, ended up being in security. So after probably kicking around for about 15 years in full stack, you know, in one of my interviews at a very large company,

06:18
there was a really insightful VP who said, I think you should go talk to the security chief. He'd be really interested. I said, no, security has its silos and I like seeing all these different things. But I ended up talking to him and really liking what the teams were working on and the impact that they could have. Being able to apply a lot of the DevOps knowledge, which in its earlier iterations was in some ways one of the first major

06:47
kind of autonomous-agent developments in our business, and really liking the impact and the chance to pull everything together, and to see, in this landscape of all these different things happening with all this complexity, where the major critical issues were happening, and how could we solve not just that but improve the whole as we go, you know?

07:12
Because it was those moments, we have that saying, right, never waste a crisis. It's those moments where you can kind of shake the org awake, shake different practitioners awake, in terms of, hey, you know, if we turn around and look at this in a different way or start peeling it back into fundamentals, you can get a whole bunch of different things going at the same time. So that's kind of what pulled me into security. And I think that we are in

07:39
this really interesting era right now. I like to think about, from my years of being part of technology and engineering, this idea of punctuated equilibria, where you have these big breakthrough moments. And I think we're at the ascent of one of those in terms of the scale and types of impacts that AI can have.

08:06
And then things will sort of shift and reintegrate, and there's a new landscape and a new reality, but that's still informed by the old ones. So you have these different moments in time that really make a difference. And I think we're at the onset of one right now. Yeah. So it's kind of tough to think about how we moved from, you know, doing all these things, and security being this very much hands-on

08:32
piece, right? Where we all really need to be deep in the weeds, and we all need to be building all these things, really at the ground level. I've noticed that, at least with AI and some of the things that are certain to come, practitioners are finding ways to, I wouldn't say become less hands-on, but maybe change the way and change where their focus is going. So it's interesting to see where the actual value prop for

09:00
AI, and specifically for the security industry, is. So from your perspective, where do you think you see the most, or where have you already seen the most, value for it? Yeah, absolutely. I think there are two major places that people who are in the space should continue to push really hard on. One is that, as you know, information security for a while now has been suffering from a sort of flood of too much information, too many tools.

09:30
Most teams are nowhere near staffed up or equipped enough to manage all of that information. And so very quickly, when you're trying to instrument a new infrastructure, you start bringing in some tools, and there's the massive information that everyone has to deal with in terms of just volume of logs, volume of interactions, volume of

09:50
different identities, and then where it is going out into the larger world, right? It becomes very quickly untenable for most teams. These are machine-scale problems now, right? Because we're seeing this sort of shift. People like to make analogies about how the different industrial ages apply to computing, and we can think of it a little bit as moving past that first and second stage of manually created automation, to automation and robots that create themselves, right? And

10:19
I think that what has happened with most teams is that, with all of the new tools and information and logs, if you don't get ahead of it by using the technology smartly or knowing where you need to dial in, it can sometimes feel like a liability trap for teams, where they have too much information and they're just never going to get ahead of it. So I think that there is a

10:46
really clear mandate: AI is very good at processing large amounts of data. So let that become part of what's informing the organizational brain, the industry brain, around security, and go from there. Now, that said, I do think that a lot of the quicker actions that some of the companies are taking in terms of headcount and org are

11:14
possibly premature. I think a lot of it comes back, in my opinion, to the pressures that leadership have in terms of showing quarterly profits and yearly profits, and being able to say, we're going to achieve these automation wins and cut this headcount. But the reality is that all of that automation and uplift needs tuning. It needs refinement. The airplane is still being built while we're flying it.

11:44
What's really essential is that the practitioners and the experts are still close at hand, not just to refine on quality, but also to keep asking: are we building in the right assumptions? A lot of times, just when trying to work with some of the AI on less well-defined tasks, it will make certain assumptions about what it is that I'm looking for and try

12:09
too quickly to be useful, too quickly to say, here's your answer. And it's like, no, actually, the relevant signal in all of that was the ambiguous word in there. And so it's figuring out, hey, we're at the stage where AI, and the people who are helping to create what it's producing, still need to be trained to fit us rather than us to fit it. So I think that just the scale problem is a huge one.

12:37
I think the other insight that I heard recently, from Professor Stuart Evans, is that, hey, we're at this place now with agent automation and what AI can do where it's not just about the data and analyzing it, as has been the game, I think, for the last maybe 10 years. It's really at this point about systems of action, not systems of record. And for that, you need really good insight on

13:06
your workflow processes, how an organization works, and, I would argue, even the hidden pockets of what's going on that aren't all clear in the documentation. You know, actually, such-and-such process only works because maybe you have two or three very seasoned people who are working directly with the head of production on changes, kind of overseeing it. So what is that? What intelligence are they bringing to the table in terms of being able to spot-check when there's an issue or not? And

13:35
what, when we offload some of those things from them, are they now able to do to really improve the organization and the processes? We should be thinking about what that is freeing them up to do. Because I do believe that the real win in all of this is freeing up humans to do more of the deep human thinking that we do best in terms of synthesizing information, prioritizing really quickly, sometimes around things that have not been explicitly defined,

14:00
and being able to say, actually, the machine needs to work like this, or here's a place where we need to set a clear constraint, like work in this sandbox, not in this other one. So I think that second area is workflow processes, how organizations and humans work, and getting a better picture of that. There's a lot of potential there for offloading some of the drudgery and things we don't think about, but in order for it to happen correctly, humans still need to be in the mix,

14:28
closely inferring, you know, hey, that is a good description of a process in our organization, or this is not, because this is making way too many assumptions. It's kind of interesting to see how, not just in security, right? Yeah. How the entire tech industry has adopted these types of philosophies around humans still being in the loop. Yeah. But it's not how you would initially imagine it.

14:55
So if you look around, even on LinkedIn, right? Actually, if you go and search, I like to look at what people are hiring for. It tells you what teams are doing, what they're really focused on internally, and it gives you a little bit of signal into what roadmaps sometimes look like. And I've seen a lot of these tutor jobs come up, hiring humans to do an AI thing

15:24
or teach an AI something, some skill, right? And it's never focused in a single domain. There are teachers, there are accountants, there are lawyers, right? There are all these different types of things. And when you think about it, it's because we're running out of data to train AI on, right? We're at a point as humans where there's only so much written stuff we have. And eventually the content that we're starting to

15:53
create is now AI-synthesized, right? So we're losing a bit of originality as time goes on. So how do you get it? You explicitly hire humans to do a human job, and you teach AI how to do the human job in a new way every single day. Getting back to the original source. And another thing I recently heard is, if you look at a GPU, comparing a GPU to headcount,

16:23
right: for the same amount of money that you'd spend on maybe one or two humans at, like, a Bay Area average, you could get a couple of GPUs, or quite a few GPUs, right? And you have like 10,000 people right there, in terms of AI agents, this, again, scalable resource that never sleeps. It doesn't ask for PTO. And I heard a funny joke that it's now a write-off,

16:53
right? As a business expense. I have no idea if that's true or not. And you look at the people who are affected by this the most: it's not going to be the people at the top who are doing the thinking, right? It's the people who are entering fields. And there's a lot of concern around, how do we help these new folks get into something, right? How do we train them to do something? And it's a little bit scary to think about when you see leaders talk about, well, instead of

17:22
having this new headcount, why don't we validate and see if AI can do it first? Whereas this is, I think, a perfect opportunity to bring in someone new to the field. It's not like companies are exactly creating environments where you can retain talent for many years at a time. I mean, we've seen talent get cycled very quickly in the last couple of years because of AI.

17:46
I think we're at a very interesting dichotomy between the practitioner and the organization, as well as the technology, where they're all kind of coming to an impasse in some way, where it's like, how do we make the stakeholders happy? How do we make people happy? And then how do we actually utilize people to the best of their capabilities? Yeah, completely. It's interesting. Nowadays, I think that I get hit

18:10
up on LinkedIn more often for colleagues' kids rather than for themselves, looking for those initial opportunities. And I think it does get to, you know, some of the thinking in orgs that's a little bit short-sighted. I am a believer that there are certain lessons we can learn from history: it's not going to repeat itself, but it rhymes. There will be parallels. And if I think back to what was happening with

18:38
cloud, and even Web 1.0, aside from the decimation of certain industries, what ended up happening, at least in the sysadmin world, let's say for cloud, is that I think the jobs ended up getting squeezed up the stack, where people would start becoming more DevOps- or software-aware, or they went deeper into the stack, where they were working for the bigger cloud computing companies, like the Googles, the Amazons, the...

19:07
let's say, that space. The market was changing. There was still work that needed to be done, but it was changing. Or people went out laterally, spread into these other areas. And these shifts, that was what I would call an incremental change. If I think about another one, right, Web 1.0 had, and even today has, massive, massive impacts on, let's say, journalism, right, which used to be a whole different industry. And, you know, there'd be institutions and people who kind of worked their way up.

19:33
And I know because I kind of apprenticed in that area a little bit when I was in college. But if you look today, there's way more content than there's ever been, right? In a lot of ways, it democratized both the access to that information as well as the content creation. People of all stripes have access to creating something that's considered commercial or production-level quality and being as big as they want to be. And we see that with, you know, people being able to make names for themselves

20:03
very, very quickly, in months, not even a year, whereas in the older days you might have worked for a decade to achieve a name somewhere, right? And so all of that is to say that, you know, it doesn't change the driver of what needs to be done. What needs to be done is: companies are trying to make things go more efficiently. People are trying to automate away the drudge-work parts of their jobs to free themselves up a little bit.

20:32
And all of those things, the opening up of time and the opening up of different facets that are coming together and weren't seen before, can open up whole new fields. And I think we're hearing bits and pieces of it, and we're at the early stages now. You know, let's say doctors being able to access LLMs trained on particular disciplines and diagnose an issue much more quickly for a particular patient, especially people who are

21:02
outside of major MD hubs. Maybe they have access to information in a whole different way that wouldn't have been possible before. Maybe legal cases can get expedited more quickly, in terms of helping people process not just the gritty work of preparation, but also helping expedite the solutions faster. Let's say being able to say, 98% of these cases with this kind of information are going to go in this direction. Maybe it will get people to where they need to land ultimately.

21:32
You know, so I mean, there are a lot of things where you can see, hey, if that can make something go faster, it doesn't just free people up for idleness. It frees people up to think about things in a different way and apply those efforts to, what's that next thing, you know? Yeah. And I think you bring up a couple of really, really interesting points, especially around enablement, right? Yeah. Where AI has for sure enabled people to do some amazing stuff. Yeah. I know for me,

21:57
the last time I did a coding interview, I was a junior-level coding person, and it was a couple of years ago. Nowadays, I've spoken to a couple of people who are doing coding interviews, and a lot of it is, oh, well, we're actually asking people to use AI in these interviews and then be able to explain how it's getting to that solution and why it actually works. But there's that part, that second part, which I think is

22:26
the piece we're losing the most nowadays. A lot of these solutions that you're talking about, and the things that people are expecting AI to do, right? Being able to read content, summarize it, or contextualize data and then give you an accurate end result, is not always going to happen, right? There's still this kind of fuzzy, do we actually feel good about the results? Do we feel like the quality of this is

22:55
good? Do we know that it's foolproof? Again, these are questions that are up in the air, but for businesses, sometimes it's very much a, well, this is a make-or-break moment. But specifically, my friend was recently on a podcast; she works for Raspberry Pi. Raspberry Pi has this massive education program, and they do a lot of community service. And one of the interesting things that she brought up on the program

23:25
was that with AI, we're kind of losing this piece of education, which is the struggle, right? So right now, with a lot of these A-to-Z solutions, we miss out on learning the rest of the alphabet. And we're kind of starting to just accept Z. And as we see this in orgs, we're seeing things go wrong. Because eventually, enough times, you keep stacking, stacking, stacking a bad solution or a bad outcome that you're not really catching. Eventually it comes down.

23:55
So I guess in that sense, when we look at solutions, let's specifically look at security really quick. What is the right balance of getting things done and then learning how to get things done? Or when should a solution maybe not actually solve your problem, but make it easier to solve?

24:14
Yeah, I think that's a really good point. You know, I'm around enough school-aged children to watch a bit of how they learn nowadays. And where I've come down on a lot of this, in terms of the basics of when is it OK to take a technology assist versus doing the work yourself, is that you should take the technology assist when you actually don't need it on some level, because now you already know how to do it. You know, I think about that transition when you tell kids, hey,

24:44
now you're going to use that scientific calculator to do your math, because we want you to do all this advanced stuff. But the point at which you give them that calculator is when they already know how to do their basic arithmetic. Because if they don't learn it in the beginning, they're probably never going to learn it later. And so when I think about how people are ingesting information, I think a lot of it, both from a technical and a layperson standpoint, is getting back to understanding why I trust something.

25:11
It's great that it gave some nice, smooth, shrink-wrapped answer, but tell me, why is it I trust that particular piece of data or that conclusion? For instance, I was recently looking for some validation on different things that had taken different names, and who they were owned by, that kind of thing. So I asked, is this a name that is owned by someone? And it would come back with an answer like,

25:39
you know, it's likely X, Y, Z. But the tell in there to me was the "likely." It's like, okay, it's not for sure. You're trying to give me an answer that sounds conclusive, but the truth is that, you know, this hasn't actually been taken. So I think getting people in orgs back to that point of understanding, why do you know this is true? And in the same way that a good manager, or a good process-check person,

26:08
knows how to size up something from a spot check. It's like when we're doing security testing, right? Sometimes we can't test every single component. So what we might do is test a sample, right? And say, OK, based on this, I'm making some assumptions about the quality, and I'm going to keep checking. And I think that with how we use AI, part of what needs to be built in is the sense of,

26:34
when am I double-checking on whether it's still on the track of giving me insights, versus it's now starting to go down a derivative, more skewed track? This is the kind of idea when we think about information getting watered down over time. And I think this has shown up in a couple of AI studies, in terms of something going further and further off the rails depending on which way you were steering a conversation.
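
[Editor's note: a minimal sketch of the spot-check idea described here — instead of reviewing every AI-generated finding, hand-verify a random sample and estimate whether the batch can be trusted. The findings, the verify_by_hand step, and the 10% threshold are all hypothetical.]

```python
import random

def spot_check(findings, verify_by_hand, sample_size=20, max_error_rate=0.10, seed=0):
    """Hand-verify a random sample of AI output and decide whether to trust the whole batch."""
    rng = random.Random(seed)
    sample = rng.sample(findings, min(sample_size, len(findings)))
    wrong = sum(1 for f in sample if not verify_by_hand(f))
    error_rate = wrong / len(sample)
    return {
        "sampled": len(sample),
        "error_rate": error_rate,
        "trust_batch": error_rate <= max_error_rate,  # if this flips, keep checking
    }

# Hypothetical usage: findings are (id, is_actually_correct) pairs, and "hand
# verification" here just reads the ground-truth flag standing in for a human review.
findings = [(i, random.random() > 0.15) for i in range(500)]
print(spot_check(findings, verify_by_hand=lambda f: f[1], sample_size=30))
```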

27:03
Right? You know, where it's like, OK, it's picking up more and more derivative things from the data that's there. As you said, the massive data has been analyzed, but it's not picking up new context. It's not picking up new aspects of the whole view. And I think this is where we're starting to see more of a push towards

27:27
smaller, more specific samples, real-world samples, things that are tied to more empirical points to verify. It's like, what are those tethers to the truth? And when are we checking in? And how do we teach the AI, but also ourselves as consumers, when something is coming across as high-integrity information? Is this high-integrity information or is this low-integrity? As a human, I might naturally pick up that I have

27:55
low confidence about this thing because there are all sorts of inconsistencies in a situation or someone's reputation. And then the AI is methodically trying to absorb that intuition. But there might be a whole data set or whole area that it hasn't absorbed yet, and you may or may not have visibility. So getting back to knowing, OK, what's in that sauce? What data sets went in there? What were the weights?

28:23
I hear the argument from some that the traceability or the explainability won't scale, that it's because of the billions and trillions of calculations going on that we can't reduce it all down. But I think this gets back to having meaningful abstractions or meaningful summaries at the end of the day. There has to be transparency, and it has to be human-readable transparency. That's right. Just because it's operating in a way that we don't understand doesn't mean that we can't understand how it's getting there. Yes.

28:53
Right. That's right. And I think this actually explains a trend that I saw last year. Alex Deimos had done this really good talk, and one of the points he made was about the adoption of AI, right, which teams are adopting AI the fastest. Can you take a guess at the top one? Incident response. Right. Yes. DART and CERT teams are some of the fastest-growing teams for adopting this stuff.

29:23
It's because they have so much data, right. That's right. A lot of the things that they have to do are operationalized. It's already very transparent how you respond to certain types of things. For years, we've had organizations that have been doing anomaly detection, right? Basically machine learning, which again can morph into AI if the vendor is actually doing it properly. Right. That's right. But again, these teams are really well positioned for it. On the other side,

29:52
one of the lower-adopting teams for AI is one of my favorite teams, application security. I love that. It's my bread and butter and I love it. I actually think that's one of the biggest gaps and really interesting things, because I think solving the AppSec problem actually solves a lot of other things. Not necessarily around SAST and DAST, even though there are some really great AI solutions in those spaces.

30:22
I think the threat modeling piece is actually one of the more interesting parts for me. When we look at how offensive AI frameworks have evolved over the last couple of years, with auto-attack frameworks and even some of the open-source frameworks that have been created by other countries, with a lot of those we're seeing more ability for AI to go and identify different

30:51
areas in a perimeter or an application and go plan an attack sequence, right? I would love to see this utilized in an application security kind of way without actually having to hit an environment or really exploit, right? Instead being able to map and go through it. And in this case, I think AppSec has been a little bit slower to adopt it because,

31:18
honestly, one, there's too much work for AppSec professionals. Yeah, that's right. Just drowning. Literally, all the time. But two, on the other side of that, there's an expertise that's needed for these very niche areas. And the great part about AI is that it knows all the known patterns, but it doesn't know the things that haven't been found by humans before. Yeah, that's right. So it can find patterns, but it can't find novel, new... or...

31:46
At least it's not finding novelty as much as we'd like it to. Let me ask you this. One of my contacts, Powell, likes to say, you know, generative AI has no place at defensive scale. And I understand what he means, because I want to be able to trace it back. I want to know what happened. What do you think about generative AI as a source of creativity for those attack generations? Is it useful or is it too random? No, I think it's really good, right?

32:13
If you want to talk about random, we can talk about fuzzing, right? Sure. Yeah. As a really low-level concept, it's just, oh, we throw a bunch of different strings, or we have some known patterns that we are going to use, or we have a couple of characters we want to try, right? Yeah. Eventually something's going to stick. Yeah, that's right. It's just super, super simple. It's going to happen. In this case, human creativity ends at what you know as an individual.
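
[Editor's note: a minimal sketch of the dumb fuzzing described here — random strings plus a handful of known tricky patterns thrown at a target until something sticks. The parse_input target and the pattern list are hypothetical stand-ins.]

```python
import random
import string

# A few "known patterns" that often trip up naive parsers (hypothetical examples).
KNOWN_PATTERNS = ["' OR 1=1 --", "../../etc/passwd", "%s%s%s%n", "\x00", "A" * 10_000]

def random_string(max_len=64):
    """Generate a random string of printable characters."""
    length = random.randint(1, max_len)
    return "".join(random.choice(string.printable) for _ in range(length))

def fuzz(target, rounds=1000):
    """Throw known patterns and random strings at `target` until something sticks."""
    crashes = []
    for i in range(rounds):
        payload = random.choice(KNOWN_PATTERNS) if i % 10 == 0 else random_string()
        try:
            target(payload)
        except Exception as exc:  # any unhandled exception counts as a hit
            crashes.append((payload, exc))
    return crashes

# Deliberately fragile example target, standing in for real parsing code.
def parse_input(s):
    if s.startswith("'"):
        raise ValueError("unbalanced quote")
    int(s[:2])  # blows up on non-numeric prefixes

if __name__ == "__main__":
    for payload, exc in fuzz(parse_input, rounds=200)[:5]:
        print(repr(payload[:40]), "->", exc)
```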

32:40
So if you can't think beyond the things that you know, you're kind of done, unfortunately. This is a great thing about AI, right? You may not be an expert in all of these other fields. You may just know enough, right? But you can go and expand your set of knowledge very quickly. So from the education standpoint, going back a little bit to, you know, stopping and struggling: AI is actually great at helping

33:09
an offensive security person struggle. It's a really good way of teaching yourself something fast, then being able to iterate and POC it very quickly. So if you know enough about maybe writing your own tools, or, let's say, for instance, one of the fun things I had been playing around with: there's this open-source project from a guy, ElderPliny or something like that.

33:37
It's named after the beer. I was wondering. It sounds like the beer. It does. He did this really cool thing where he has this tool that goes and does a bunch of these permutations on characters, right? So you could go and say something in a different language, or you can use different characters from different Unicode ranges, all this stuff, right? Yeah. And it's so good at prompt injection, but not by itself, right?

34:04
But if you know the type of prompts you're trying to make and you just need to change a couple of characters to get everything else through, it's really interesting to see, okay, well, what if I just swap out a couple of characters for something that looks like this, right? I know in a couple of the things that our team has found, we got through a guardrail by using invisible characters. And one of our researchers actually said this really great thing,

34:33
which was: when we're doing offensive security, at least in the spectrum of AI, you're just asking it to do something it already wants to do, right? You're never asking for something that it cannot do, and you're not asking it to break its rules, right? It's always something that's within the scope of its ability. You're just asking it in a way that it can't understand, or isn't supposed to understand.
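
[Editor's note: a minimal sketch of the character-permutation trick described above, assuming a hypothetical blocklist-style filter — swap ASCII letters for Unicode look-alikes and sprinkle in zero-width characters so the text still reads the same to a human but no longer matches an exact-substring check. Real guardrails are more sophisticated; this only illustrates the mechanic.]

```python
# Map a few ASCII letters to visually similar Cyrillic code points (homoglyphs).
HOMOGLYPHS = {"a": "\u0430", "e": "\u0435", "o": "\u043e", "p": "\u0440", "c": "\u0441"}
ZERO_WIDTH = "\u200b"  # zero-width space: invisible when rendered

def obfuscate(text, zw_every=3):
    """Return text that looks the same to a human but breaks naive substring matching."""
    out = []
    for i, ch in enumerate(text):
        out.append(HOMOGLYPHS.get(ch.lower(), ch))  # swap in a look-alike where we have one
        if i % zw_every == 0:
            out.append(ZERO_WIDTH)                  # sprinkle in invisible characters
    return "".join(out)

# Hypothetical blocklist-style guardrail that only does exact substring checks.
BLOCKED_PHRASES = ["ignore previous instructions"]

def naive_filter_allows(prompt):
    return not any(p in prompt.lower() for p in BLOCKED_PHRASES)

if __name__ == "__main__":
    original = "please ignore previous instructions"
    mutated = obfuscate(original)
    print("original allowed?", naive_filter_allows(original))  # False: the filter catches it
    print("mutated allowed? ", naive_filter_allows(mutated))   # True: look-alike text slips past
    print(repr(mutated))
```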

34:58
I could see that. And this is where I think it's going to be really interesting to see: what are the things that AI is generating that are new, versus what is it that humans generate that is new? And I'm going to challenge a little bit this idea of humans only being able to create what we know. Because I think that if we go back and look at what the major breakthroughs were in any realm, people were pushing through something in a new way.

35:28
And if I get back to what it is that drives true creativity, useful creativity, I think there is a combination of the curiosity and the discomfort, which is driving the purpose, and experience. I'll use an example of Mark Burgess, who is the father of a lot of the DevOps tooling and automation and early versions of agents on systems. He

35:53
got the idea for a self-healing system that would enforce what it knows to be correct, even beyond an explicit instruction, by thinking about people getting sick on an airplane, a bit of a petri-dish environment. Or if I think about when penicillin was discovered, that accident where the scientist, I think, discovered it after some mold occurred in the lab environment. And I think what's

36:23
really key there is this combination of different experiences that are, I would say, today outside the realm of what AI can experience. You know, experiencing that combination of the physical and the biological constraints, and how it motivates a different thinking about systems and the purpose of what one is trying to do. Out of all the

36:49
things that were happening on that airplane or all of the things that were going right or wrong in that lab, why did that person or that consciousness pick that one thing as being relevant? And that, I think, is a core feature of human intelligence. Being able to intuit or to prioritize something quickly from an almost infinite amount of inputs that haven't been described in consistent ways.

37:18
Whereas when we're training AI, AI is able to hit this level of scale and complexity on a particular type of data that is created, digital data that is organized in a particular way, that it knows how to ingest. We as animals are picking up a lot of information from different places. And when you think about children or animals or how they interact with threats, they do very, very quick anomaly detection.

37:45
This is unfamiliar, this feels a little bit off, and I might be able to pick up that it was a sound, it was a smell, it was a rhythm in how someone did something. And why did I pick that one thing versus everything else? Yeah, it's a lot of intuition and being able to see patterns and entropy, right? Yes, that's right. So we're good at pattern detection. That's right. And it's kind of biological as well, right? That's right. So it's being able to see, there's the pop-psych book Blink, right?

38:13
Being able to know that within 30 seconds of an interaction, you're able to understand if this person's going to be your best friend or not. Something like that, you know, right? You just have that gut feeling. So you're right. There are things that only humans can do, because that intuition is the one thing that keeps humanity, I think, ahead, right?

38:38
I think so. And again, I'm not pro-robot or anything. It's just something that's happening in the world, and we are deeply in it. Yeah, we love AI, just in case, of course. But there's a company right now that is just doing AI-based research. Yeah. And what I mean by that is using AI to do research tasks. Yeah. Right. And that's all it does. So you ask a question, and it's tens of thousands of dollars to make a query,

39:05
because of the amount of resources it takes to run the models and actually go through all the data sets. It takes like 24 hours to get back to you. And, I'll probably have to link to a resource of some sort, but they were able to actually make some breakthroughs specifically in the field of diabetes because of it, right? And it kind of leads you to think, can

39:34
AI recreate the human experience at some point? And I think it will eventually. And what I mean by that: I think today, and again, I wasn't alive then, but if I had to think back to when a computer filled a single one-bedroom apartment, yeah, now we're at that same place with AI. The amount of resources it takes to do this one thing is incredibly huge. Right.

40:04
It's going to eventually be really cheap to do this one thing, right? I imagine that... It just shrinks down. Exactly. I imagine that as supply chains open up and people actually start using AI to make better chips, right? Right. And we're able to fit more on that little piece of silicon, right? Or whatever the next thing is; I'm not in hardware. But it should be expediting all those patterns very fast. Your point about why is it that SOCs and CERTs are

40:30
getting that benefit first, because we have procedurized certain things, and then you can just apply some scale with some parameters of creativity, right? And so, absolutely, what it should be doing is being able to rip through these masses of information and events that are happening that normally might take, you know, armies of people or years to put all that information together, to land in the right

40:58
place, right? So that, I think, is a huge step forward for us in so many ways. You know, there are many, many areas where you could say, hey, if we had the ability to apply this analysis quickly to this huge mass of data, you can get to a new research insight or better decision quality, right? And I think that's where we can all benefit from it. But I think that,

41:23
in addition to this idea of what we are bringing to the table, I think we also have to be aware of where we are going to make sure we are not abdicating. I had a friend in school who used to talk about the slippery slope argument. Like, OK, we're on the slope, it's just going to keep going. She said, that's BS. I get to choose where to stop on that slope. I get to choose. So on some level, you know,

41:53
we are the ones who are launching the AI revolution, applying it to all these things and making decisions on whether to make decisions with it or not. In that process, we can't abdicate our own sense or our own knowledge of when to be there, or when to say, okay, this right now needs to be that human decision, or I'm in the process of improving this, so we're going to stay close. Yeah. Just to wrap up,

42:20
right? Yeah. We've gone over all these things that AI can and can't do. What's something that you're looking forward to doing this year, or something you're really hoping to see this year? I would love to see AI help make advances, especially in medicine and healthcare. I would love to see those inventions; you know, a lot of the stuff around genetic information or, you know, the immunotherapies that are happening seems to be making these massive, massive strides

42:50
in tailored and personalized medicine, or even helping people to get through insurance bureaucracy. I feel like that would be a really human-centered thing. Of course, in the cyber world, I have hope that we can get a little bit more, or a lot more, of the advantage back to the defenders. But I think it's going to come back to not just using AI, but also

43:15
getting back to first principles on what it means to be smart about the attack surface, right? Because a lot of us have felt for a long time that it's that leaky bucket, where you're trying to get the water out of the boat, but it just keeps coming back in. And so I think that if it causes people to kind of wake up a little bit and be like, wait, how are we doing this in a way that's just not sustainable?

43:43
I think that would be pretty huge. But I do believe through all of this, it's not a moment for hiding or being fearful. It's a moment for really understanding what's going into this, that we have created this, and to stay engaged to make it better. There is all this potential. There are all these things that we are fearful of, but at the end of the day, we're the ones who hold the keys on this.

44:11
Yeah, no, I agree. And I mean, again, I wish we had another hour to talk about, you know, all the great civic stuff that we can do. Yeah, yeah, yeah. On another one of these. Yeah. But, you know, exactly. Right. Yeah. So, sometime soon. But, you know, there's a lot that we can gain from this, right? And we really should be focusing on enablement, right?

44:38
That's right. Understanding the risks that are associated, and then making some smart decisions around those. Absolutely. I don't think we should be shying away from this at all. I think we should be going in head first, but we should be conscious and we should be cognizant, right? Yeah. Like you said, you should never let a good incident go to waste. Right. I'm hoping that we don't have to have one here. Right. I'm hoping that we can just keep moving forward. But I think that's what we've got for today.

45:04
Is there anything that you want to plug? Are you going to be anywhere? Do you want to share your social media? Sure. So if you're interested in getting in touch with me or seeing some of the things I'm involved with, I will be working on some events with Ballistic Venture Capital. We run a series called the Shift Now webinar, about application security and AI, as well as with AI Insiders in the Bay Area. I'm on email and on Twitter, and

45:31
you can find my contact information on julietzhai.net. It's always a pleasure to, you know, hang out with you and really dive into all this stuff. I feel like, in some weird way, we as security people can, you know, help people to understand what the components of trust are. I got into security because product is so hard for everybody to understand. And I think this is where

46:00
we have an opportunity to make product a lot better through quality, right? Yeah, that's right. Safety and quality in this age especially, I think safety and quality are one and the same. That's right. So when you can trust something and when you feel safe about it, that's it. So that being said, if you need me, it's needmo.info. Yeah, no, happy to. Thank you so much for letting us be here and hear your dogs and everything. Yeah, yeah. We're excited. Until then, see you next time.

46:29
Did this episode help cut through the noise? Like or subscribe so you don't miss what's next. Thanks for spending time with us. Until next time, stay curious.