Mahesh the Geek
What is mission-critical AI, and how is it shaping our future?
Join Motorola Solutions executive vice president and chief technology officer Mahesh Saptharishi as he and AI experts explore the science, the challenges and the incredible potential of AI when it matters most.
Mahesh the Geek
Bold, Responsible AI with Brigadier General (ret.) Patrick Huston
In the debut episode of Mahesh the Geek, Motorola Solutions EVP & CTO Mahesh Saptharishi is joined by Brigadier General (ret.) Patrick Huston, who shares his unique journey from military service to becoming a legal expert in AI and cybersecurity. Mahesh and Patrick delve into topics such as: Human-machine teaming, regulatory challenges, AI's impact on intellectual property law and fair use, the need for AI risk management standards, cyber risks, open source LLM security and licensing concerns, AI's transformation of work through automation and the potential for escalating errors with AI agents. General Huston’s insights are grounded in his message for those considering leveraging AI – be bold, be responsible and be flexible.
General Huston is an engineer, a soldier, a helicopter pilot, a lawyer, a technologist and a corporate board member. He’s a renowned strategist, speaker and author on AI, cybersecurity and quantum computing. He is a Certified Director in the National Association of Corporate Directors, he’s on the FBI’s Scientific Working Group on AI, on the American Bar Association's (ABA) AI Task Force, and on the Association of Corporate Counsel’s (ACC) Cybersecurity Board.
Let's geek out.
Follow Mahesh on LinkedIn: linkedin.com/in/maheshsaptharishi
Follow Motorola Solutions on social
LinkedIn: linkedin.com/company/motorolasolutions
X: x.com/MotoSolutions
Instagram: instagram.com/motorolasolutions
Facebook: facebook.com/MotorolaSolutions
YouTube: youtube.com/MotorolaSolutions
Never miss an episode by subscribing.
Please leave us a review on Spotify or Apple Podcasts if you liked what you heard!
Mahesh the Geek Ep. 1 Transcript
Mahesh (00:00)
I am obsessively interested in technology critical to solving for safer. And yes, that makes me a geek. As a technologist, I'm incredibly excited about the benefits of AI, but experience teaches us that ensuring positive impact requires us to be grounded in our understanding of risk and the problems we're trying to solve. With that in mind today, I have the privilege of speaking with General Patrick Huston, a truly fascinating guest with a unique background.
General Huston is a renowned technology lawyer focused on AI and cybersecurity. His path wasn't always focused on technology. His career began after graduating with an engineering degree from West Point before diverse experiences as an army ranger and helicopter pilot. He later pursued law school and a distinguished legal career. What makes General Huston's perspective so valuable is how his experience as a soldier, a lawyer, and a corporate board member informs his thinking on AI.
His expertise in AI legal matters emerged later in his career when he was chosen for the Pentagon's Responsible AI Board, becoming the Army's primary spokesman for AI and cybersecurity legal matters. Within the context of mission-critical AI in public safety and enterprise security, this blend of military, engineering, and legal experience directly connects to many of the topics we'll discuss today. I hope you enjoy this conversation as much as I did.
Patrick, welcome to the show. Thank you for being the very first guest on the Mahesh the Geek show. Really looking forward to this conversation. So let's get into it.
Patrick (01:40)
Hey, I'm delighted to be here.
Mahesh (01:42)
You have an absolutely fascinating background and I want to have you explain it in your own words to the extent possible. So maybe we'll start there. What led you to West Point?
Patrick (01:56)
I grew up hearing stories from family members who had served and it was clear to me that they were all very proud of their time in the military. And so I think at some point I realized I just want to experience that same thing. And then when I was in middle school, I read a book about West Point and I was hooked. So that was just the path I decided to take. And then once I got to West Point, it was a classical engineering school. So I just naturally fell into the path of engineering. There honestly wasn't a whole lot of thought that I put into that.
I'm embarrassed to say.
Mahesh (02:26)
So as you transitioned into the Army after graduating, describe maybe your early career.
Patrick (02:33)
Yeah, there were three kind of things or phases that occurred. The first was ranger training. And for me, that was incredibly powerful in terms of learning how far you could push yourself, and then also the importance of teamwork there. The second phase was as a helicopter pilot. And Mahesh, it turned out I was not a very good helicopter pilot, or very unlucky. Everything that
could go wrong seemed to have gone wrong for me. And so I started looking for some other option, and that was when I decided to go to law school. And I just felt very lucky that the army, the same employer, would allow me to try out so many different jobs, trying to find something that was a good fit.
Mahesh (03:15)
And over time, you became an expert on AI, cybersecurity, and even quantum computing. So describe that trajectory for me a bit.
Patrick (03:25)
Yeah, that was much later in my legal career and that was pretty accidental as well. The Pentagon was starting to invest heavily in AI powered autonomous weapons. And these are kind of the killer robots out there that can select and engage targets on their own. Obviously some pretty heavy duty legal and ethical issues associated with that. So they would need to have a lawyer on there. I turned out to be
the one that was chosen to do that. And that's how I ended up on the responsible AI board at the Pentagon.
Mahesh (03:57)
And did your background both as an engineering major and perhaps even as a soldier, did that help inform your thought process there?
Patrick (04:07)
Absolutely. What I found was that the engineering analytical process turns out to be quite similar to legal analysis. Legal analysis is just kind of going through, looking at a problem objectively, breaking it down, assessing it. It's the same approach you take in engineering as you do in the law.
Mahesh (04:25)
Now, fast forwarding many years to current time, you now sit on the boards of various companies. You're in the conversation in terms of what's the latest and greatest in AI, risks, benefits, et cetera. How have you seen AI evolve over the past many years?
Patrick (04:42)
I would say the conversation on AI has been a complete 180. There used to be kind of a, it was a niche topic that some people would discuss, but now it is a completely mainstream discussion. I credit OpenAI with the release of ChatGPT in late 2022 as kind of the big turning point there. I will say that my parents are both in their 80s and even they are talking about AI, so you know it's mainstream.
Mahesh (05:11)
Here at Motorola Solutions, we are very much focused on public safety and enterprise security applications. And when we think about AI in the context of public safety and enterprise security, we look at it through a mission-critical lens. And if I think about the definition of what a mission-critical team is, it's small integrated groups of experts trained to resolve complex problems in time-constrained, high-stakes environments where failure can have
catastrophic consequences. And now, through your military career, you have probably been part of various mission-critical teams. And I just want to get your thoughts, just from a people perspective: how are mission-critical teams formed and how are they trained?
Patrick (05:57)
Yeah, great question. I tell you that that definition fits the military's approach to this type of problem solving perfectly as well. It is all about teamwork and research shows us all that you need to assemble talented teams if you're going to solve really tough problems. Obviously you want smart people on those teams, but what you really need if you're going to solve complex problems is diverse teams. And I'm talking about cognitive or intellectual diversity. So
people with a variety of personal and professional backgrounds who look at problems and can identify potential solutions differently and just bring different perspectives. So really these multidisciplinary teams are super important when it comes to solving problems like this.
Mahesh (06:40)
And now if I connect this back to assistants of various sorts, whether that's Claude, Gemini, et cetera, oftentimes we as individuals interact with these AI tools in natural language. They get very close to passing the Turing test even in many cases. You could almost think of them as being a member of a team or a partner of sorts in whatever task that you're trying to perform.
If you can think of AI now as a member of these integrated groups of experts forming that mission critical team, what do you think from a reasonable expectation standpoint, what properties of a mission critical team should you expect to transfer over to an AI member of the team, so to speak?
Patrick (07:30)
I love this concept and I think it's such an important topic. I think of it in terms of human and machine teaming. There are clearly things that we as humans can do better than any AI system or machine. Think about things like leadership, common sense, empathy, humor, I think certainly. But we also have to acknowledge that there are things that the machines can outperform us on:
ingesting mass amounts of data, doing rapid data computations, or handling boring, repetitive tasks where you or I would just stop paying attention, which could get dangerous, and just stop doing our job. For me, the key is to combine humans and machines in ways that leverage the respective strengths of each. And I know that people out there are saying, hey, are you going to go with the humans or are you going with
the machines. And I really think that's a false dichotomy, because you have a much better choice: you can have both, and the best of both. You can combine them in a way that really does make the most powerful combination of humans and machines. And that is this concept of human-machine teaming, or human augmentation.
Mahesh (08:47)
I think that's very well put, just in terms of crunching raw numbers, analyzing raw data, and, to your point, the ability to pay attention. Humans watching video, for example, especially CCTV-type video, security video in other words, tend to get very bored after about 20 minutes of watching that video. And there are multiple studies that have shown that after about 20 minutes,
the ability to detect anything that is truly important drops to about 20%.
Patrick (09:23)
We've all sat there and done those "I am not a robot" click boxes and gone through the reCAPTCHA things. Would you want to spend your entire day going through and clicking out which ones contain an automobile or a bicycle? I wouldn't.
Mahesh (09:35)
Now, that also means that you trust the AI to detect, to be able to present, information that you do need to pay attention to; that you can trust it to be that entity that taps you on the shoulder and says, hey, there's something important happening there, pay attention. So trust, accountability, resilience, reliability: these are all very important ingredients in a good team, a good mission-critical team.
How should we think of these concepts in the context of human AI interactions?
Patrick (10:06)
I think of it in a very similar manner to how we would bring on a new human employee, someone fresh out of college. You don't give them the biggest, most important task that you have right off the bat. You kind of get a good feel for them. You start working with them, test them out, figure out what their strengths and weaknesses are, help them get a little bit better. Same thing when you've got an AI system. You've got to test it out a little bit, figure out, you know, what am I comfortable with it doing and what things are less reliable. I will say that in the legal world,
there is an inverse correlation between reliability and liability. So if something is unreliable and you employ it, you have a high risk of legal liability. So again, very similar with this concept of AI. Make sure you can trust it. And if you can't, then don't employ it for things that are important.
Mahesh (10:56)
If I pull on that thread a little bit, trust in people, trust in team members tends to develop over time. It requires a certain amount of successful interactions, understanding strengths and weaknesses, but both parties adapt. Both individuals or many members of a team interacting together, they all adapt to each other. Do you see AI and humans adapting together? And if they're both adapting together,
Do you see an interaction set there from a guardrail perspective, from understanding when the AI actually graduates to the next level of interaction, that being prescriptive, or is that something that is more organic in nature?
Patrick (11:39)
I think it's organic, and you've got to ask yourself the question: when are we going to switch reliance from humans to the machines? I have some thoughts on that. The default standard, I think, is once the machine starts outperforming humans, then you can switch your reliance, assuming you have some confidence in its ability to maintain that performance. The other thing that I think is a factor thrown in here is usually when you've...
Once you've trained people up, they've gone through the apprentice stage and they graduate to become a full blown, whether it's a computer scientist, whether it's a sales marketing person, whatever it is, you kind of trust them to go out on their own. AI is a little bit different in that regard because of the concept of machine learning. AI evolves over time. So what you have on Monday is different from what you have on Friday. And hopefully it's learning and evolving in the right direction.
But that's not always the case. Sometimes it starts taking a wrong turn. So you need to have some guardrails built in to reassess it periodically to make sure it's going in the right direction. And if it's not going the right direction, you pull it off, reassess, and take another full look at this before you re-employ it.
Mahesh (12:51)
I think that's a great segue into this notion of responsible AI. I think we hear talk of responsible AI across the board today, but it'd be good to dig into it a little bit. When we most often hear about responsible AI, I think we hear terms such as bias and explainability frequently. How do you define responsible AI, and do you think it actually goes beyond just bias and explainability?
Patrick (13:18)
I do think it goes beyond that, but I think bias is one of the most important aspects of responsible AI and it's the thing that tends to concern a lot of people. The other is hallucinations, just making stuff up. But the bias in AI outcomes is real and the source of it should not surprise us. These algorithms are trained on data sets. The data comes from historical human data in most cases and we as humans have our biases.
So the data sets just reflect our human biases. What you've got to watch out for in particular is minimizing the impact of those biases. And in some cases, the AI can exaggerate those biases and really worsen them, which is bad. You've got to know that bias is an integral part of the AI process, and you have to kind of account for it.
Mahesh (14:11)
And we as humans tend to be somewhat biased in our own opinions and approaches as well. And sometimes our strengths and our weaknesses tend to either appear or get amplified in moments of stress when our cognitive aperture is reduced, when we're trying to make a decision, perhaps a very impactful decision, but it is under a life critical situation. How do you think about
the role of stress and cognitive capacity at any given moment and how AI itself should be presenting a notion of confidence or a notion of uncertainty in a manner that is consumable to that user at that moment in time.
Patrick (14:58)
It's very tough to do, but I'd say it's very similar to how you assess another human being. And, you know, if you're going to look at me and assess whether I'm trustworthy, there are probably a number of factors that you're looking at, and many of them are subjective and hard to really articulate. Why you might assess me to be less reliable than someone else on your team could have something to do with how long you've worked with me or them, and all these sorts of factors that compile in. I think you run into the same problems when you're trying to assess
the reliability of an AI system. What I will say is that there are people who have tried to break this down into different categories with AI principles, which I think is, at a minimum, a useful starting point. I'm going to take Microsoft, for example, because they've got a list of AI principles that I think is very representative of what you see across the market, both in the commercial sector and in government sectors.
They've broken it down into six AI principles that you should at least consider. Number one is fairness. Number two is reliability and safety. Number three is privacy and security. Four, inclusiveness. Five, transparency. And six, accountability. And you'll recognize several of those words that you have already raised during our conversation here as things we are concerned about, or should be concerned about, with AI.
As I said, the Pentagon's AI principles are almost identical to that. And for any organization that is thinking about leveraging AI more, I would suggest to not reinvent the wheel. Just grab one of these sets of AI principles out there, take a good hard look at it, and modify it to fit your organization as a starting point.
Mahesh (16:46)
Now, as I think about each of those principles, how should I also be thinking about it from the standpoint of a country or a region that has a particular culture, a particular belief system? Do the very implementations of those principles vary in some substantial way?
Patrick (17:05)
Yeah, I was going to distinguish between the principles, which tend to be pretty universal, and then the application of the principles, the implementation, which does seem to vary quite a bit. Where we've seen the greatest consistency is in Europe. The EU has the strictest rules for data privacy. Here in the United States, the federal government has taken a hands-off approach. We're left with a patchwork of different state and local laws and regulations that
vary pretty significantly. You have really strict laws in places like California, Illinois, New York, and Colorado. And then, on the opposite end of the spectrum, you've got places across the United States that are relatively unregulated. This makes it really difficult for companies who operate in multiple states, because they're trying to comply with multiple frameworks that are out there. They're left with two choices. Neither of them is perfect.
One choice is to just pick the highest standard, like California's, and apply it across the board. Then you've got one simple standard that works everywhere, because you're meeting the highest standard. The problem is, if you go to a state like Texas where the standards are much lower, you are far exceeding the standard. You're going to have competitors who are just barely meeting the standard, but they're meeting the standard, and they're doing things without the same cost and overhead that you have. So they're going to be able to outperform you
in states where they have lower standards. The alternative approach is to take a different approach in each state. Of course, then it's hard to keep up with the laws. I've heard there are something like 600-plus pending AI statutes across the United States right now. These are bills pending implementation or adoption in one form or another. Keeping up with it is next to impossible. And then if you try to actually implement those laws
across your workforce, it can be really difficult. So not an easy task for companies right now.
Mahesh (19:06)
EU has one consistent approach that they're trying to go after in terms of regulation of both AI and more recently the EU data act as well. And then California passing its set of legislation. Do you think that there's a world in the future where some of these different philosophies on how to regulate AI will start to converge? And if so, what areas are likely to converge?
Patrick (19:33)
I think the answer is yes, or I'd like to see it. A few years ago, we saw several of the tech giant CEOs converge in Washington, DC, testify in front of Congress, asking for federal regulations in this area. That astounded me. I've never seen anyone go testify before Congress asking for regulation. And then it hit me why they were doing it. We talked about this patchwork of US laws. They were hoping to get a single federal law that would
kind of override these individual state laws and make it easier for compliance nationwide for them. We are seeing that that still is a possibility out there within the United States. And we are seeing a similar move across Asia, for Asian countries to adopt an EU-like consistent data privacy regulation or law. I think what we'll end up with is some countries adopting it, some not. I don't want
to forecast what the United States Congress is going to do. That is a folly I'm not going to fall into at this point. But I think there is a possibility that over time, you can get a more systemic and consistent approach to this across states and countries.
Mahesh (20:50)
So as we live today in this inconsistent regulatory landscape, do you think it is in any way materially affecting the evolution of AI applications?
Patrick (21:01)
Yes, I think it is. As I said, we can't hope that the regulation will keep up with the technology. It simply won't. But what we're hoping is that it's not going to interfere with innovation. And I think that's what everyone really hopes for out there. As you're doing this, there's a cost-benefit analysis: everyone wants to leverage these tools for their effectiveness. They want to get those
improved efficiencies, they want to cut costs. But everyone's also equally concerned about privacy. You've got credit card information, health records, state secrets or trade secrets, depending on what organization you work at. And you really need a capability that protects all that while giving you the efficiency gains. So balancing those desires with those needs is super important, and it's kind of a traditional cost-benefit analysis here. Everyone is weighing that,
and the rules impact people in different states differently. But overall, I think you see that tension there as people are trying to strike that right balance.
Mahesh (22:09)
You know, the other area that at least perhaps my last question on the law and regulation side here is intellectual property law seems to also be greatly affected now with AI playing a much more active role in content creation, in discovery, in writing code. How do you see that space evolving in the coming years?
Patrick (22:33)
Another great question, and it's going to depend on some of the pending lawsuits right now, some of these copyright lawsuits. There's a concept called the fair use doctrine, which means you can essentially go out there and, for limited uses, tap into copyrighted and protected information that's out there. And that's what these LLMs have gone out and done. They've trained on this publicly available information.
The question is, does their reliance, their training, fit within this fair use doctrine, or have they gone beyond it? They've certainly gone beyond what was probably envisioned at the time, but most believe that they've not gone beyond the way the fair use doctrine is currently interpreted by the law. So there are several pending lawsuits trying to decide how far to take the fair use doctrine in the context of generative AI and these large language models that have trained on all this publicly available data.
I don't know how it's going to turn out, but it's really going to, I think, drive a lot in the future. But for the most part, it has not slowed down development in most industries. I think Hollywood is the exception because they've kind of come to an agreement with the unions out there that they're going to not use generative AI for certain tasks. But that was kind of an agreed upon legal solution and not a court forced one yet. So we're going to see what the courts come down and do, and that's going to drive the future.
Mahesh (23:59)
When I think about the move from on-premise solutions to the cloud in the public safety and government world, at that point in time the movement to the cloud implied that perhaps you had less control over data, perhaps there were more cybersecurity risks that would come into play, et cetera. Standards and compliance frameworks helped alleviate a lot of those concerns and created some level of uniformity in terms of
what applications should conform to, and what would give confidence to a user of the solution, a procurer of the solution, that this is a solution they could confidently adopt without worrying that an on-prem solution would likely always be the better choice than a cloud-hosted solution. Do you think, and this is perhaps a complicated and long question, do you think that there's a similar thought process for AI? You talked about standards, you talked about the way to
create a baseline for human performance and understand when AI has exceeded that baseline performance. Do you think that there's a need for a FedRAMP-like standard for AI applied to mission-critical applications?
Patrick (25:13)
The short answer is yes, I absolutely do. The longer answer is that we are starting to see this: NIST has generated, as you mentioned, a cybersecurity risk management framework. They've also generated an AI risk management framework. It's a little complex, but it can be adapted to any company, any industry. And although it's complex, the good news, and this is kind of ironic, I think, is that there are generative AI tools
that can help automate a lot of the difficult steps in ensuring compliance with it. I'm on the board of a company called AI Comply that does this. There are several other companies that do it. But it's actually helpful that we have those standards to achieve cybersecurity and AI safety, and also very useful that we can leverage AI to help achieve those standards. I think it's a very promising future there.
Mahesh (26:09)
Besides potentially being able to jailbreak some of the large language models out there in ways that were not intended, what do you see as the key cyber risks associated with generative AI?
Patrick (26:23)
I would say, at least in the corporate setting now, I see three cyber trends. The first is growing cyber vulnerability. Number two is that AI produces solutions to those vulnerabilities. Only AI can keep up with the systematic assessments, creating patches and deploying those patches fast enough to keep up with the AI threats that are out there. So that's, again, a little bit of irony here, that you're
using AI to counter the impact of AI on cybersecurity. And then the third trend is the growing importance of cyber insurance for corporations. Across the government, they're all self-insured, but corporations are realizing the importance of getting cyber insurance, not to counter the threats, but to help them offset the risks associated with those threats.
Mahesh (27:16)
I'm a big fan of open source. I think open source has contributed to incredible software growth over the past few decades. Even I've benefited a lot from open source in my own career. As I think about open source LLMs: besides the many closed foundation models that are leading the leaderboard in terms of performance benchmarks for various LLM tasks,
there are quite a few out there that perform really well and are also open source, and they're leading to lots of derivative language models that are perhaps specialized, fine-tuned, et cetera. So when I think about the model garden, so to speak, of language models out there that are indeed open source, and the supply chain or the factory that has created an open source model, which takes in
raw material data of various sorts as well: how can we think about supply chain integrity in the creation of that open source model? Do you think that there's a risk there, especially for the type of applications you may want to apply these models to? And what are all the cyber risks that perhaps come with adopting those models?
Patrick (28:33)
Yeah, I'll flip my answer and start with the benefits. The ability to democratize the availability of these really powerful AI tools is incredible. Everyone can now use these tools to solve problems, either at their company or in their industry. And it's really incredible that, for free, you can grab these, adapt them to your needs, and have this powerful tool. That's the benefit.
The key risk, from my perspective, is data security. I'm probably one of three people in the world who have actually read several of the licensing agreements. That's a gross exaggeration, but most people don't read the licensing agreement. What I will caution you about, though, is that that licensing agreement often gives the initial creator of the LLM some sort of ownership or licensing right to use both the product that you create
as well as the data that you generate and enter into the system. Sometimes it's limited to them just using that data to retrain their model, but that's usually the lower end; other uses are sometimes available. If you build a system without understanding that that risk to your data is out there, then you are really opening yourself up to a devastating impact on your business. You have to
go into this with your eyes wide open.
Mahesh (30:01)
It's almost like a GPL-style issue with building derivative models and intellectual property ownership, it seems. Is that right?
Patrick (30:11)
That's right. It's not just the ownership, but sometimes the ability just to use the data. Ownership is great, but the license to use the data can be just as devastating when used improperly.
Mahesh (30:27)
Super interesting. Maybe going from regulation and law to really applying these agents in mission-critical situations: there's a lot of potential there and lots of interesting things, and agentic AI is something that is catching on. But besides agentic AI, what other exciting AI capabilities are you tracking and seeing out there?
Patrick (30:51)
I would say two. The first is the more obvious one, generative AI. We've talked a lot about that. I would say it's being used in every industry. As a lawyer, I have used these new AI legal assistants, generative AI legal assistants to see what I can get out of them, to become familiar with them. I have been so impressed with how well they do it and how quickly they do it. Last week, I was at a medical conference and I don't have any medical background.
but the discussions around the potential for these powerful AI-driven medical tools are really impressive. And just think about what they can do, both in our most advanced hospitals, but also in places in the world where they don't currently have access to good healthcare. Just expanding that access through telehealth or remote assessments is pretty impressive, and I think it has
the potential to do a lot of good. So that's number one, as we continue to expand our use of generative AI. The other area, kind of the longer-term use that I think has a lot of potential, is humanoid robots, these things that can just step in and handle tasks that, again, you and I don't want to do and shouldn't have to do. It's really powerful. And we see these humanoids that are starting to become commonplace
and affordable to regular human beings, and not just to the billionaires that are out there. I think this is an area where we're going to see a lot of change, especially as the world population shrinks and we have a hard time filling our workforce with capable people. This, I think, is a great way to put these robots into roles that are kind of at the lower end of the spectrum that humans don't want to do,
and free us humans up to do more rewarding, satisfying work.
Mahesh (32:49)
Maybe that's a good lead into what type of professions and what type of roles do you see changing and perhaps to generalize it a bit further. How do you see human work or human led work transforming in the coming years?
Patrick (33:05)
I think for the first question, I would say nearly every field, every industry out there, generative AI can help do that industry's work better, smoother, faster, easier, and cheaper. There are some exceptions. There are some things that we don't want to transition over. But as a general rule, every industry can benefit from AI. As I said, I don't think we should rely on it
or implement it until we are confident in its ability to perform those tasks. That tells me we're going to start with simpler tasks. I'm here in San Francisco, where we have driverless taxis now, these Waymo cars that are really fascinating. I'm a complete believer in them now, but I will tell you, the first ride I took in one, I was very nervous and very uncomfortable. Being driven around
and seeing that the wheel is turning, but no one is up in that front seat turning it, was very uncomfortable. But I have come to really appreciate the capabilities that the car has. It drives better than the human drivers that I'm used to driving with. It stops at stop signs, uses its turn signal. It's very smooth. It's not perfect, of course, but it is far closer to perfect than the human drivers out there. And I think that's an example. Think about your daily commute.
Think of how productive you could be if you weren't having to worry about driving. That's the sort of task that I think we can start looking for AI to pick up from us. Cleaning, cooking, other tasks that are kind of monotonous and really aren't that fulfilling are roles that I think AI, as a starting point, will be taking over.
Mahesh (34:55)
How important do you think human-AI interaction design is in deploying applications like a self-driving car, where you need a level of comfort as a passenger to be able to ride in one of those vehicles?
Patrick (35:11)
Great point. I think you can never take the humanity out of this, because we are all people and we're going to be evaluating these things. We're going to be very skeptical before we place our lives, or even our livelihoods, in the hands of these systems. So getting comfortable with them is going to require this interaction, and that interaction is going to require systems that are designed to work well with humans. It kind of goes back to the early days of computing,
where you have a natural interface, a mouse, easy systems that are intuitive. And if you don't have that, then people won't be able to rely on these systems and gain the confidence in them to take it to the next level.
Mahesh (35:55)
Maybe even extending that a bit further, today we think of the human-AI team, and in many cases we refer to AI in the singular, as agentic AI, basically AI that has some level of agency to autonomously make certain decisions and collaborate. As you think of teams of these agents, along with a human, or perhaps even in some cases without a human,
how do you think that introduces new challenges or risks in a mission-critical setting that we should be concerned about?
Patrick (36:35)
For me, I think of groupthink and how that can be problematic with humans. I think there is a risk of agentic AI groupthink, where the agents start building on and relying on each other, and maybe feeding each other false data. Because of the speed at which they can calculate and operate, there is an opportunity for things to go wickedly awry very quickly. So I think you have to build in safeguards that account for that sort of groupthink.
Mahesh (37:06)
I think of social media as a pathway to convey information, both true information and perhaps even false information. And the rate at which information tends to propagate in our connected world today is incredibly fast. So I think about, hey, what is the risk of a connected set of agents that can function at the speed of computation? How do you stop it from doing something that is not
what you intended it to do, and perhaps something that could actually lead to some harm? That seems like a very complicated question, but an interesting one.
Patrick (37:42)
I agree. We saw in the early days of AI some AI algorithmic trading tools for the stock market on Wall Street. In some cases it led to these AI systems bidding each other up, creating rapid escalation that caused miniature stock market crashes. So when I think of that potential in other areas, you wouldn't want something like that to happen in your doctor's office. And you certainly wouldn't want something like that happening in a national security setting, where it leads to a
flash war. You really need to have safeguards in place to prevent this. These are super powerful tools, but they can also go awry, either in the hands of people who don't have the right intentions or, even with the best of intentions, when people don't fully anticipate the risk.
Mahesh (38:30)
What industries do you think today are most effectively and efficiently applying AI?
Patrick (38:38)
I think the banking industry is one, and advertising and marketing is another. We see the power of suggestions for everything from Netflix to Amazon purchases, and they're really leveraging it there. The banking industry has a lot of regulation, and therefore also a lot of potential to use this to ease some of the burdensome oversight that's required to look for fraudulent transactions and
to monitor systems. That's why you get so many of these fraudulent card alerts now. It's not some human that's watching and looking for this; it's these systems that are being used to try to protect your money, and obviously also to protect the bank's money from risk and liability. I think wholesale uses like that are very common in both the financial and the marketing and advertising worlds.
Mahesh (39:31)
Any examples for industries that have really tried hard not to adopt AI?
Patrick (39:37)
Yeah, I think the two industries that seem to be most conservative and resistant, as I already mentioned, are the law and the medical professions. And probably for good reasons: there's so much at stake in both those areas, but also such a well-developed professional approach, that these AI tools can be game changers, and a lot of people are hesitant. For example, in the law, nearly every law firm
has its business model based on billable attorney hours. And if you can do something with an AI assistant that dramatically cuts down the time it takes to accomplish a task, well, then your model goes out the window. So I think the legal profession is looking to change from a billable hour model to a flat rate model, and have the clients and the law firms split the
cost savings that come from using AI. The medical profession is similar. You're seeing some areas where these medical tools can be used really well and outperform doctors, such as reading charts and looking for cancerous cells. They've been proven to outperform humans in many cases. For other tasks, we just don't have that same comfort level. We're not there yet. So in areas like that, where there's so much at stake,
I think we are seeing some hesitancy, and that's probably well placed.
Mahesh (41:09)
It's quite interesting that those types of professions have historically required a lot of training, a lot of assessment of skills, and really a talent, in some ways, for absorbing a lot of information. They also have a fair number of qualification benchmarks, whether it's the bar exam or, for each specialty within medicine, a unique test you need to take to qualify, then intern, et cetera.
You would think that in areas with such clear benchmarks, we could very objectively measure whether AI can actually be helpful, to your earlier point of establishing a baseline of human performance and asking whether or not AI can exceed it.
Patrick (41:54)
Yeah, I think you're right. We've seen how ChatGPT, once it got to the point where it could pass the bar exam, really shocked a lot of people. And I think there's tremendous potential here. We just want to use it, but use it responsibly.
Mahesh (42:09)
Are there any final thoughts that you'd like to share?
Patrick (42:12)
Yeah, I would like to leave a message for anyone who's considering leveraging AI or other transformative technologies like it. My three tips would be: number one, be bold. You simply can't be competitive in today's world without leveraging AI and other technologies like this. Number two, be responsible as you do it. You have to understand the risks and then take active steps to mitigate them. You don't go into this blind.
You don't just sprinkle AI pixie dust on everything and expect it to be perfect. You have to understand and manage those risks. Third and finally, I would say be flexible. Some things are going to work out well. Great, keep them up. Other things are not going to work out. Just be ready to pivot, adapt, adjust, and continue fine-tuning your plan. I think if you do those things, be bold, responsible, and flexible, you're going to have a successful approach here.
Mahesh (43:09)
General Huston, this was a pleasure talking to you and thank you for being the guest on the first episode of this podcast.
Patrick (43:17)
Thank you so much for having me, Mahesh. It's great to be back with you.
Mahesh (43:21)
In this episode, we discussed human-machine teaming, emphasizing the potential of combining human and AI strengths. This is a complex topic at the intersection of human psychology and algorithmic capabilities that we'll continue to dig into in future episodes. We touched on the challenges of the regulatory landscape, the impact of AI on intellectual property law, debates around fair use, and the need for standards like NIST's AI Risk Management Framework. We also covered cyber risks,
the unique risks of open source LLMs regarding data security and licensing, how AI is transforming work by taking on monotonous tasks, and the potential risks of teams of AI agents escalating errors. Finally, General Huston offered three tips for leveraging AI: be bold, be responsible, and be flexible. We'll be digging into specific application areas in public safety and enterprise security to explore the topics discussed in this episode
more concretely. I hope you found this conversation useful and will continue to join me on this geek's journey in solving for safer. Thank you.