The Macro AI Podcast

AI and the Enterprise Threat Landscape - with Ed Dunnahoe of GuidePoint Security

The AI Guides - Gary Sloper & Scott Bryan | Season 1, Episode 19



The Macro AI Podcast: Episode 19 - AI and the Enterprise Threat Landscape 

Join hosts Gary Sloper and Scott Bryan in this engaging episode of The Macro AI Podcast as they explore how artificial intelligence is transforming the cybersecurity landscape for enterprises. Featuring special guest Ed Dunnahoe, VP of Innovation at GuidePoint Security—a trusted cybersecurity provider for over 40% of Fortune 500 companies—this episode dives into the evolving challenges and solutions in enterprise security driven by advancements in AI and machine learning.

Kicking off the discussion, Ed shares his background and journey to his current role at GuidePoint Security, offering a glimpse into his expertise at the forefront of cybersecurity innovation. The conversation then shifts to key areas impacted by AI in the enterprise threat landscape.

The hosts and Ed first tackle Governance, Risk, and Compliance (GRC), exploring how AI introduces new complexities and opportunities for ensuring regulatory adherence and managing risks in large organizations. Next, they discuss Application Security (AppSec), examining how AI tools have reshaped the practices of identifying and mitigating vulnerabilities in software applications, fundamentally altering the AppSec landscape. 

The episode also covers Incident Response, where Ed highlights the challenges IT teams face in responding to security incidents in an AI-driven world, drawing from real-world conversations with industry leaders. The discussion then turns to Identity Management, addressing the emerging challenges of securing identities as AI technologies introduce new vulnerabilities and authentication complexities. 

Finally, the episode delves into AI TRiSM tools (Trust, Risk, and Security Management), which are designed to ensure AI systems remain trustworthy, secure, and compliant with ethical and regulatory standards. Ed shares insights into the latest TRiSM tools and their role in helping enterprises navigate the risks associated with AI adoption. 

Wrapping up, Gary and Scott thank Ed for his invaluable insights and invite listeners to share the episode and connect on LinkedIn for more AI-driven discussions. Tune in to this episode for a high-level overview of how AI is reshaping enterprise cybersecurity and what businesses are doing to stay ahead of the evolving threat landscape. 



About your AI Guides

Gary Sloper

https://www.linkedin.com/in/gsloper/


Scott Bryan

https://www.linkedin.com/in/scottjbryan/

 

Macro AI Website

https://www.macroaipodcast.com/

Macro AI LinkedIn Page

https://www.linkedin.com/company/macro-ai-podcast/


Gary's Free AI Readiness Assessment:

https://macronetservices.com/events/the-comprehensive-guide-to-ai-readiness


Scott's Content & Blog

https://www.macronomics.ai/blog





00:00
Welcome to the Macro AI Podcast, where your expert guides Gary Sloper and Scott Bryan navigate the ever-evolving world of artificial intelligence. Step into the future with us as we uncover how AI is revolutionizing the global business landscape, from nimble startups to Fortune 500 giants. Whether you're a seasoned executive, an ambitious entrepreneur,

00:27
or simply eager to harness AI's potential, we've got you covered. Expect actionable insights, conversations with industry trailblazers and service providers, and proven strategies to keep you ahead in a world being shaped rapidly by innovation. Gary and Scott are here to decode the complexities of AI and to bring forward ideas that can transform cutting-edge technology into real-world business success.

00:57
So join us, let's explore, learn, and lead together. Welcome to the Macro AI Podcast, where we unpack AI innovations driving global business forward. I'm Gary Sloper. And I'm Scott Bryan. And today we're going to take an inside look at enterprise security, how recent advances in machine learning and AI have quickly reshaped the threat landscape, and how businesses are responding to these changes. Yeah, that's right, Scott.

01:24
We're actually joined today by our friend Ed Dunnahoe, who is the VP of Innovation at GuidePoint Security. For the listeners out there, GuidePoint provides cybersecurity solutions and services to over 40% of the Fortune 500 companies. So we're really grateful to have Ed here today, and his perspective and insights about how large enterprises are handling security for artificial intelligence.

01:59
Ed, thanks for joining us from the Pelican State, also known as Louisiana. We get a lot of questions from listeners looking to understand how AI is changing the landscape of cybersecurity, and you're right there in the middle of it. So I was thinking maybe you could just kick it off a little bit about your background and how you evolved into your current role at GuidePoint. Yeah, sure, guys. Thanks for having me. So

02:27
I started off... my first cybersecurity job was as a consultant doing penetration tests and risk assessments, and I spent some time doing that, living out of a suitcase, and then spent a few years beyond that. I noticed as a consultant that I had some weak spots on the infrastructure side, so I spent some time on the blue team after that, defending what I had previously been attacking, to kind of round out my skill set, and then came over to GuidePoint.

02:55
It'll be 10 years ago in a few weeks, actually. So I've been around for a little while. And most of that time was spent leading our network penetration testing team, doing your typical red team type assessments, social engineering, and penetration testing. And then right around, I guess,

03:16
ChatGPT was really when I got super interested in AI and kind of what the impact was going to be on really everything. But obviously cybersecurity was a big focus there. And then, towards the beginning of last year, we didn't have a lot of people internally that were focused on AI,

03:44
and we were getting a lot of questions from our customers about AI and how they could adopt it safely and securely. And again, we didn't have anyone focused on answering those questions. I was interested in it, and there were a few other people internally that were interested in it, so we kind of took it upon ourselves to start figuring that out. And a lot of the AI tools were...

04:09
going to be conducive to some of the process work that I was going to be doing. And then towards the end of last year, it became very obvious that AI was becoming so ubiquitous that it couldn't be someone's side project. It had to be someone's job. I was kind of already in that position with that focus, so we kind of stripped off the other responsibilities and made my focus 100% on AI: figuring out how GuidePoint's going to use it, and kind of coordinating internally

04:38
to also help our customers figure out how to use it for the services that they're asking us for as it relates to AI. So that's kind of how I ended up where I am. Yeah. No, I mean, I think that's great, especially how you've had to control your learning curve as an organization at the same time. And I think that's indicative of what a lot of companies are not even thinking about today.

05:08
To your point, it can't just be a side project. It's coming, it's here, it's progressing, it's moving forward. And when I look at what you and GuidePoint and your team do on a day-to-day basis, I mean, you have visibility into the entire landscape of security threats, challenges, and how IT teams evolve to address those challenges. In the new landscape of artificial intelligence, I would imagine GRC, so you

05:35
know, governance, risk, and compliance, is really a hot topic. I mean, are you seeing anything there, especially given where you came from and now to where many organizations do have a focus in GRC? Just curious what you're seeing there from a landscape perspective. Yeah, it's obviously in high demand right now, because it's arguably one of the first things that you should do as part of establishing an AI program.

06:04
There are some customers that have come to us that have kind of put the cart before the horse: somebody shoved a chatbot into the website without telling anybody, and now they're trying to put it back in the box a little bit. But yeah, there are a lot of people that are starting to look at it. Okay, this isn't going away. We need to establish the rules and the policies that we're going to follow, what we're allowed to do, what we're not allowed to do, how we're protecting the data, and so on.

06:31
Then building off of that, because that's kind of how things are going to happen. Again, there are some facets of it where you can go backwards, but there are also other concerns about what industry you're in and whether or not there are any sort of compliance obligations and that sort of thing. So yeah, there are a lot of questions about it. I mean, there are nation-state governments that haven't figured it out yet. So it's all brand new. Everybody's trying to figure it out all at one time.

07:00
It's really unique. In typical cybersecurity, a lot of stuff has already been figured out, so to speak, so you can kind of look at what other people are doing and adapt it to fit whatever your particular circumstances are. But you don't have that in AI. Everybody's just figuring it out for the first time. So yeah, it's tough. I mean, I've always said that

07:26
cybersecurity is a bit, I mean, it uses the adage of drinking from the fire hose. But AI is all the fire hoses. It's just so much information. There's infinite content. Everybody's talking about AI now. It's too much to consume. Ironically, a lot of people have created AI tools to help consume more AI content, because it's just too much to sit through and listen to and...

07:54
and read and what have you. So staying up to date is tough. But yeah, I mean, the GRC topic provides companies with a baseline, but there are a lot of really critical questions that need to be answered. And it's not just one person's job to answer that. I mean, the business has to do it, just like any other typical GRC function. You're going to have to do it all over again for AI, because it's different.

08:22
Yeah, you guys work with a lot of large enterprises. Are you getting a lot of requests for conversations, kind of like with any new technology that's new to the tech curve? They probably just call you up and ask your opinion, right? Pick your brain. Yeah, actually, that's a bit of a challenge for us right now. Because we've put together a working group internally

08:50
to be kind of a landing zone for those types of questions, where at least whenever our sales team gets those questions from their customers, they know, hey, you can send an email to this distribution list, and if somebody knows the answer, they'll help you out. And that was kind of a, let's slam this together real quick so that people have a place to go. It's probably not the best solution in the world, but that's kind of where we are right now. But that's usually the first question that comes in,

09:18
which is tough, because it's just like, hey, I've got a customer and they want to talk about AI. My first question is, okay, what about AI? That could mean a lot of different stuff. And we've got people internally that are focused on the AppSec aspects of AI. We've got people that are focused on the GRC aspects of AI. We've got people that are really familiar with different AI tools that are coming out.

09:46
They may not have one foot in any of those other areas. So those questions are really hard to answer, because it's just, okay, you've got to give me some direction so that I don't have 40 people on a phone call to figure out which two are the right ones to talk about it. Because it's kind of like saying, I want you guys to do an assessment for us. It's like, okay, well, what kind? You want a pen test, you want a cloud assessment,

10:17
a purple team, a tabletop incident response exercise. There's a lot of stuff back there. Well, I think to your point, you mentioned it as many fire hoses, right? I'm going to have to remember that one, because I think you're spot on. It encompasses so much. And when somebody says, I want to talk about AI, it's: which fire hose do I need to bring? Yeah. And you guys have really mature practices across the board

10:45
when it comes to cybersecurity. So like you said, you have to kind of identify which area you want to have the conversation about, and then tailor that conversation to that particular practice. You know, let's just talk about application security, for example. So for our listeners, AppSec refers to the practices and the processes of identifying, mitigating, and preventing security vulnerabilities in software applications, obviously.

11:14
And, you know, Ed, I'd imagine that with the new world of AI, some of these tools that are out there have now altered the landscape of AppSec. Is that what's happening, or what are you seeing in regards to AppSec? Yeah, I mean, the AppSec space is really, really heavy on AI, because a lot of people are building AI features into their applications.

11:39
It's a natural fit for it, whether it's a web application with a customer service chatbot on it or a standalone application. It may not be an AI application, but you have your note-taking applications that are building in AI features to help you summarize notes, and Slack's got AI, and all the different collaboration platforms are building in AI features.

12:05
Yeah, I mean, everybody's trying to build AI into their tools to make their tools more effective and a better user experience. So it's certainly a big deal and a big focus. But obviously, as you bake those features into an application, you're introducing a whole new set of risks that are unique to LLMs. And then, going back to the compliance concerns, you've got

12:32
transparency concerns, and discrimination and bias and toxicity, and all these things to worry about, and transparency around what you're using and where the data is going. So it can get real complicated real fast. But yeah, I mean, there are a lot of AI-powered tools coming out that are AppSec focused.

12:57
A lot of vulnerability scanners are AI-powered, for code analysis, for example. A lot of models are specifically designed to help write code; Claude has been one of the top ones in terms of writing code for users for a long time. So obviously, some of those models, if they're good at writing code, arguably could be good at finding vulnerabilities in code.
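To make the idea concrete, here is a minimal Python sketch of the pattern Ed describes: a conventional pattern-based scanner supplemented by an LLM review pass. The pattern rules and the prompt builder are illustrative assumptions, not any particular product, and the actual model call is left out since it depends on your provider.

```python
import re

# Naive "conventional scanner": flags a few well-known dangerous
# Python patterns. Real SAST tools are far more sophisticated.
RISKY_PATTERNS = {
    r"\beval\(": "use of eval()",
    r"\bos\.system\(": "shell command execution",
    r"pickle\.loads\(": "unsafe deserialization",
}

def conventional_scan(source: str) -> list[str]:
    """Return pattern-based findings, one per matched rule."""
    return [desc for pat, desc in RISKY_PATTERNS.items()
            if re.search(pat, source)]

def build_review_prompt(source: str) -> str:
    """Prompt an LLM pass could use to supplement the pattern scan."""
    return ("You are a security code reviewer. List any vulnerabilities "
            "in the following code, with line references and severity:\n\n"
            + source)

snippet = "import os\nos.system('rm -rf ' + user_input)\n"
print(conventional_scan(snippet))  # the pattern scanner catches the shell call
# An LLM pass would then be sent build_review_prompt(snippet); that
# network call is omitted here because it depends on the model provider.
```

The point is not that the regex pass is good — it isn't — but that an LLM review can sit alongside whatever conventional scanning already runs, each covering the other's blind spots.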

13:26
I think a lot of people are using AI specifically for static code analysis. AI might be better than any static code analysis tool out there; you could just use an LLM to do that for you instead of purchasing that tool. I don't know for sure, because it's not my circus, but I could see that certainly becoming a component of the AppSec program: just having an LLM look over your code and determine whether or not there are any vulnerabilities there, to have it supplement

13:55
some of your conventional scanning tools and what have you. Yeah, that's interesting. And so if we think about tools there on the AppSec side, I also think, when I think of GuidePoint and the industry as well, of where you support a lot on incident response, and that's a very complicated area to begin with. What do you see that kind of comes to mind

14:23
around some of the conversations about incident response as it relates to AI, and maybe where you see the trend going, or maybe you don't see a trend just yet? I'm just curious about your thoughts on incident response. Yeah, I don't know if there hasn't been as much emphasis on incident response as it relates to AI yet, or if it's just kind of flying under the radar, or maybe there just hasn't been

14:50
enough compromise of AI systems yet to warrant a lot of conversation in that space. I don't know. But if I had to pick one, the thing that I've seen the most out of some of the tools that are coming out that are supposed to help with incident response using AI is, I mean, everybody's talking about agentic AI. I think having an army of agents is going to be kind of the defense in

15:19
cybersecurity, whether it's incident response or just SOC analysis, period. I mean, in both of those cases, you're having to digest a wall of diagnostic data, whether it's network log data from a Splunk or something like that, or

15:43
log files from the operating system or the registry. You're looking at these huge data sets and trying to spot patterns in there, and that's AI's superpower, right: digesting a lot of information and identifying patterns in it. So I think the agentic piece is probably going to take off quite a bit, because I think a lot of defensive security problems have been

16:11
attributed to the lack of bandwidth. You just don't have enough people to look at all the information. Everybody talks about alert fatigue. Agentic AI can really help with that, in terms of not only parsing that data and enriching your existing alerts, but also, if there are certain alerts that you always respond to the same way, or in the same few ways, you can just have AI start to handle that for you and then free up time for

16:39
the more in-depth human analysis that's required to really look over something and spot really goofy things. So I think there are some tools out there that are kind of leveraging this. I've seen one at a conference where they had a demo set up: a phishing email had come in, and their demo had an agent that kicked off whenever the email came in. There was one that started analyzing the domain, and one that

17:09
pulled the attachment off of the email. There was one that analyzed the contents. There was one that analyzed the email header and all the DNS records for the email server. You've got all these different agents doing all this analysis all at one time, and it's pulling all that information into a single console. You can watch it happen as it's analyzing this phishing email. And then, depending on how many hooks you give it into your environment, it can go and look to see, okay, who got this email? And

17:37
you could potentially take it even further. It's like, okay, whenever you figure out whose inbox it landed in, tell me who opened it, and then rip it out of there and block connectivity to whatever it's calling back to. You could do a lot of stuff with it, and an AI agent is going to do that way faster than a human can. Absolutely. So yeah, I mean, you're going to need a smarter and bigger army of agents acting on your behalf. Yeah. And I mean, they're going to do it instantaneously, which, I mean,

18:07
that's going to be a key part of it. Because attackers are going to be using AI too, so they're going to be obtaining the same level of enhancement. I mean, they're going to get a foothold, and they're going to have an AI agent that kicks off and starts doing reconnaissance, mapping the internal network, figuring out where vulnerabilities are, maybe kicking out exploits and getting shells back and

18:34
landing on the box and doing all this. It's going to be AI versus AI in a lot of ways, and a human is just not going to be able to keep up with that. I mean, I think that's kind of where people are going to have to get to. Yeah. Well, and I think it's funny... oh, go ahead, Scott. I was just going to say, good insights. Yeah, go ahead, Gary. No, I was going to say, you bring up a good point, though, because I think a lot of folks in the industry associate agentic AI more with the contact center. So voice,

19:04
inbound, omnichannel presence for customer service or ordering, those types of things. But really, where you're seeing that move towards is threat protection and mitigation in real time, especially in this case where you gave the phishing email example. So I think that's really interesting.
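The multi-agent phishing triage Ed describes can be sketched roughly like this. The three checks below are toy heuristics standing in for real agents (threat-intel lookups, attachment sandboxing, SPF/DKIM verification), and all names and sample values are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy stand-ins for the specialist "agents" in the demo Ed saw. Each
# takes the parsed email and returns (check_name, suspicious?).
def check_domain(email):
    # A real agent would query domain reputation / threat-intel feeds.
    return ("domain", email["from"].split("@")[-1] in {"paypa1-secure.example"})

def check_attachment(email):
    # A real agent would detonate the file in a sandbox.
    return ("attachment", any(n.endswith((".exe", ".js")) for n in email["attachments"]))

def check_headers(email):
    # A real agent would validate SPF/DKIM/DMARC against DNS records.
    return ("headers", email["spf"] != "pass")

def triage(email):
    """Run all checks concurrently and collect a single verdict."""
    agents = [check_domain, check_attachment, check_headers]
    with ThreadPoolExecutor() as pool:
        results = dict(pool.map(lambda agent: agent(email), agents))
    return {"checks": results, "suspicious": any(results.values())}

email = {"from": "billing@paypa1-secure.example",
         "attachments": ["invoice.js"], "spf": "fail"}
print(triage(email)["suspicious"])  # True: all three toy checks trip
```

The fan-out/fan-in shape is the point: independent analyses run in parallel and feed one console, which is why the agentic approach is so much faster than a human working the same alert serially.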

19:26
You know, I just have this weird visual, I'm probably dating myself, but the movie WarGames back in the '80s, where all of a sudden everything just started attacking each other really quickly. And that's what it kind of reminds me of, to your point, where the threat will be using AI and the mitigation will be using AI. So it's almost like, who has the bigger AI at that point, to see who wins. So it'll be interesting.

19:54
Yeah. And when you mentioned AI versus AI, that made me think of another top concern: the field of identity management. It must be getting really complex. I mean, what are you seeing for challenges around identity management? Yeah. I mean, I was just reading last night, Sam Altman, CEO of OpenAI, I guess has this new device that's like a

20:24
little orb that you can get, and it's meant to scan people's eyeballs to verify that they're human. It's like some sci-fi stuff, but it's happening right now. But I mean, that's where we're at, right? You've got all these deepfake things going around. I know everybody likes to lean on the story of that Chinese company last year,

20:50
where this guy had spun up a bunch of deepfake avatars in a Zoom meeting, and one of them was the CFO of the company, and got him to transfer millions of dollars somewhere. Just convinced him, because of fake avatars in a Zoom meeting, to transfer all this money. I mean, detecting stuff like that... there are a lot of services now that need maybe a two- or three-second clip of your voice.

21:19
So yeah, you've got executives that probably have a YouTube video out there of them giving some sort of message, either to the company or to their customers or something like that. If you have that video, you've got enough of a sample to capture their physical mannerisms, their voice mannerisms, their tone and their cadence of how they talk, and perhaps the different vocabulary that they use.

21:46
You've got enough to mimic that and create a deepfake avatar of that person that you can then use. Again, you've got the typical scam of, hey, I need you to go buy $500 worth of Apple gift cards and send me all the codes. You can start working on that kind of stuff, and it takes almost no effort now. It's really easy to do. I mean, you can buy a subscription. It's not like you have to stand up a bunch of infrastructure.

22:14
You don't need any skill to do those types of attacks anymore, so the barrier to entry is very low. And there are a lot of tools coming out now that can actually do that in real time. You can get on a Zoom meeting and have a conversation, and it just swaps your voice. I saw one over the weekend, I was reading where someone's developed a model that can modify accents for call centers.

22:42
So if you call in to a call center that's perhaps overseas, and you get really frustrated because you can't understand someone's accent, there's AI that can flip somebody's accent from an international accent to an American accent or British accent or what have you, and make it easier to understand and improve the user's experience when calling customer service. I thought that was a really interesting use case,

23:09
because that does frustrate a lot of people. But also, again, you've got this really powerful tool that, in the wrong hands, can be weaponized to deceive people and trick them into doing things that they otherwise wouldn't do. And I saw a statistic from the 2024 Identity Fraud Report that a deepfake attack was perpetrated every five minutes last year.

23:35
And that's not going to get any lower. It's only going to be constant. Yeah, absolutely.

23:44
It's crazy, because at some point it's almost like, do we go back to the old days? Everybody's just going to want to shut everything off, because it'll just be so much noise. And I know a lot of our listeners have pinged us about wanting to hear more around TRiSM tools, so trust, risk, and security management, for any of our listeners that are new to that topic.

24:10
They're really designed to ensure artificial intelligence systems are trustworthy, secure, and compliant with ethical and regulatory standards, kind of like we've been talking about here today. I'm sure you're up to your eyeballs in evaluating TRiSM tools. Can you tell us a little bit about these tools and kind of what you're seeing? Yeah, they're pretty interesting. I think they're going to be pretty necessary for a lot of companies, especially in our case, where we have a lot of customers.

24:40
So I've been involved in trying to negotiate a lot of contract language with our customers as we're working through MSAs and what have you, and it's all over the place. And there's a lot of obvious concern over what we're doing with that data, because as a cybersecurity company, we get our hands on some really sensitive data, so we want to be very careful with what we do with it and who it goes to. And there's a lot of emphasis, again, in the industry, we talked about this a little earlier,

25:10
but around transparency and ethics, and making sure that the models are free of bias and toxicity and discrimination and all these different things. These tools can really help with some of that. But also, if a company is looking to align with either the NIST AI Risk Management Framework or ISO 42001, which is the new ISO certification for AI,

25:39
those standards all include some sort of requirement to have a lot of the components that we talked about today. So from a governance and risk perspective, you've got to have an AI policy and executive sponsorship and things like that. On the more tactical side, you've got to maintain an inventory of the different uses of AI: the different ways AI is being used within the organization, what type of data it's touching,

26:06
things like that. So there's a lot of documentation that you have to keep up with, and some of these TRiSM tools can help with some of those compliance requirements as well, whether it's keeping an inventory of what's being used and how it's being used, providing that transparency. And then some of them have some of the compliance pieces built out, where they actually have the frameworks for NIST and ISO 42001

26:35
built into the platform, where you can kind of track where you are and do a self-assessment within the platform itself. But some of them also offer model scanning, where you can scan the models to determine whether or not the output is tripping any of those flags as it relates to bias and toxicity and discrimination. But also,

27:02
kind of your typical OWASP Top 10 for LLM risks, like data poisoning and data leakage and prompt injection and so on. So yeah, I mean, I think the tools are going to be really necessary. I don't know how much visibility some of the more conventional infrastructure tools provide into AI-specific stuff.

27:32
But yeah, I mean, the TRiSM tools are going to really help a lot, I think, in giving people a little bit of peace of mind in terms of figuring out what AI is being used for within the environment. Because everybody's talked about the whole shadow IT problem for years; now you've got a shadow AI problem, because you never know what people are throwing into the public version of ChatGPT. I mean, you can go find countless news stories about companies

28:01
where intellectual property turns up in ChatGPT and Claude, because people are just uploading trade secrets into the public models. And that's a huge blind spot. You can't rely on people to self-regulate. So some of these tools can really help close those gaps, kind of give you some of that visibility, and in some cases actually stop it from happening. Yeah. I think I saw, I was watching a financial cable news channel the other day, and it said that

28:30
the market for TRiSM tools will be about $8 billion by 2032 in the domestic US. My first thought was that that's way undervalued. I mean, these things are going to be touching so many areas: explainable AI, model monitoring, ModelOps, AppSec, like we already talked about. There are going to be just a lot of areas that these TRiSM tools are going to touch.

28:57
So I think that $8 billion number is way undervalued.

29:04
Yeah, I mean, I think AI is not going away, so I think it's going to end up being as commonplace as, I don't know if I'd go so far as to say a firewall, but some of those network-based tools that give you some sort of visibility. I know there are some of those tools that offer agents you can install on the endpoint, for AI tools that are actually installed on the laptop and may or may not use a browser.

29:34
So, I mean, there's a lot of room to grow, for sure. Yeah, I completely agree. And to your point earlier, where you're seeing a lot of these companies just throw information out into the public LLMs, I don't necessarily think it's always done out of malice. I think they're trying to learn, and, similar to what you said earlier, somebody wants to talk about AI, but which fire hose are we focused on? And I think that's

30:03
probably the first misstep that a lot of organizations take: they feel that they need to go build something or create something, instead of taking a step back and trying to figure out what use case they're trying to solve. And there are tools, and organizations similar to GuidePoint, who can leverage all of the industry knowledge and relationships that you have to help guide somebody down the right path before they make serious ethical

30:33
mistakes long term. I think that's why these types of tools will only increase, because of where folks are trying to learn around AI, but they're not doing it correctly. Yeah, there's a lot to pick up on. It's easy to get carried away, too, right? I mean, one of the examples I like to give is, AI can...

30:57
My wife lives in spreadsheets, so she knows how to do all kinds of stuff in spreadsheets that I don't know how to do. I know it can be done; I just don't know how to write the extremely long, convoluted formula like she does. But now AI can help me do that. And in spreadsheets in particular, a lot of times you'll get into a situation where, if you're asking it for help, like, hey, I've got this problem, I'm trying to do this and it won't work,

31:22
it'll ask you, can you give me a sample set of the data that you're looking at in the spreadsheet so I can help you figure out what's going on? It's easy to get carried away in that process, right? And just throw some data into ChatGPT or whatever it is you're using, and that can help you finish solving the problem. But if you're not paying attention, the spreadsheet might have sensitive data in it and you just shoved it up in there. So yeah, I mean, it's certainly not always intentional and malicious, like you said.
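One kind of technical control for this scenario can be as simple as screening prompts for obvious secrets before they leave for a public model. Here's a rough sketch under that assumption; the patterns are illustrative and nowhere near exhaustive compared with a real DLP engine.

```python
import re

# Illustrative detectors only; real DLP engines layer many more
# (ML classifiers, document fingerprinting, exact-match dictionaries).
SECRET_PATTERNS = {
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "credit card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def screen_prompt(text: str) -> list[str]:
    """Return the names of any secret types detected in the text."""
    return [name for name, pat in SECRET_PATTERNS.items()
            if pat.search(text)]

def safe_to_send(text: str) -> bool:
    """Gate a prompt before it goes out to a public model."""
    hits = screen_prompt(text)
    if hits:
        print("Blocked: found " + ", ".join(hits))
        return False
    return True

print(safe_to_send("Summarize Q3 revenue by region"))    # True: nothing flagged
print(safe_to_send("Employee 123-45-6789 owes $1,200"))  # blocked: SSN pattern
```

A check like this would sit in a browser extension, proxy, or endpoint agent, which is roughly where the TRiSM-style tools Ed mentions hook in.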

31:50
But I mean, it happens, and it's better to have some of those technical controls in place to help prevent that kind of stuff. Yeah. The good news now, Ed, is that you can be the spreadsheet master. Yeah, exactly. My wife's probably not too happy about that. Yeah, that's a good point there, Scott. Well, Ed, I really want to thank you so much for joining us today and sharing your real-world experience and insights.

32:19
We'd love to have you back on the show in the future to hear what's changed in the industry. And I'd also like to thank all of our listeners for tuning into the Macro AI Podcast. If you like this episode and our show, please share it with your network and bring additional questions to us. We'd love to answer those and look forward to hearing from you. Yeah, I really appreciate it.

32:43
And for the listeners, hit us up on LinkedIn anytime, or at macroaipodcast.com. And we'll catch you next time for some more interesting topics and AI insights. Thank you. Thank you, Ed. Thank you guys for having me.