AI Proving Ground Podcast: Exploring Artificial Intelligence & Enterprise AI with World Wide Technology

Can Legal Teams Keep Up with AI Advancements?

World Wide Technology: Artificial Intelligence Experts Season 1 Episode 57



AI is moving faster than the rules designed to govern it—and legal teams are now on the critical path.

In this episode of the AI Proving Ground Podcast, Olivia Fleming, Chief Legal Officer at Edgewood Management, and Erika Schenk, General Counsel and EVP of Compliance at World Wide Technology, unpack how enterprise AI is reshaping the legal function—from gatekeeper to growth enabler.

They break down why the biggest risk in AI isn’t black-box models or hallucinations—it’s deploying tools no one fully understands. From fragmented data estates and fast-moving regulations to cross-border gray zones and emerging governance models, this conversation gets practical about what legal leaders actually need to know to help AI scale responsibly.

The takeaway is clear: when legal is involved early, organizations move faster—not slower. The enterprises winning with AI are building trust, governance, and accountability into the stack from day one.

If you’re rolling out AI at scale, navigating regulatory uncertainty, or trying to move fast without breaking things, this episode is required listening.

More about this week's guests:

Erika Schenk is General Counsel and EVP of Compliance at World Wide Technology, where she leads global legal strategy and compliance. Since joining WWT in 2014, she has built and scaled the legal organization, overseeing contracts, ESG, EHS, government affairs, and legal support for enterprise, service provider, and public sector teams. A trusted business partner, Erika has guided WWT through international expansion, acquisitions, and portfolio growth. She previously served as senior counsel at Boeing and as a partner at Bryan Cave LLP.

Olivia Fleming joined Edgewood in November 2007 and serves as the Chief Legal Officer. Olivia graduated from Fordham University with a BA and also received a JD from Tulane University Law School. Olivia was promoted to Partner in December 2018.

The AI Proving Ground Podcast leverages the deep AI technical and business expertise from within World Wide Technology's one-of-a-kind AI Proving Ground, which provides unrivaled access to the world's leading AI technologies. This unique lab environment accelerates your ability to learn about, test, train and implement AI solutions. 

Learn more about WWT's AI Proving Ground.

The AI Proving Ground is a composable lab environment that features the latest high-performance infrastructure and reference architectures from the world's leading AI companies, such as NVIDIA, Cisco, Dell, F5, AMD, Intel and others.

Developed within our Advanced Technology Center (ATC), this one-of-a-kind lab environment empowers IT teams to evaluate and test AI infrastructure, software and solutions for efficacy, scalability and flexibility — all under one roof. The AI Proving Ground provides visibility into data flows across the entire development pipeline, enabling more informed decision-making while safeguarding production environments. 

Setting The Stakes For AI Governance

SPEAKER_00

From World Wide Technology, this is the AI Proving Ground Podcast. There's a familiar pattern that happens inside organizations during moments of massive technological change. The business sees opportunity, IT feels pressure, and somewhere, often in a conference room no one has time to be in, the legal team begins quietly sketching the guardrails that will determine whether innovation actually takes root or stalls before it starts. And in the age of AI, those guardrails have never mattered more. On today's show, we're talking with two legal leaders who are living this reality every day. Olivia Fleming is the Chief Legal Officer at Edgewood Management, where she's helping steer an AI-first transformation inside a firm historically built on precision, confidentiality, and trust. And Erika Schenk is General Counsel here at WWT, where AI adoption is accelerating across thousands of employees, dozens of business lines, and a global customer base. She's navigating AI's promise, its pressure, and the messy middle that every enterprise now finds itself in. Together, they're confronting a challenge that IT leaders know all too well: how do you move fast enough to innovate without moving so fast you break something you can't fix? We'll talk about the evolving role of legal inside AI transformation, the bottlenecks that still slow adoption, why governance is becoming the new backbone of digital strategy, and the lessons they've carried forward from earlier waves of disruption, from the birth of e-commerce to the rise of open source. It's a conversation about AI, yes, but it's also about trust, responsibility, and what it takes to guide a business through uncertainty without losing momentum. So let's hear more from Erika and Olivia. Well, Olivia, thanks for joining the show. How are you?

SPEAKER_01

Well, thank you for having me.

SPEAKER_00

Absolutely. And Erika, welcome.

SPEAKER_01

Thank you.

Meet The Legal Leaders

SPEAKER_00

We're happy to have you both here. You know, Olivia, we can start with you. How do you see the role of a legal department, or a chief legal officer, playing in AI transformation?

SPEAKER_02

So I think this was the case before AI, right? Risk identification and management, escalating those risk items up the chain, and implementing controls to address those risks. And we've talked about it, right? So I think AI is just another tool that presents risk that we need to manage and mitigate. So yes, very much a risk management function.

SPEAKER_00

And Erika, you know, World Wide Technology is in the midst of its own AI transformation. I'm wondering how you and your teams see yourselves fitting into that equation.

SPEAKER_01

It's about risk identification. It's about figuring out how we help the business find its footing in a very uncertain and unknown environment. That's the reality. We know what AI is, and we kind of know how it works, right? I hear the AI experts say that we don't really know, in many cases, how it works. And we're worried about keeping everyone cognizant of the need to be careful, but also to move quickly. And that's a really delicate balance. And I think Olivia and I have talked about how, in our role as general counsel, this is not new territory for us, to be perfectly honest, because this has been true with any change in regulation or law. We are always trying to figure out how to balance helping the business move forward in the face of new rules, new ways of having to think about risk, new ways of having to think about doing business because of a change in law. So I think it's a new problem, but with an old playbook that we've been working from for a while. Right.

Legal’s Risk Playbook Applied To AI

SPEAKER_02

And you know, the way we look at it, we want to empower our people with tools, right? And this is just another tool: how does that tool work, how do we roll it out to our people, and ultimately, how does that play out in the client experience? And what risks are there with that new tool?

SPEAKER_00

Right. Well, I'm a little surprised, Erika, that you said this is nothing new, because in my head, I'm thinking, you mentioned in your answer that we don't necessarily know how it works, or that's kind of a common phrase. And I would have thought that would have put the two of you, and legal teams generally, more on your heels. But it sounds like this is just a matter of embracing change and leaning in.

SPEAKER_02

We were just talking about it today. We were.

SPEAKER_01

It's like we're used to not knowing exactly what we're stepping into. With AI, I've thought a lot about two different things, because I'm old enough to have witnessed the dawn of e-commerce, right? And I remember when the internet was new and it was like, ooh, we've got to figure that out. People were afraid: how does that work? What does this mean? Can I contract online? What happens if you buy something from me over the internet? What if I try to send you something over the internet? How does that work? And then also open source, right? When open source first burst onto the scene, there was a lot of fear around that. And in the legal community, I know, Olivia, you saw the same contracts I did: thou shalt not put any open source into whatever was happening. And the reality is, it's everywhere now. AI is going to be the same. So the fear of the unknown is definitely there, but it's something that we've had to deal with before.

SPEAKER_00

Yeah.

SPEAKER_02

And I wonder, Erika, if you would agree with me, but one of the fun parts of our job is that we are constantly learning, right? Learning what open source is, learning what AI is, learning about embeddings and what that means, and learning what questions to ask, what risks we need to be looking for. So I think it's a really fun time. It's really exciting, and it's been a very busy few years learning what AI is and putting the guardrails around it.

SPEAKER_00

Yeah. I wonder, do you think the legal team, or your role specifically, is uniquely positioned to help lead the business during this time of disruption and transformation?

SPEAKER_01

I think if they're willing to involve us, the answer is yes. I think we can be an integral part of it. You know, there's always a tension between legal teams and business teams, because, listen, a lot of people are like, oh great, the lawyers are here, it's going to slow everything down. They're not going to let us do what we want. We've worked very hard at World Wide on our team, and I know Olivia and I have talked about this, you feel the same, to build that trust with the business, that we are business partners. We're here to enable. And I think bringing us in early only improves the overall outcome, because we're trained to spot issues that others are not, right? And if we can spot those early, we simply navigate through and around the problem and still get there. If you wait until the very end, then I've got to tell you to reverse, go back, turn left, add these three things. That's disruptive.

SPEAKER_02

Right. And to Erika's point, we are that value-added partner. I think we saw this, and you touched on it earlier, with that digital transformation, right? 2017, 2018, and getting ahead of that cloud journey we were all on. And I think we're very much following a similar playbook: spending a couple of years figuring out, what is this? And now I feel like we're very much in the implementation stage. And being brought into those discussions matters: okay, what is it that we need to be looking for? What do we need to plan out? Because five years down the road, we don't want to be saying, well, why did we do it this way when we should have done it another way?

SPEAKER_01

Yes. And I mean, Olivia, you're very much a part of the implementation of AI in your company. I mean, you're one of the champions for that whole project.

SPEAKER_02

And that's been exciting. Going back to what I was saying earlier, we're all about empowering our people, right? And we're really excited about the hyper-personalization that AI offers; we think that's going to be really critical for the client experience, and we want to make sure that's something that we as a firm are expressing. So we are very much moving toward an AI-first workflow. And our partnership with World Wide Technology has been a key piece of that, very critical. Working with the team has been a great experience.

SPEAKER_00

I love to hear that you're looking to empower people with AI workflows to the extent you're able. How have you gone about doing that? Because one of the things we hear from clients so often is that it's so hard to get that adoption, whether people are nervous about AI, or they're just not sure how to dive in and use the tools, or maybe they don't have the tools at all. So what are you seeing?

SPEAKER_02

Sure, absolutely. So actually, we're very fortunate. Everyone in the firm was really excited about it. Where we're hitting our pain point, our bottleneck, if you will, is more in getting over the hump of the orchestration layer. So the legacy systems, and I know we've spoken about that, but just getting everything into a unified system. That's taken a lot more time than we initially anticipated.

Lessons From E‑Commerce And Open Source

SPEAKER_00

Just to add on to that answer, why do you think that's such a hard bottleneck?

SPEAKER_01

Data integrity is a problem, I think, for every company. We've been struggling with it for quite some time outside of AI, right? Go talk to our supply chain team about how they've been trying to get better integration of the data that lives over here and the data that lives over there. And AI is only as powerful as the data that it has access to. So, garbage in, garbage out, right? If it has access to garbage data, you're going to get garbage answers. While companies have wanted that streamlined integration for a while, AI has been a forcing function for it.

SPEAKER_02

And it's a new governance framework, right? We can't just apply directly what we've been doing; we have to build a whole new governance framework around it. You know, identity and access management is very different for individuals versus agents. So it's about thinking through that ecosystem and putting those guardrails in place. I agree. Yeah.

SPEAKER_00

I mean, you're getting into a cyber discussion, at least as World Wide Technology talks about it, but at many organizations there isn't a massive cyber team. So do you think, from a governance or compliance perspective, the legal team owns governance, or is it a team effort?

SPEAKER_02

So I sit on our IT steering committee with our head of IT and also our CFO/COO. And so it very much goes hand in hand. I don't think you can talk about AI without talking about cyber. We were talking about it earlier today: the security risk, the new tools. We can't bring in a new tool without running through due diligence to see, okay, how are these agents accessing our data? Where is that data going? Is it being retained by the models? How are the models using it? So cyber and AI governance definitely go hand in hand.

SPEAKER_01

But it's not even just about the external; it's the internal too, right? Internally within your company, there's data that has to be segregated from other people. I shouldn't be able to see someone else's HR data. So it's partially cyber, but it's also just what I'm going to call good hygiene within a company around how it handles its data. Yeah.

SPEAKER_02

And it was also important for us for adoption. We needed to make sure that our employees could have confidence in the tools. So, to your point about data integrity and making sure it's clean data, we wanted to roll it out to our employees in a way that, right from the jump, they had confidence in the tools.

SPEAKER_01

And you're 100% right about that. Because if you start to roll out a tool and people start getting bad data out of it, or it doesn't work, that's a problem. It's hard enough to get people to adopt any new process, tool, or system, right? So you have to do your best to set it up from the get-go, like you're saying, so that it gives people good answers. Otherwise they'll just say, this AI thing doesn't work.

SPEAKER_00

Erika, you're pretty plugged into the industry, or at least from what I can understand, and certainly into how organizations are adopting AI. I'm curious, from your perspective, is the industry at a point right now where it understands what data it has, and is balancing structured versus unstructured data? What are you seeing when you're out in the field?

SPEAKER_01

So, you know, it's interesting. I was at a legal conference a few weeks ago. First of all, the market is very crowded. There are a lot of vendors out there. Everyone claims to have the AI that's going to change your life. I'll be honest: very few of them do, in my opinion. I think it's still struggling to deal with nuance. And depending on how these different AI products have been trained, they do better or worse at pulling out those nuances. I do think AI will revolutionize the legal industry. I don't know if it will do it in the way that people think it will. I don't know what you think.

SPEAKER_02

I 100% agree. And I find it's really helpful as a learning tool. But I do think it's going to revolutionize legal work. I don't think it's going to take our jobs, right?

SPEAKER_01

I hope it helps even the playing field a little for people who are newer or trying to get up to speed, right? They have a tool that can help them more quickly approach the level of someone who's been around a little longer. So hopefully it brings them up. But the other thing we have to be very cognizant of is that it's never going to be an acceptable answer, if something goes wrong or there's a question about why something was done a certain way, to say the AI did it. We need to own what we're putting out there. And that's why I think it's not going to replace the need for human oversight. It might help with the grind of getting some of the basic stuff done.

SPEAKER_02

I agree. And I think it's going to get us to that higher-value work faster, right? It's going to replace going through the 100 contracts. But to Erika's point, you need to have the expertise to know what to flag. Yeah.

Legal As Business Partner, Not Roadblock

SPEAKER_01

And that's one of the things I was talking with another professional about: part of the way we learned how to find those issues was just slogging through it, right? Slogging through that 80-page agreement and reading every word and finding it. Okay, AI can do that faster. But how do you replace the learning that comes with doing? So for the new professional, like I said, I have no objection to them using a tool to help them get through that faster, but they still have to learn. They have to go back and see what just happened, and why it happened, and understand the implication. Otherwise, you're just a word processor.

SPEAKER_00

Yeah. Treating AI more as a collaborator, as opposed to just, give me an answer, give me the detail, whatever it might be. Erika, I want to go back to one of the things you mentioned earlier, which was, I think you said, if they're willing to include us early. What does the business need to know about the legal side of things to keep fostering that collaborative relationship, so that you're moving forward together and not pausing anything?

SPEAKER_01

I think one of the most difficult things is just to realize that we all come with different experience, understanding, and training, right? A lot of times there's simply an assumption that there are no legal issues here. It's not that someone affirmatively doesn't want to talk to Olivia; they don't even think there's a legal issue. There are legal issues. If somebody decides they want to build an AI robot, please come talk to me about that before you do it.

SPEAKER_00

Yeah. As people are building that robot, or adding a service, or going out and getting their own license for whatever new AI tool is out there, is that creating a burden on legal teams?

SPEAKER_02

Yes. And going back to the cybersecurity piece, the tools for monitoring our employees' use of AI were not there, right? They came after AI. And so that is definitely a concern of ours: monitoring the use of AI tools.

SPEAKER_00

Yeah. Is that something we're seeing either within WWT or with the clients we work with every day?

SPEAKER_01

Yeah, both. I mean, we have a close collaboration with our IT team, similar to what Olivia was saying about her collaboration with her IT peers. We work very closely together, and we've established, I think, a really good cadence of, listen, when you guys get to some gate point here, please call me, and then we'll figure out whether we need anything. And if it's a quick yes, then we move on. And if we need to talk about it, we will.

SPEAKER_00

Yeah. Let's move on to regulation and policy. I mean, that seems to be an instance, but correct me if I'm wrong, where innovation and the business are trying to move faster than regulation and policy can be rolled out. What type of dynamic does that create, not only for you, but for organizations of all kinds? And what are you seeing in the regulatory landscape?

SPEAKER_02

We could spend an hour just talking about the regulatory landscape. We've already seen this playbook, right? We touched on the digital transformation, 2017-2018. So cloud, data privacy, data sovereignty, right? We had the EU coming out with GDPR, and California followed quickly behind. I think we'll probably follow a similar playbook on the regulation side. We know the EU AI Act is coming through next year. States are already starting to murmur about putting out their own AI regulations. So I think it's going to be very similar to what we saw with the privacy regulations. And unfortunately, what it creates is a really big patchwork.

Adoption Hurdles: Orchestration And Data Integrity

SPEAKER_01

And that just creates a lot of uncertainty for us, for sure. And I worry about the AI legislation that will come out. I don't know quite how to say this, but there's a lot of conversation around, well, are you worried about the ethical implications of AI? It's not that I'm not worried about it, but that's more about understanding and questioning how the AI model was trained and developed, right? That's something that happens before it gets to me. And a lot of these laws, I think, are so focused on that other side that they're missing the ability to come up with a practical solution that helps businesses understand what they need to do to comply.

SPEAKER_02

And I think we can agree in theory that human-in-the-loop is critical. How to get that human in the loop, I think, is where you're starting to see some of the delay on the regulation side. Right. And the monitoring and the reporting. Yeah, I think that's causing a lot of delay.

SPEAKER_00

Yeah. Erika, do you think it's just a misunderstanding of the term ethical AI? Or do you think people are applying that to mean, you know, displacement of massive amounts of jobs, or what?

SPEAKER_01

I think some of it is differences in how people are using the term, right? I mean, listen, there is a real moral slash ethical concern to think about. If you think about some countries in the world, the majority of their population engages in labor that is service oriented. If all of a sudden tomorrow you came out with a thousand AI robots and put them on the ground, you would create mass unemployment. What would those people do? So there is that element. That doesn't mean the AI itself is unethical. The question is, what is the right deployment of AI within the society that you are currently in, and what makes sense for you?

SPEAKER_02

We like to look at it through a responsible AI lens, right? So that framework: identifying, okay, these are the risks; what are we doing to mitigate those risks? Correct. And that responsible-AI-first posture has been really helpful for us in looking at it from an ethical perspective.

SPEAKER_01

Yeah, but you have to think about the consequences of what you're doing. Once upon a time, this happened to the horse-and-buggy driver.

SPEAKER_00

Right.

SPEAKER_01

Right? The Model T arrived, and all of a sudden the horse-and-buggy driver was going, no crap.

SPEAKER_00

Yeah. Right? But AI is moving so rapidly that the goalposts on what's responsible feel like they're moving at times. So should policy and governance be more like a software rollout, where every week something is changing or there are new releases? How do you approach that when it's going so quickly?

SPEAKER_02

So I think transparency will be key. As long as we have a transparent framework, I think we'll get there. And this will settle over the next couple of years. We'll see how the Supreme Court comes down on AI. But I do think transparency will be key.

SPEAKER_00

Yeah. Both of you have mentioned that we've been through this before, that there's a playbook for it. So what are some of the lessons, maybe learned the hard way, from those other eras of innovation or breakthroughs that should be applied right now?

SPEAKER_02

I would love more of a carrot than a stick. The GDPR really got people's attention, right? Because you were talking about fines of up to 4% of revenue, and you were getting those big headlines. So I'd love it if regulation takes more of an innovation approach rather than the stick approach. Yeah.

SPEAKER_01

And then I would say one of the things I think is most important is: don't be afraid of it. Fear of it isn't going to stop it from happening. So you need to embrace the fact that it's here, and then you have to figure out, how does this work for us? How does it improve our lives, our company, our business, our customers' business? You have to figure that out and be more open to innovation. I agree.

SPEAKER_00

A little bit of a hard pivot here, but from a legal perspective, what's keeping you up at night right now, knowing that AI is moving rapidly? All of our employees are using it. People are using their own licenses of ChatGPT, they're using approved tools. It's a long list.

Building New Governance With Cyber At Core

SPEAKER_01

I was going to say, that's it. What you just said. From my perspective, it's the unsanctioned use of random tools. They may be fabulous tools; there's no judgment on the tool, right? It's just the fact that they're tools that haven't run through our IT department, so we don't know about the security, or we don't know what kind of license applies to whatever data is going in. And then, frankly, the creation of records that are outside of our control in terms of document retention and purview. You know, people like to hang on to things, documents and things. And, as Olivia and I will tell you, I have never encountered a situation where someone hung on to some document from 30 years ago and it was a good thing.

SPEAKER_00

Never. What is the risk of what you're just talking about? Is it just data leakage, or is it cost? What are some of the risks or implications?

SPEAKER_02

So, having those records and having to produce them in litigation. That would be the biggest risk.

SPEAKER_00

Yeah.

SPEAKER_01

Yeah. And then with the unsanctioned tools, you do worry about whether there's leakage of that data over to the provider of that tool, right? Whether they have access to that information and are using it to retrain or improve their tool. With our enterprise license of ChatGPT, for example, that all stays in our universe. OpenAI doesn't get the benefit of the queries that I put into it. But if I go out into the world and just license some AI tool that's available on my iPhone or on Android, that may not be the case.

SPEAKER_02

Yeah. A big part of my year last year was spent on making sure that we had that walled garden, right? So your data is not leaving that walled garden and being retained by the LLM, or used to train the LLM.

SPEAKER_00

But is it realistic to think that we can control that? I mean, I'm sure I'm one of your problem children; I've got ChatGPT open. So is there a balance between the ideal and the realistic?

SPEAKER_01

It's a question of what you should be putting into it, right? And again, this is why I think these are not new problems. If it was confidential before, it's confidential now. Just because AI is in the name of the tool, that doesn't suddenly sanction it. So it's really about making sure we're being good stewards of our data, good stewards of our customers' data. Just think about how you'd feel if you found out that, at your hospital, your doctor was using some random AI tool to dictate all the medical notes from your visit.

SPEAKER_00

Yeah.

SPEAKER_01

So you probably wouldn't be happy.

SPEAKER_00

No. So it sounds like it's still common sense. Back to the basics. Just do what is intelligent.

SPEAKER_02

Well, and I think training is a big piece, right? Making sure all of our people are trained to know what you should be doing and what you shouldn't be doing. That's been a big focus of ours.

SPEAKER_00

Yeah. Let's shift forward a little bit. What's on the horizon as it relates to AI, either in the legal community or in how it might transform the work the two of you and your teams do? What are you excited about, or maybe nervous about?

SPEAKER_02

You want to go first? So, from a client-experience standpoint, as I mentioned earlier, I'm really excited about the hyper-personalization. And from my side, from my own work, I'm most excited about getting to the higher-value work faster and not having to deal with so much of the administrative burden. Right? Oh, where did I save that document? So the unstructured data, and being able to work with it using AI tools, I'm really excited about that.

SPEAKER_01

Yeah. I mean, similarly, we're really focused on efficiency gains within the department: finding the low-hanging fruit of the repeatable stuff that, at the end of the day, is pretty simple, but still today requires a lot of human intervention. So if we can take some of that off the table, or at least get to a turn faster, for us it's time-to-deal that matters most for my department. We are laser focused on getting our salespeople to their deal as fast as possible. So that's what I'm most interested in.

SPEAKER_02

And the meeting summarizations, right? So that's been a game changer, right?

SPEAKER_01

And research, you know, that's one thing that is pretty universally good about AI right now, especially with these new AI rules. I can type a question into ChatGPT, because this is not proprietary, and say: what does the EU AI rule say? I can type it into as many AI tools as I want. And that gives us a lot of good data that we can then validate, but it's pretty good at summarizing a lot of that stuff.

SPEAKER_02

Right. Yeah. Just the sheer volume of information that we have to go through. Yeah. Right. No one human can get through all of it. And so that is really very helpful, very, very helpful.

Market Reality: Crowded Tools, Real Limits

SPEAKER_00

Well, recognizing, and we're coming up on the bottom of the episode, but recognizing the fact that, you know, we probably don't have a lot of lawyers out there listening to this podcast. It's probably more of an IT audience. What would be some things that you want to communicate to the business, or to practitioners of IT? And maybe it's some things that you've already mentioned, but are there key things that they should keep in mind or remember as it relates to implications for the business?

SPEAKER_01

So really understand what the product is you're trying to implement. Also understand what your end goal really is, too. What are you trying to solve? There's a lot of chasing of shiny tools that don't necessarily solve problems. And if you come to us with those early, we can help understand what legal issues might be there, but we can also help you problem-solve around some of the roadblocks.

SPEAKER_02

I would echo what Erika has said. You know, look at us as a partner, right? Because we might have seen something, right? We might have already had the experience working with a tool, and we can maybe add some value there. I would say data privacy is a big one. And so knowing where the data is going and being used. And also data governance. I think that's a big piece, right? And so making sure that that data governance framework is established and being monitored and maintained, I think that's really important as well.

SPEAKER_00

Well, to the two of you, thank you so much for stopping by and joining us. I know your schedules are busy, so I appreciate the time.

SPEAKER_02

No, thank you for having us. And if I could just say thank you to World Wide Technology: the thought leadership that you all put out there has been really valuable and helpful. So thank you for doing what you do. Thank you.

SPEAKER_00

We appreciate that. Yeah, absolutely. Okay, in listening to Olivia and Erika, one idea rises above the rest. AI transformation is so much more than a technology project; it's a governance project. Yes, it's about models and data and tools, but what really determines whether AI becomes a competitive advantage or a liability is the strength of the guardrails around it. If you want adoption to stick, if you want trusted outcomes, your legal and security teams have to be at the table from the very beginning, helping you build the walled garden, define responsible use, and keep people confident that the system is working for them and not against them. This episode of the AI Proving Ground Podcast was co-produced by Nas Baker and Kara Kuhn. Our audio and video engineer is John Knoblock. My name is Brian Felt. Thanks for listening, and we'll see you next time.

Podcasts we love

Check out these other fine podcasts recommended by us, not an algorithm.

WWT Research & Insights

World Wide Technology
WWT Partner Spotlight

World Wide Technology
WWT Experts

World Wide Technology
Meet the Chief

World Wide Technology