AI Proving Ground Podcast: Exploring Artificial Intelligence & Enterprise AI with World Wide Technology

Agents, Copilots and Beyond: Everyday AI's Jordan Wilson on the Future of AI in the Enterprise

World Wide Technology Season 1 Episode 35

In this episode of the AI Proving Ground Podcast, we talk with Jordan Wilson, host of the popular Everyday AI Podcast, to unpack the realities of enterprise artificial intelligence (AI) adoption. From tool sprawl and failed pilots to executive sponsorship and agentic models, Jordan shares lessons from thousands of conversations with enterprise leaders — and reveals why soft skills and unlearning old habits may be the ultimate keys to success.

Support for this episode provided by: Cyera

Jordan Wilson is a seasoned digital strategist, a 'Top AI Voice' on LinkedIn, a top-rated Generative AI instructor on Coursera and an AI keynote speaker. He hosts the Everyday AI podcast and leads Everyday AI, a fast-growing media and consulting company helping professionals grow their companies with Generative AI. Everyday AI fills the AI knowledge gap between the demand for GenAI skills and the lack of business training.

Check out Jordan's podcast: Everyday AI

The AI Proving Ground Podcast leverages the deep AI technical and business expertise from within World Wide Technology's one-of-a-kind AI Proving Ground, which provides unrivaled access to the world's leading AI technologies. This unique lab environment accelerates your ability to learn about, test, train and implement AI solutions.

Learn more about WWT's AI Proving Ground.

The AI Proving Ground is a composable lab environment that features the latest high-performance infrastructure and reference architectures from the world's leading AI companies, such as NVIDIA, Cisco, Dell, F5, AMD, Intel and others.

Developed within our Advanced Technology Center (ATC), this one-of-a-kind lab environment empowers IT teams to evaluate and test AI infrastructure, software and solutions for efficacy, scalability and flexibility — all under one roof. The AI Proving Ground provides visibility into data flows across the entire development pipeline, enabling more informed decision-making while safeguarding production environments.

Speaker 1:

From World Wide Technology, this is the AI Proving Ground podcast. Today and every day, the ground beneath the AI world seems to shift: new tools, new breakthroughs, new challenges. For most of us, keeping up feels impossible, but for Jordan Wilson, that's the job. Jordan is the host of the Everyday AI podcast, a daily podcast and newsletter that's become one of the most consistent, practical voices in a space known for noise.

Speaker 1:

Jordan has subscribed to and experimented with hundreds of AI tools, spent thousands of hours in the weeds and makes it make sense for the rest of us. And while his show mostly helps everyday professionals get smarter with AI, his vantage point offers something rare: a front-row seat to how the technology is actually being adopted and stalled inside the world's biggest companies. This episode is for the leaders standing at that crossroads, the ones who know AI can transform their business but aren't sure how to move from experiments to real impact. So stick with us as Jordan walks us through how to make AI work at scale inside your organization. Let's get to it. Jordan, how's it going? Man, I am loving having you on the podcast.

Speaker 2:

Ah, it's going well. Yeah, it's a treat. From, uh, you know, having to do my own podcast every day, when I get to come on another one like yours, I'm excited. Anytime I can, you know, double down and talk AI twice in a day.

Speaker 1:

Uh, you've told me before you've subscribed to hundreds, if not thousands, of AI tools over the last several years. Just to kind of establish where you're coming from from an AI perspective, walk us through briefly kind of your AI journey: where it started, how you started subscribing to all these tools and a little bit of the expertise that you've picked up along the way.

Speaker 2:

Yeah, so funny story for everyone else listening, I'll just jump straight into it. I've actually known Brian here for a very long time; we were journalists in college. So, you know, eventually I started my own company, and one thing that we did, we did a lot of, you know, content writing, SEO, marketing, advertising. And, you know, about two years before ChatGPT was released, OpenAI actually released its earlier technology, GPT-3. And there were a couple of, you know, software companies out there that started using it right away. So my company and our team, we started using those back in 2020.

Speaker 2:

And there was a certain point, you know, starting to work with these, you know, kind of precursors to today's large language models that we all, you know, know and many of us love. There were some growing pains, right? Initially, you know, you'd hit your head against the keyboard after you got over the fact of, like, wow, this technology is great. But it got to a certain point, when I learned how these models worked and when I, you know, understood the basics of, you know, prompt engineering, I'm like, wait, these things can already write at a level that, you know, me and Brian used to write at. So it was really that, and that was kind of an eye-opening point for me and really what ultimately gave birth to the company after that, Everyday AI. So from there it spiraled.

Speaker 2:

Some of the pre-ChatGPT AI writing tools, I think I literally had at least 50 to 60 subscriptions to different ones that were just precursors to ChatGPT. We didn't use them all, right, but I at least went in there and tested them, and I think, you know, it was a lot of that early, you know, sandboxing of those tools that, when ChatGPT did come out, you know, I think that was one of the reasons that myself and my team, you know, we were able to really excel in what we were able to get out of the systems, because we had put in, you know, at that point, hundreds of hours into better understanding the technology, or at least what it was, up until that point.

Speaker 1:

Yeah, absolutely. You know, the expertise that you've built over the last handful of years, if not even a little bit more than that, has given you pretty much a front-row seat to the development and, you know, kind of usage of AI. Give us a little bit of a level set on where you think the industry stands right now from an enterprise perspective. You know, your show, Everyday AI, does talk a lot about enterprise AI adoption, but you also focus on the common, everyday AI user. Our podcast is more, you know, skewed toward the enterprise audience. Where do you see the AI industry right now? What's the opportunity? What are the challenges?

Speaker 2:

Sure, it's changed. It's changed a lot. You know, if you would have asked me this three or six months ago, I think I would have had a completely different answer. But I think where we're at now is most enterprise companies, whereas in early 2022, 2023, you know, a lot of companies were spending six, seven figures, you know, building their own versions, right, and, you know, investing heavily in building RAG pipelines and, you know, all of that good technical stuff, a lot of which is, you know, over my head.

Speaker 2:

And I think where we are today, right, is, as we look at some of the big players in OpenAI, Google, Anthropic and Microsoft, they're starting to make enterprise data instantly accessible, in the same way that it kind of always has been if you're using Microsoft 365 Copilot. I know there are some uphill battles when it comes to learning curve and adoption with Microsoft Copilot, right. But where I see enterprise adoption now in AI, I think it's a lot of enterprises realizing, wait, something like ChatGPT, this can be an enterprise tool, right? Something like, you know, the front end of Google Gemini. Yes, you know, back-end developers love it. But I'm seeing more and more teams, you know, start bringing in hundreds or thousands of employees into these front-end AI chatbots that we all, you know, maybe two years ago, thought were just kind of these fun toys that helped us better respond to emails.

Speaker 2:

And, you know, what I've been saying all along, I'm like, no, these are AI business operating systems, right. And smart enterprises, we've seen, have started to move their day-to-day work inside of these models, even if they do have their own, you know, company model that they've fine-tuned. Even if they're, you know, as an example, a Google organization or a, you know, Microsoft organization, they're not just sticking to the respective, you know, large language models of those companies. You have large teams, large organizations, moving completely into these.

Speaker 2:

You know, what we think of as kind of these front-end AI chatbots, because of their capabilities and how much, you know, they've expanded over the last couple of months. You know, one very quick example: ChatGPT adding something like connectors, right. So now, as long as your IT team gives you the thumbs up, you can connect your enterprise data to ChatGPT. So it's not quite what you would get out of RAG, but I like to say it gets you 80% of the way there in 2% of the time with 1% of the cost. So, yeah, when it comes to enterprise adoption, I think people are taking these quote unquote AI chatbots very seriously as being a part of their future of work and their day-to-day operations.
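To put that "80% of the way there" comparison in context, here is a purely illustrative, toy sketch of the kind of retrieval step a do-it-yourself RAG pipeline has to build and maintain, the plumbing a managed connector largely takes off your plate. The documents, embedding values and names below are made up for illustration, not any vendor's API.

```python
# Illustrative only: a bare-bones retrieval step of the kind a DIY RAG pipeline
# needs before it can stuff relevant context into a prompt. Managed connectors
# hide this work. The embeddings here are toy vectors, not real model output.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pretend these are document chunks embedded by your own pipeline.
documents = {
    "q3_sales_summary": np.array([0.9, 0.1, 0.3]),
    "hr_onboarding_faq": np.array([0.1, 0.8, 0.2]),
    "product_roadmap": np.array([0.7, 0.2, 0.6]),
}

query_embedding = np.array([0.85, 0.15, 0.4])  # embedding of the user's question

# Rank chunks by similarity and keep the best match to include in the prompt.
ranked = sorted(
    documents.items(),
    key=lambda item: cosine_similarity(query_embedding, item[1]),
    reverse=True,
)
print("Most relevant chunk:", ranked[0][0])
```

In a real pipeline you would also handle chunking, embedding generation, refresh schedules and access controls, which is the ongoing maintenance cost the "2% of the time" remark is pointing at.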

Speaker 1:

Yeah, I like how you put that, business operating systems, and how AI can serve that. I think a lot of times, you know, you might be talking to somebody and, hey, what's the value of AI? And it's always, it's recapping my meeting, or it's doing something, you know, automated with my calendar, or it's helping me generate content. Do you get the sense that that's just scratching the surface? And if so, because I think that's probably the case, what are some of, like, you know, the optimal use cases or obvious use cases that an enterprise can pursue using AI with that kind of, you know, business operating systems mindset?

Speaker 2:

Yeah, it's changing quickly, right, like everything else is. I mean, you're seeing some enterprise companies acquire AI browsers, right, and bringing, like, an agentic browser to the entire organization, right. So we can go down that rabbit hole if you want. But I mean, what I'm really seeing right now is the focus on taking what a large language model is capable of, especially since, you know, they've gone to reasoning by default, right. So, not gone, but essentially gone are the old, quote unquote, transformer-only large language models. And now, for the most part, the models that we all use are agentic by default, right. They can reason, they can think ahead, they can plan in a very similar way that a human can, and then you add all of the scaffolding and tool use that these agentic models have. Right, not agents, but they are agentic models, and I think that really changes how we should be looking at these.

Speaker 2:

Previously we thought of these as consumer large language models, but again, they're becoming enterprise tools, something more than just, like, yeah, help me, you know, recap this, summarize this. Instead, it's like, hey, here's five different pieces of information, right: here's my HubSpot access, here's my Notion account, here's my Google Drive, here's my Teams, one prompt, or on the agent side as well, being able to connect and carry context and the scaffolding over from those different pieces of data. I mean, that's what humans do, right? So I think the conversation is slowly starting to shift, and I think it'll probably take another two or three quarters until the majority of people actually realize what today's large language models are capable of, because it is, for the most part, capable of what your average, you know, entry-level employee is able to do.

Speaker 2:

It's being able to access, understand and synthesize information from your business data, personalize it and create new business value by connecting all of those pieces. I mean, when you think about it, and, you know, obviously throw in, you know, research on the internet, that's what a lot of knowledge workers do, right? We look at company data, we think about it, we plan, we take different pieces from all over the place, we put it into something that is hopefully presentable and a certain deliverable, and then we create new business value with whatever it is that we kind of piecemeal together. So that's what models today can do on their own, and, you know, end users don't really need to have a whole lot of technical expertise. It's like, all right, make sure you use the right model, make sure you're connecting it to the right data that you have access and, you know, rights to, right, and then from there, I mean, the models are shockingly capable.

Speaker 1:

You and your podcast, you're a prolific content creator, and, you know, you have been ever since the days when we were journalists together. But, you know, you're pumping out episodes every single morning, and if you're not already a listener, go and check out the Everyday AI podcast. Subscribe to their newsletter. It's a fantastic resource, and a lot of times you're taking these complex AI issues and features and making them digestible for the everyday AI user. And I know that's something that has been important to you over your time with the podcast, making sure that early-in-career, novice AI users are able to pick up these topics and put them to good use so that, as you say, they can become the expert within their company or department. I'm wondering, thinking about some of the lessons you've learned from how you're enabling people, which of those lessons apply to enterprise AI adoption, that a big company could take away from how an individual or a group of individuals picks up AI?

Speaker 2:

Where do I start? I'll start with one word, then we can unpack it: I think unlearning. I think unlearning is key. You know, over 2024 and 2025, one of the buzzwords we throw around is upskilling or reskilling in the age of AI, but there are, I think, a lot of telltale signs between what might make a person or a team or an organization successful, specifically when it just comes to leveraging large language models, which is what I think, as an example, 90% of the Fortune 500 uses ChatGPT, so it's a big chunk.

Speaker 2:

And I'd say what I've seen personally and have observed through so many conversations is how people learn and actually leverage AI. And I'd say it falls into multiple categories, but I'll talk about the two different ends of the spectrum. Number one is you have your kind of hands-off automators, right. Not saying that's a bad thing, right, but there are downsides to that. When you realize the power of these large language models and you're like, let me kick the majority of my work, or as much of my team's work as I can, to a model, you know, sit back, see what it spits out and, you know, try and work with it, versus those that are augmenting, right? I think the people and the organizations that are getting the most out of it aren't just blindly kicking their most important tasks and deliverables to a large language model, right. I think you really have to want to learn and to be better, right?

Speaker 2:

Let's take writing. That's an easy example, right, because that's, Brian, kind of like how you and I originally met each other. If you and I were using an AI tool just to write for us blindly, eventually our writing skills go down, because we're offloading it and kicking it all to, you know, a large language model. But if we go through it in an iterative process, like an editor, right, someone with a red pen marking your story up, right.

Speaker 2:

If you go through it like that, as a brainstorming partner, as someone that is intentionally trying to poke holes, not only will the end deliverable be better, right, and obviously working with it in a very collaborative fashion, not copy-paste, you know, prompting, that's not going to help anyone. But if you're working with it iteratively in a collaborative fashion, ultimately not only will the deliverable be better, but me, the human, my skill set in writing will actually get better. So that's kind of a very long way to show, you know, two very different ends of the spectrum: companies that are maybe just, you know, taking advantage in all the wrong ways, and then companies that are not just, you know, saving time and finding a positive ROI on GenAI, but their people, right, the people are getting smarter, more capable, because they're augmenting instead of just offloading.

Speaker 1:

Yeah, I think back to one of the things that you said to me a while ago, because sometimes I'll listen to your show and I'll hear about all the awesome things that AI can do and naturally, I think probably for a lot of people it's scary sometimes because it's creating so much change.

Speaker 1:

It's scary to think about what that will do. But, you know, one of the things that I've spoken to you about that has kind of reassured me is that, as a writer or as just kind of a general curious person, that type of personality and that type of persona is perhaps one of the best situated to take advantage of AI. And I'm going to parlay that into a question for organizations out there today that are trying to figure out which of their employees are most primed to pick up AI and adopt it and utilize it to its full extent, or to whatever they're capable of. What types of skills, soft skills or otherwise, should those organizations be looking for? Recognizing that, like, you know, it does take some, you know, some Q&A against the large language model to make sure that it's right.

Speaker 2:

Yeah, I think you hit it on the head when you said soft skills, right. I think soft skills, at least in the, you know, AI-everywhere kind of workplace that we're all faced with now, soft skills are extremely important. Being curious: extremely important. Language skills: extremely important. Persistence: extremely important, right? So, yeah, a lot of people, and, you know, I have people ask me all the time, and for years, right, like, oh, my kid's going to school, what should they do? Should they do computer science? Should they do English? Right, there's no blanket answer, because it depends.

Speaker 2:

But from an organizational standpoint, when you are wondering, hey, who needs to be on my champions team?

Speaker 2:

For, you know, AI, it's the curious people, it's the people who are going to ask the right questions, and we can get into, I'd love to talk about, like, first-party reasoning data, right. But you need people who can ask questions, who are good listeners, who are solid communicators and who can poke holes, find out where the leaks are, and then can talk and bring the right people together to fill those gaps. I think so often organizations look at the most technical people, right, and they're like, all right, who are the most technical people, who are the people that we can slap a label on their desk that says prompt engineer, right? I don't think that's the right approach. You need people who are experienced in change management. You need people who are solid communicators, who are curious and, more importantly, they have to be adaptable. And, yeah, so a lot of times it is soft skills.

Speaker 1:

Yeah, well, based on all the conversations that you've had with enterprise leaders, Jordan, do you get the sense that they are looking for or prioritizing people with those skills, or is that still a gap, where they're thinking they need to go to the technical person or they need to have a real sophisticated prompt engineering team?

Speaker 2:

If I'm being honest, most companies haven't figured it out yet. You know, companies that, you know, hire us to help on their front-end strategy and consulting, they're not sure, and I think one of the reasons is the pace of innovation is like nothing we've seen. So they could have identified someone, you know, a year ago, because I think a year ago you might have needed someone that was a little bit more technical, right, because a year ago you probably did need some sort of prompt engineering. So maybe it was someone that had some soft skills, but not, like, coding engineering skills, someone that understood the technical side as well. And even this concept, right, to just make it simple, prompt engineering: I think it's this term that I both hate and absolutely love. All it is, it's the process that a human goes through to get the most out of a large language model, right. So it used to be much more technical, and it's not anymore. So I think some organizations, you know, it took them too long. They looked at, you know, AI implementation or their AI strategy like they would any tech innovation, any digital transformation, which I think is a huge mistake. You have to be in a rush. You have to hurry, right. You have to slowly and carefully sprint, and I don't think most enterprise organizations did that. So, you know, they probably identified some people, you know, maybe with ML backgrounds, maybe with AI backgrounds, especially larger enterprises that have those people and have had those people on their roster, people in data science, data analytics, and I'm not saying those are the wrong people, but you also need someone experienced in change management, because if you want to get the most out of AI in your enterprise, you have to go back to what I said earlier: you have to unlearn. You have to literally unlearn certain skills, certain routines and certain good habits that have made you, your team or your organization successful in decades past, right. So I think that's where you have to find someone that can simply say that, right, and I had to go through this myself, right? I was a writer, but I had to go through a process that said, hey, this AI is better at writing than I am. So if I'm going to use it, I can't just sprinkle, I can't just upskill myself, I can't just reskill. I have to unlearn the writing process, right, because when I can work with AI, I'm capable of 10X, 20X, right. But that's only when you unlearn it. If I'm just sprinkling AI on top, my writing skills are going to go down. So apply that to anyone with any skill set that they would be applying toward a large language model. My data analysis skills would go down. My competitive analysis skills would go down, right, whatever it is that we're talking about.

Speaker 2:

So, yeah, I think it's a balance of finding the right person, but the problem is that person is changing because of the rate of the technological change, which is so hard to keep up with. I spend hours every single day only doing that, and I can't do it, right. So it's like, what you need, who you need, is constantly changing at a pace. When you look at, you know, the internet, cloud, mobile, right, other tech transformations of decades past, and what we're looking at now with generative AI, you can't use your prior roadmap anymore, because it's moot.

Speaker 1:

I want to pivot a slight bit here and talk about the concept of tool sprawl, because that is something that organizations World Wide Technology deals with talk a lot about, and that's not anything new. But with AI it's kind of, you know, rocket-boosted. You have tool sprawl on purpose; you subscribe to all these different tools, and so you're collecting them on purpose. But I am curious, moving forward, do you think organizations need to have a ton of tools in their toolbox? Do they need to consolidate? Is it kind of like a living, breathing organism, where sometimes you're ballooning up, sometimes you're going down? What do you think? Because, you know, obviously people want to get toward as much consolidation as they can so that they can stay on top of it. But, to your point, you always need new, different things. New tools are being released. They tackle different use cases. So what do you think about that concept?

Speaker 2:

Yeah, it's a good question. It's one of the major reasons why certain organizations aren't getting more out of AI than they would expect to. I did some work with a company last year, and even within a small marketing team, they were using more than 50 AI tools, and after a couple hours of conversation, I think we finally opened their eyes on why that was the reason that some of their efforts were stalling. Yes, I use a ridiculous amount of AI tools, but most of them I'm usually only sandboxing, right. But if you look at what our team is using on a day-to-day basis, it's actually not a lot. It's a lot fewer than a lot of people might realize. And I think, whether you're a small or medium business or a Fortune 100, you need to have, like I said, an AI business operating system, right. I've been a weird guy screaming about that for years. So that could just be Microsoft Copilot, but actually training your organization on that, which is a big thing people skip. Like I said, I've seen a lot of Microsoft and Google companies, companies that use those, right, as their email provider, desktop software, et cetera, still use something like ChatGPT or Claude. But you need to pick that business operating system and you need to start consolidating tool use, right. So if you do have, you know, three, four, five, a lot of companies, especially in the creative space, are still using five, 10 or more different AI tools. I'm not saying you have to get rid of all of them, but the big companies, that's where they're going. I mean, look at what Google's done over the last couple of months, right? So all of a sudden, they have a state-of-the-art AI video model. They have a state-of-the-art AI image model, right, that's gone mega viral over the last couple of weeks. They have extremely capable text-to-speech models. So, you know, you could have started collecting those and starting to build your processes around those.

Speaker 2:

But I think you have to look at all these tools, and if you think that one of these tools, I always say, do they have a trillion-dollar market cap, yes or no? And if the answer to that is no, then you have to be able to have a conversation about, okay, what happens if this tool, if this company, if this startup, is gone next month? Right. So I'm not discouraging people from exploring different, you know, generative AI tools, but you shouldn't wrap one of your most important day-to-day processes around a tool that is not one of the big boys, if I'm just being honest, right. It's way too risky. You know, or at least have backups and redundancies, whether we're talking about

Speaker 2:

you know, actual large language models, creative tools, you know, AI agents, it doesn't matter. Again, I think a lot of them don't have as much staying power, you know. Or if OpenAI or Google or Anthropic or Microsoft, you know, adds one feature, flips one switch, you know, 10 of those tools that you thought were irreplaceable and are now ingrained in your day-to-day processes, they might go out of business very quickly, right? So, again, I think you just have to be very strategic with AI tool use. This episode is supported by Cyera. Cyera offers data security posture management to discover, classify and protect sensitive data. Enhance your data security with Cyera's automated protection solutions.

Speaker 1:

Another quick pivot here. I want to talk about that MIT study that came out, at this point, a couple weeks ago. You had an episode shortly after that report released, and, as a reminder for anybody that is listening here, the headlines that I'm sure you saw were that 95% of AI projects fail to deliver ROI or get out of a POC stage. You roasted this study on one of your episodes. It was an entertaining episode, to say the least. I think there's a nugget of wisdom there, a kernel of truth, certainly sensationalized. But give me your take on that study. And then, on the back half of your answer, what's your perspective on why a lot of AI projects stall, or do they just need more runway to work?

Speaker 2:

Yeah, great question. And I think this is where, you know, it's just important to very briefly touch on my background, right? My background's in journalism and then marketing and now AI, you know, if I had to split my career into thirds. Part of this, it's the media's fault, right? They saw this study: oh, that's a sexy headline, we're going to get clicks from this, right? It's obviously, um, you know, I won't say funny, because it's sad, right? A lot of big publishers are losing money because of, uh, you know, AI answer engines, so they're not getting clicks, so they're having to write things, and a lot of times it's about AI, that drive clicks, right? If it doesn't bleed, it leads, or, sorry, if it bleeds, it leads, right. So I will say, I think a lot of the media didn't read the study. The study did not take long to read. It was, I don't know, 30-something pages, took an hour. I know most of the media companies did not read the study, because you can tell they didn't read it, because if you had read it, you would see that it's not a good study. It's one of the worst that I've ever read, and I read a lot.

Speaker 2:

So let's just go to the, uh, the main stat there, right, this 95% failure rate of AI pilots. It was based on 52 interviews. Yeah, go read the study. It was based on 52 interviews, and these weren't even, um, auditable or, uh, quantifiable interviews. These were qualitative interviews, right? So, uh, essentially, the authors labeled them as directionally accurate, right, saying, like, these are just estimates and this isn't even precise. On 52 interviews. That's where that 95% stat came from. And not only that, I mean, that's a laughably small sample size, right, but the way that they judged success on these AI pilots, it's hard for me not to chuckle, because it's atrocious, really. They judged that it had to show a positive ROI on the P&L within six months, right? Like, that's asinine.

Speaker 2:

Um, this is one of those studies where, I think, unfortunately, um, companies are intentionally doing this, uh, because they know that they can get good play and good press and they can get eyes on their product, and ultimately, this was a product, right, that MIT was promoting. Essentially, they said, oh well, the reason all these AI pilots are failing is, you know, people don't know how to use AI, but this is like an infomercial, a really bad infomercial. And then at the end they're like, all right, but here's the answer, right. It was actually funny. They said the answer is, you know, like, you need to start using these other protocols, and they literally put their, uh, MIT NANDA, which no one had ever heard of. Uh, you know, they were trying to place it, um, you know, on the same pedestal as Anthropic's MCP protocol and Google's A2A agent protocol, right. So they said, well, the answer is tools like this, right? But, oh, their MIT NANDA, like, if you want to use that commercially, it costs $250,000 minimum, reportedly.

Speaker 2:

Right, not only this, and I'm going to stop after this because I don't want to turn this into a 30-minute riff, right, but you couldn't even get the study, right? It was a fully gated report, which, in the AI community, is a big no-no and a huge red flag. You don't gate a report behind a Google form. And then they just selectively sent that out, right? I didn't get it. So many other people I know in the AI education space didn't get it, and very few people got it.

Speaker 2:

And then they published the study. So that has to tell you something, when they don't make a study publicly available. And also the timing, I mean, the timing of it coming out around the same time as a lot of quarterly reporting, things like that. I mean, they knew that they were gonna get a lot of play, and, unfortunately, journalists and the media, they shoulder a lot of this blame, right?

Speaker 2:

You can't just look at a shoddily done study like this, and I'll say it's more marketing than a study, and you can't just copy-paste the headlines without actually reading it. Because if you actually read it, I mean, another thing: huge self-selection bias, right. The interviewees were organizations that were, quote unquote, willing to discuss AI implementation challenges, right. So if someone asks you, hey, are you willing to discuss AI implementation challenges, well, what if you don't have challenges? Right, all of a sudden, that skews that 95%. You're only selecting from the world's smallest pool; I can go sample more than 52 people right now at the grocery store, right. And if you're only asking for people that are having challenges with AI, yeah, that is the number one self-selection bias red flag, right there.

Speaker 1:

Yeah. Well, tell me how you really feel, Jordan. No, I knew I was going to push a button on that one. I do want to push back a little bit, though. I agree with the merits of what you're talking about with that study, but in lots of conversations that we have here at World Wide Technology with clients that come through our doors, or if we're visiting them in their offices, they do struggle to get AI projects out of the POC phase. I don't know anybody that's saying it has to work in six months or we're going to trash it, but the fact remains it's hard to do these things. Where are you seeing some of the gaps? What are some of the challenges that you think are truly, maybe not plaguing, but are really having a big impact on getting those POCs out the door to production for enterprise orgs?

Speaker 2:

Companies are fine investing their dollars in the software, in the tooling, in the licenses, right. Yeah, companies are fine, you know, spending seven, eight figures, you know, on Microsoft Copilot seats, right. They're fine with that. But you can't just give someone access to a piece of technology that hardly anyone understands, right? I think that's important to look at. Even the people who create and work on large language models admit that there is still a certain level of black box, right; they don't fully understand everything. So if the people that are actually creating the technology still admit that there are things that many of them don't understand, and you just hand off a literal generational piece of technology to hundreds or thousands or tens of thousands of employees, and you don't train them, that's why it doesn't always work right away, or why it works in smaller pockets or for certain groups and not other groups, right. Obviously, you know, the technology excels in certain verticals or in certain types of work more so than others. But it comes down to learning, education, training, and, for whatever reason, companies don't want to invest in that. And I think part of it is, you know, companies have FOMO and they just expect, like, oh well, these models are so smart, you know, just talk to them and then it's going to go, right.

Speaker 2:

I've talked to people at large organizations whose only job is to, you know, keep up with, you know, Microsoft Copilot or whatever it is. Enterprise organizations need large training teams internally, right. Like what I do every day: I spend, depending on the day, six to 12 hours learning about, experimenting with and sandboxing AI. Companies need that, but they need teams of people who are doing that internally.

Speaker 2:

All of these new updates, whatever you're sandboxing, your QA, whatever your next pilot is, you need people constantly working on that. But you have to stay up to date every single day, because, literally, I mean, the technology, we're talking ChatGPT, just drastically changed yesterday. They had a small little thing with, essentially, the juice in how much a model thinks, and it completely changes things, right. If your organization has ChatGPT Enterprise and, you know, you had everything up and running and working and you had these projects and these GPTs and everything was working great, well, now the models are completely different. So maybe you had something kind of overfitted or underfitted and now you have to go adjust. But you have to understand what's happening on a daily basis and be able to make those changes that move the needle for your company.

Speaker 1:

Yeah, I want to stick with the adoption part. We have a couple other topics that I want to get to, but, you know, one of the additional items, I think it's actually in the MIT study, to get out of that 95% and bucketed into the 5% that are really succeeding within that six-month period, I think one of them was executive sponsorship or something of that flavor. You've talked to a lot of executives on your show, Everyday AI. What do you think about executive sponsorship as it relates to driving AI down through an organization? And, more so, maybe, as you're talking to executives, how can you tell that they're excited or going to be able to effectively get people to buy in to AI and use it, and use it a lot?

Speaker 2:

That's a great question and, if I can, let me give a very, like, WWT answer. You know, going back, I was able to talk to, you know, your CEO a couple of years ago, and something he said then, I mean, talk about, you know, executive sponsorship. He had a great nugget in our conversation where he essentially said, you know, meeting with his executive team, hey, if it's not AI first, AI native, don't bring it to me, right? And this was two years ago, you know. So if you want to talk about, you know, leadership championing a cause, I think something like that is a prime example. But I think that approach can also sometimes get twisted and misconstrued. You obviously need it, right. Nothing in an organization, especially an enterprise organization, can go and move forward and pick up steam and succeed and show a positive ROI unless you have your top leadership getting behind the cause. You can't go against them. But I think sometimes that turns into a crutch, and what I mean by that is there's a fine line between something like executive sponsorship and top-down leadership, right. You also can't just say randomly one day, and I've seen this happen all the time, hey, team of 5,000 employees, we're switching from Copilot to Gemini next quarter because Gemini's models are better. Final answer. And, yeah, we also need to increase productivity by 30% and reduce headcount by 10%. Right, you can't do that, and unfortunately, a lot of organizations are doing that top-down approach, and there's a lot of reasons why it doesn't work. But you have to go back to people management. This is change management.

Speaker 2:

AI implementation is not a technical project, it's a people project. And you kind of touched on it earlier, Brian: as these models, and we're going to be talking about agents more, as these things get more and more capable, in organizations that still haven't fully implemented AI from top to bottom, the employees are scared. Let's be honest, right? So if you get a top-down kind of edict from a leader and you're like, oh gosh, it looks like this tool or this AI or this agent can kind of do a lot of my job, right, you literally have self-saboteurs all over the place. It is way more common than people think, especially in enterprise organizations, that teams essentially just kick the can and don't fully implement an AI project the way that they should or could, because they're like, well, this is going to replace a big part of what my team does.

Speaker 2:

So, well, if we never, you know, integrate it to begin with, we don't have to worry about it. And I think that's, you know, an overlooked cause of a lot of, you know, whether you call it a pilot or an implementation, failing when it's top-down. So, yeah, I think you have to find that nice balance between, you know, the executive sponsorship and not falling into that, you know, top-down directive, because people do feel threatened by that.

Speaker 1:

Yeah, well, you'd mentioned agents. Shockingly, that was the first time I think we had mentioned agents, and we're, what, 40 minutes in, maybe? 2025 was called the year of the agent many months ago. Agents have a ton of buzz around them. I'm a little tired of hearing about, like, oh, it can go and, um, book my flight travel, and then it can get my restaurant, and then it can get my hotel. Um, give me something a little bit more tangible for a business. What's the immediate use case and usage for agents, um, in the enterprise setting, and where do you think it'll go, um, in the near-to-mid future?

Speaker 2:

Sure. Um, I think even the word agent is very confusing, right? Because when you talk about capabilities, I think, like, you have to separate, uh, and also understand the difference between a large language model, an AI-powered workflow, an agentic model and an actual agent, right. So an agent is a piece of technology powered by at least one, or sometimes a dozen, different AI models, and sometimes there are sub-agents as well, and you essentially give the agent a goal, right, and the agent will build its own path. It'll build its own vehicle and it'll figure out how to get there.

Speaker 2:

Right, it has tool use. It has access to the internet. Many agents have virtual browsers. They have terminal access, right. So, literally, the same type of tooling that a human would have sitting in front of the computer, right? Yeah, it can go to a website. You can log it in and save your credentials. It can have access, uh, to your dynamic, uh, you know, company data. Um, so where you can find value right now in agents, you have to know the right agent for the right tool, uh, right. It's great timing, because I actually went over the top 10 AI agents on today's show. So this is one of those areas where it's like, okay, some of the startups are actually a little farther ahead, and, again, I think you have to be cautious in not putting too much of your day-to-day operations inside of agents. The exception to that rule, I think, is Microsoft Copilot Studio. It tiptoes the line between AI-powered automations powered by agentic models and actual, true end-to-end agents. There's obviously both, but I think some of the utility is more of using agentic models and building these AI-powered workflows. So, technically, agentic workflows, but, yeah, not booking your travel.
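To make the "give it a goal and it builds its own path" distinction concrete, here is a minimal, purely illustrative sketch of the goal-plan-act-observe loop that separates an agent from a single prompt. The planner and tools below are hard-coded stand-ins for a real model call and real integrations, not any vendor's API, and the names are hypothetical.

```python
# A minimal sketch of an agent loop: a planner chooses a tool, the result is fed
# back as an observation, and the loop repeats until the goal is met or a step
# budget runs out. The "planner" is a hard-coded stub standing in for a model.
from typing import Callable

def search_web(query: str) -> str:
    return f"(pretend search results for: {query})"

def read_file(path: str) -> str:
    return f"(pretend contents of {path})"

TOOLS: dict[str, Callable[[str], str]] = {
    "search_web": search_web,
    "read_file": read_file,
}

def stub_planner(goal: str, history: list[str]) -> tuple[str, str]:
    """Stand-in for an agentic model: picks the next tool call, or 'finish'."""
    if not history:
        return "search_web", goal
    if len(history) == 1:
        return "read_file", "notes/competitors.txt"
    return "finish", "Summary written from the gathered observations."

def run_agent(goal: str, max_steps: int = 5) -> str:
    history: list[str] = []
    for _ in range(max_steps):
        action, arg = stub_planner(goal, history)
        if action == "finish":
            return arg
        observation = TOOLS[action](arg)  # execute the chosen tool
        history.append(observation)       # feed the result back to the planner
    return "Stopped: step budget exhausted."

print(run_agent("Compare our pricing page against top competitors"))
```

In a real agent the stub planner is replaced by a reasoning model and the tools by browsers, terminals or data connectors, but the loop structure is the same; the step budget is the kind of guardrail that keeps the "cautious about day-to-day operations" advice practical.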

Speaker 2:

Number one, I think one of the best things that enterprise companies can do right now is marrying their personal, or, sorry, marrying their company's data with research and personalized reports for their people, right. Because, regardless of what your role is, I think that's something that we all do, and it's something that an agent does way better, way faster than humans. I have scheduled agents that, every day, look at my company's data, they look at what's, you know, this is very relevant for me, obviously, because I'm following daily news, right? So they take my company's data, they go out, they research all of the day's news, they filter it through my company's data, they personalize it and give it to me in report form. So that's something that, before all of this, I would usually have to spend three or four hours on, and now it's scheduled and it just shows up and I just read it, right?

Speaker 2:

So I think that's a very safe way to do it, especially when a lot of that, you know, public, or, sorry, company data can be public data as well. So you don't even, you know, sometimes, especially if you're looking at an AI startup, you're like, I don't know if our organization should be using, you know, this AI agent startup that we haven't heard of and connecting our data. Well, okay, start siphoning out, see what part of your company data is public, right?

Speaker 2:

A lot of, especially larger, enterprises don't always realize, well, if you're a public company, so much of your company's data is already public, from earnings calls, from financial filings that are required here in the US, right? So I think that there's so much you can do just in that one use case that just about any knowledge worker can find immediate benefits from. And maybe it's not something you're already doing, but you're like, man, it's almost like if you could hire a personal research assistant to go out, you know, look at your competitors, look at market positioning, look at new services and technology, you know, around your work, and then personalize it for you, your job, your role, your company, and it's ready to go how you want to read it, in the format you like, that works best for you. I mean, that's a great, easy use case, much better than, you know, booking your next, you know, flight for the next conference.
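For anyone who wants to picture the scheduling side of that daily-report use case, here is a hedged sketch using the third-party Python `schedule` package (pip install schedule). Every helper function below is a hypothetical placeholder standing in for real data sources and a real model call, not an existing API.

```python
# A sketch of a scheduled research-report job: gather public signals, filter them
# against company context, write a personalized brief, on a daily timer.
# The helpers are placeholders; a real version would call your chosen model,
# your data connectors and your delivery channel (email, Teams, Slack, etc.).
import time
import schedule

def gather_public_signals() -> list[str]:
    # Placeholder: pull earnings calls, filings, press releases, industry news.
    return ["Competitor X announced a new agent platform."]

def filter_against_company_context(items: list[str]) -> list[str]:
    # Placeholder: keep only items relevant to your products, markets and roles.
    return [item for item in items if "agent" in item.lower()]

def write_personalized_report(items: list[str]) -> str:
    # Placeholder: this is where a model would summarize in your preferred format.
    return "Daily brief:\n- " + "\n- ".join(items)

def daily_research_job() -> None:
    signals = gather_public_signals()
    relevant = filter_against_company_context(signals)
    print(write_personalized_report(relevant))  # or send it wherever you read it

# Run the job every morning; in production this might live in a workflow tool
# or an agent platform rather than a long-running script.
schedule.every().day.at("07:00").do(daily_research_job)

while True:
    schedule.run_pending()
    time.sleep(60)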

Speaker 1:

Let's see, I want to go back to training. I know you don't like the word upskilling or things of that nature, but let's just say, you know, for a majority of listeners that are perhaps out there right now, they've experimented, or they're using AI on a relatively regular basis, but they're not using a lot of tools. Let's say they just use ChatGPT, or they're just using Copilot. Maybe they've experienced a little bit of a stall in terms of, like, you know, how it can provide breakthroughs in their day-to-day. What can they do to break through that stall phase? Is it just continued tinkering? Is it trying something new? Maybe a little bit beyond your, you know, your unlearning answer, but, like, what can they do to reach the next level? Because inevitably there will be a stall.

Speaker 2:

It's a great question. One thing that I see hardly anyone doing, so stick with me, podcast audience, I'm going to try to explain this. Let's look at the two most popular models in ChatGPT and Google Gemini. So, by default, these models now, they're reasoning models, technically hybrid, right. But when you ask a question, or if you are doing something a little more complex, sharing your company's data, working on a project, iterating, right, there's something called a summarized chain of thought, okay. And the same thing with, like, ChatGPT agent mode, right, we talk about traceability, observability, et cetera.

Speaker 2:

Take your time and look at that, right. People don't know you can click it. It's not like a big button, right, but you can see how a model reacts to the information that you give it, right. So if I dump, I'll just use an example for me, if I dump my podcast download stats and my email open rates, and huge, which I've done all the time, spreadsheets with hundreds of thousands of data points, and then I also have the model go do research based on what it finds, it might take 15, 20 minutes to get a response. So what do I do with that? How do I get better? What if it's not ideal? Click that little, you know, it usually says, like, thought for, you know, 30 seconds, thought for four minutes, right? That's a summarized chain of thought. Um, so you can see how a large language model takes on a task. This is as if I were to give, you know, Brian, if I were to give you the same assignment, this would be me sitting over your shoulder taking notes exactly on what you're doing. Okay, first Brian is opening up the spreadsheet, then he's scrolling through, and, oh, it looks like he is adding a filter in this row in order to sort only cells that contain the word podcast, whatever, right. Um, so if you really want to get past a roadblock, or if you want to get to a breakthrough within your enterprise when it comes to using, especially if you're using kind of, like, you know, these off-the-shelf consumer models, you know, ChatGPT, you know, Gemini, Claude, Copilot, et cetera.

Speaker 2:

Use that chain of thought, and also keep in mind large language models are generative, they are not deterministic. So, again, if you run that exact same prompt 10 different times, you might get nine wildly different answers. You might get two, you might get, you know, a variation of three different things. But read the chain of thought, because that's going to tell you what you should have done differently to begin with, because you're going to see where the model, like, if I was watching over your shoulder, Brian, and I'm like, oh man, you filtered Z to A, you actually should have, uh, written a formula in that cell first, before you did that. So now I know, next time, right, I need to, uh, put that in my instructions, in my prompt to that large language model. So I think that's just a simple cheat code that's available for anyone using these systems. That, again, I don't know anyone else, maybe I'm too dorky, but I think it's such a cheat code that's sitting right there in front of us.
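As a small illustration of that "generative, not deterministic" point, the sketch below sends the identical prompt a few times through the official OpenAI Python SDK and compares the answers. It assumes an OPENAI_API_KEY is set in the environment, and the model name is just an example; reading the summarized reasoning happens in the product UI, not in this script.

```python
# Send the same prompt several times and compare the responses to see the
# variation Jordan describes. Requires the `openai` package (pip install openai)
# and an OPENAI_API_KEY environment variable; the model name is an example only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
PROMPT = "In one sentence, suggest a theme for next quarter's sales kickoff."

answers = []
for run in range(3):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name; use whatever you have access to
        messages=[{"role": "user", "content": PROMPT}],
    )
    answers.append(response.choices[0].message.content)

for i, answer in enumerate(answers, start=1):
    print(f"Run {i}: {answer}")

# Expect the answers to differ across runs; that variability is why reading how
# the model approached the task each time is more useful than re-rolling blindly.
print("Identical answers every run?", len(set(answers)) == 1)
```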

Speaker 1:

Yeah, you brought that up on your podcast a long time ago, and it's a foundational element that works almost every single time. It helps you understand how the model is working and it helps you write better prompts. You kind of know how to train it, how to coach it, how to, you know, respond to whatever it's giving. So that's an excellent, excellent piece of advice that I would give to anybody as well. I know we're coming up on time here. I do want to ask you, just before we leave: you've always been someone, dating back to our days at the Daily Egyptian (yeah, that's a newspaper down in Southern Illinois when we were teenagers), you've always been somebody who's been looking ahead to what's coming next, and for the most part, you've been right about a lot of these things. You're obviously well plugged in with AI. Take us out as far as you can in terms of what's coming down the line in the future and how that's going to change how we work.

Speaker 2:

Yeah, I think agentic browsers are going to be more impactful in the short run than general-use agents, right? So an easy example of that, right: ChatGPT's agent. I mean, you have to, like, they're a serious player, right? They have 700 million weekly active users. Technically, if they ever go public, they'd be one of the biggest companies in the world. So you can't ignore, you know, their technology, and even though their agent right now isn't the best, they're reportedly going to be releasing an agentic browser soon, right. Perplexity has their version, Comet. Google has been testing, you know, and I've been using it, it's really good, their Project Mariner. So I'll say, and we kind of talked about this, I think it was Atlassian that acquired Dia. So a literal enterprise company acquired an agentic browser to use it, right, for their company. So I think agentic browsers are going to have a more short-term impact than agents. But agents aren't going anywhere, right. I said two years ago, agents: too soon. Last year I said agents: not yet. Even right now I'm like, agents, they're getting there. But if you want to get a taste of the technology, use an agentic browser. OpenAI should be releasing theirs, reportedly, anytime this fall. That's one thing.

Speaker 2:

The other thing, looking forward, one thing that I think business leaders are always grappling with, and sometimes individual employees, is what happens if AI works, right. And I'm telling you, like, if you invest right. So, if you're out there listening, if you're a decision maker, if you're, you know, the leader of a large enterprise organization, if you're not finding positive ROI on gen AI, I can almost guarantee it's because you're not training and educating your people. Once you do that, you will see great gains, right. All of the studies, right, a very, you know, in-depth and good study from McKinsey Digital said up to, you know, 60 to 70% of day-to-day manual knowledge work tasks can be automated by generative AI. So you might be thinking, like, okay, well, what do we do next?

Speaker 2:

So, without getting too dorky here, but there's been a lot of talk about, like, are large language models hitting a wall, right, and we talk about scaling. I think one of the issues, and what you have to look at, is the quality of data, right. Everyone's just been scraping the same internet. We'll save that conversation for another day, on what happens ultimately with that. But everyone's based their models on the exact same data. It's the internet. It's everything that's ever been out there. It's YouTube videos, everything else, right.

Speaker 2:

What about your company, right? As we make this shift from transformers, I think the AI labs are doing better on pre-training, making sure that just the best information goes into their models, not the entirety of every single thing that's ever existed online. But I think what you need to look at is, in the same way that transformer models had the internet, I think reasoning models, and that's what we're working with now, they need your company's IP, right? So what does that look like? Larger organizations, larger enterprises, you know, if they've had AI and ML teams for decades, you don't need to hear this, because you're already in a great place.

Speaker 2:

But for everyone else, um, you need to think, what is my company's IP? What makes my company special, my department? The other thing you have to talk about is, like, we're actually facing some huge crises, right? Uh, between recent college graduates not being able to get a job and being woefully under-trained on AI because colleges banned it. So now we're going through this crisis where large enterprise organizations can't find skilled-enough recent graduates, and we're coming up on this silver tsunami where a lot of organizations are going to be losing literally centuries of subject matter expertise.

Speaker 2:

So when you start getting gains, productivity gains, uh, from generative AI, you're like, okay, well, what do we do now? Can we just cut 20%? Right, if you get 20% gains, you cut 20%? No. Uh, you need to start working on what I call first-company reasoning data, and that is not just the structured data. That's not what I'm talking about. It's the unstructured decision-making from your executives, from Mary, who's been here 35 years and is retiring next year.

Speaker 2:

What are you going to do? Is a large language model going to be able to do what Mary can, who has 35 years of experience? Probably not. So what should you be doing with all of this saved time? You need to start collecting, curating, cleaning and putting to use that data that lives inside of your employees' heads, all of these, um, you know, internal processes. That's reasoning, right. So we need to start collecting that, because these reasoning models, in the same way that, uh, you know, transformer models thrived when we gave them structured company data, we also need to give them unstructured decision-making from our people. We need to give that to these new hybrid models that can think, plan and reason like humans do. So when we talk about the future, agentic browsers, agents, you need to start collecting what makes your company special.
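One possible, purely hypothetical way to start capturing that kind of unstructured decision-making in a curatable form is a simple decision record: not just the outcome, but the context, the options considered and the rationale in the expert's own words. The field names below are illustrative, not a standard schema or anything Jordan prescribes.

```python
# A hypothetical "decision record" shape for first-company reasoning data:
# capture the reasoning behind a decision so it can be curated, cleaned and
# later fed to a model alongside structured company data.
from dataclasses import dataclass, field, asdict
from datetime import date
import json

@dataclass
class DecisionRecord:
    decision: str                  # what was decided
    made_by: str                   # e.g., the retiring subject matter expert
    context: str                   # the situation that prompted the decision
    options_considered: list[str]  # alternatives that were on the table
    rationale: str                 # why this option won, in the expert's words
    outcome: str = ""              # filled in later, so results can be learned from
    recorded_on: date = field(default_factory=date.today)

record = DecisionRecord(
    decision="Kept manual review for orders over $250k",
    made_by="Mary (35 years, operations)",
    context="Automated checks missed two contract exceptions last year.",
    options_considered=["Fully automate", "Manual review above a threshold"],
    rationale="Exceptions cluster in custom contracts; a human catches them faster.",
)

# Serialize to JSON so records can be collected, curated and used downstream.
print(json.dumps(asdict(record), default=str, indent=2))
```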

Speaker 1:

Love it. Well, Jordan, thank you so much for taking the time out of your busy schedule. I know that you are out there learning about new AI products and solutions all the time, every day. And if you don't already, go out and subscribe to his podcast, Everyday AI; you can find it all over the place. Subscribe to his newsletter. It is a game changer for helping you become the next AI expert within your company. Jordan, thanks again, man. We'll have you on here soon. All right, thanks a lot, Brian. It was great talking. Okay, thanks to Jordan Wilson, host of the Everyday AI podcast and someone who's been living inside the AI world long enough to see the patterns most of us are just starting to notice.

Speaker 1:

After this conversation, three key lessons stand out. First, unlearning is just as important as learning. Jordan's point was clear. The people who get the most out of AI aren't blindly offloading work to it. They're breaking old habits, rethinking their workflows and using AI as a collaborative partner.

Speaker 1:

Second, soft skills may end up mattering more than technical titles. Curiosity, communication and persistence are what separate successful AI adopters from stalled ones. The best prompt engineers aren't the most technical people in the room. They're the ones asking better questions. And third, tools don't create transformation. People do. You can buy licenses, roll out models and spin up pilots, but if you don't invest in teaching people how to use them, nothing's going to stick. Training isn't a side project, it's the foundation. If you like this episode of the AI Proving Ground podcast, please consider giving us a rating or a review, and if you're not already, don't forget to subscribe on your favorite podcast platform, or you can always catch additional episodes or related content to this episode on WWT.com. This episode was co-produced by Naz Baker and Cara Kuhn. Our audio and video engineer is John Knobloch. My name is Brian Felt. We'll see you next time.

Podcasts we love

Check out these other fine podcasts recommended by us, not an algorithm.


WWT Research & Insights

World Wide Technology

WWT Partner Spotlight

World Wide Technology

WWT Experts

World Wide Technology

Meet the Chief

World Wide Technology