What's New In Data

Sol Rashidi on Why Most AI Strategies Fail—and What Great Data Leaders Get Right

Striim Season 6 Episode 6

Sol Rashidi has built AI, data, and digital strategies inside some of the world’s biggest companies—and she’s seen the same mistakes play out again and again. In this episode, she unpacks why AI initiatives often stall, how executives misread what “transformation” really requires, and why the future of AI success isn’t technical—it’s cultural. If you think AI is just a tech problem, Sol is here to change your mind.

Follow Sol's work:

What's New In Data is a data thought leadership series hosted by John Kutay, who leads data and products at Striim. What's New In Data hosts industry practitioners to discuss the latest trends, common patterns for real-world data architectures, and analytics success stories.

Speaker 1:

Welcome to What's New in Data. I'm your host, John Kutay. Today I'm joined by Sol Rashidi, former Amazon and Fortune 100 C-suite exec, holder of 10 patents, and best-selling author of Your AI Survival Guide. In this episode, we get real about what actually makes AI projects succeed. Sol shares her lessons from hundreds of enterprise deployments, such as why most companies pick the wrong use cases and why launching isn't the same as scaling. If you're leading AI or data initiatives, this is a conversation you need to hear. Let's dive right in.

Speaker 2:

Hello everybody, thank you for tuning in to this episode of What's New in Data. I'm really excited about our guest. We have Sol Rashidi. Sol, how are you doing today?

Speaker 3:

I am good, John. Thank you for having me on board.

Speaker 2:

Yeah, absolutely. You know, we've been talking about doing this episode for a while now and I've been following all your writing. I'm a big fan of your book. I have both the written and the audio version of it and I love just referencing it and going back, because there's just so much useful advice there. Thank you.

Speaker 2:

We've been talking about this for, I think, almost a year, since we were together with Chris Tabb and Joe Reis and the whole crew there. That was a fun time. I think that was the first time I said, oh, you know, you should come on the podcast, Sol, and you were like, yeah, let's do it. And a year flew by, which is very common these days. A year goes by like a week now, unfortunately so.

Speaker 3:

But where there's a will, there's a way, and we made it happen. So even if it took this long, doesn't matter. We're here together now.

Speaker 2:

Sol tell the listeners about yourself.

Speaker 3:

Sure. Oh boy, my data journey and AI journey were completely haphazard. I always say I was at the wrong place at the wrong time, and in retrospect I said yes to the right thing, but at the time I felt like it was a big, big mistake. I had no direction, I didn't know what I wanted to do, and so down this path we went. I think everyone knows by now that I was a rugby player on the women's national team. I thought I was going to play professional sports my entire life. It turns out as you age, you get hurt, and when you get hurt, recovery just takes a lot longer. So I was like, okay, I'm done sleeping on futons and eating ramen noodles. I've got to be a professional and grow up. I applied for a series of jobs, and the first job that accepted my application was to be a data engineer. And I was a horrible data engineer, because while I could write code, I could not write production-ready code. My team came back to me in a very, very short time period and they were like, listen, we love you, but you're creating more work for us rather than alleviating it. You know our world, but you can't write production-ready code. You seem to do really well with the business, so why don't you go gather requirements, bring them back to us, we'll develop what's needed, then you go back and tell the business what we've developed and why. You just kind of be our translator. And that's just how it started. And, by the way, this was in the late 90s, early 2000s. At first I thought it was a rejection, but that notion of being a translator, of connecting the two worlds between functional and technical, bridging the gap, turned out to be a massive asset, because there were very few of us who could actually do it, and my career has just grown from there.
And so the odd thing was, even after data engineering, I was so good at sales that I became the VP of business development and sales for a boutique agency selling staff augmentation and resources for technical projects. So it was weird. I went from being a data engineer to being in business development, and it was then that we were trying to sell a big deal to Ingram Micro. I'll never forget this.

Speaker 3:

It was the early 2000s, and we were up against Accenture, before they rebranded themselves, because this was pre-Enron days or around the Enron days, and MDM started becoming all the rage. But we were losing the deal to Accenture, because the CIO, who my CEO had a relationship with, said, listen, we've got to give it to the big guys. You're a small, no-name company, and unless you can come back to us with something that's completely differentiating, we're going with the big dogs. My CEO sent me to Texas for three months to go get SAP MDM certified, and I was like, this makes no sense. Long story short, lots of competition, very technical tests, and I came out one of 11 people globally who were MDM certified. We won the deal.

Speaker 3:

I started leading MDM teams, and IBM recruited me, and, like everything else, the rest is history. I went from being a rugby player to a data engineer, to getting into sales and project management, then to becoming this really tactical MDM lead on these massive SAP ERP deployments. Then I got to grow and manage a team, grow a practice, manage a P&L. And then in 2011, Watson went to market and I got to be a part of that story, so I helped launch Watson. From there, I went to Ernst & Young as a partner, served as CDO, as CDAO at Merck Pharmaceuticals, and as CAO at Estée Lauder, then went independent. Now I'm with Amazon. And yeah, I love the startup space, and fast forward, here I am. But I still ask myself every day, what do I want to be when I grow up? It's still a big question in my mind.

Speaker 2:

Yeah, and I love the whole career journey, because being a data engineer or software engineer in the 1990s was so much different than how it works now. Everything back then was tied up in all these proprietary tools, and everything was like a secret. How would you become a software engineer? You had to go look in textbooks, read proprietary industry reports, and somehow get access to this knowledge. So going fresh from rugby player to engineer almost seems like an impossible feat back then. Now, anytime you run into a blocker or a question, you can just ask ChatGPT or something like that. But it's so cool that it opened a door for you to get into the requirements side, not just how should we build this, but what should we build and why.

Speaker 3:

And that's huge. And you brought up actually a really good point, John, and I think it's something everyone's got to question today. That leap from rugby to data engineer was a massive leap, but differentiating yourself was actually, frankly, a bit easier back then, because the world of data engineering back then was one big tabular format. Everything was structured, data warehouses, and there were classic ways of doing things: metadata, tagging, cataloging, aggregating the data, putting it into a cube. Yes, I'm that old, I remember the cube constructs. You followed a protocol, and that's how every corporation and enterprise was doing it.

Speaker 3:

But right now, I mean, it's amazing and it's intense and it's astronomical, the different options that you have, and I always say you're only as faithful as your options. It's a New York City saying. You're not stuck doing one thing. There are multiple ways to approach a problem, and so the optionality in and of itself is a struggle, I would say that's the first thing. The second is, how do you differentiate yourself in your career, knowing that everyone has access and is one prompt away from understanding an industry, a sector, an approach? Now, whether they have the level of depth to be able to follow through on what they've researched is a different story, but at a superficial level, knowledge has been democratized, so everyone has access. So when I'm working with individuals, mentoring them, advising them or whatnot, it's really around: how do you differentiate yourself when everyone has access to the same things?

Speaker 2:

Yeah, I love the way you describe that, where knowledge is really democratized now. This is one of the things people try to differentiate even when they just read something on LinkedIn or X or Bluesky: making sure it's coming from an expert with experience, someone who's been there and done it before, or whether it's just someone who kind of ChatGPT'd it and said, hey, write something credible for me.

Speaker 2:

And at a superficial level, like you said, there's just so much of that coming out now, because at a surface level people can kind of say the right things. Which is why having the learned experience, the real-world experience, where your knowledge is actually empirical and acquired over the years, matters. And I only say this in the context of trying to speak to others and teach others, right, because I think it's great that anyone can learn something and do it, and that's one of the amazing things we have out of AI. And it's one of the topics I want to get into with you: you wrote this incredible book, Your AI Survival Guide, and the way you put your thoughts together is really incredible to me. But I want to ask you, what gap in the market did you want to address with that book, and what is it about?

Speaker 3:

It's interesting. I was like, why me? Why would anyone want to listen to what I have to say? And if you look at my career, I got into the social media thing late. I started writing on LinkedIn late. I would even say I wrote the book a bit late. But I got a lot of encouragement from folks around me. The reason I got into social media very, very late is because I was really busy doing the work, not talking about the work, and I have never struggled with imposter syndrome, because I'm not one to read and regurgitate and take shortcuts.

Speaker 3:

I go dig into the details, and most of my employees in the past could attest that it probably fatigued them or bothered them that I would ask so many questions. But I was like, I can't understand what you're trying to tell me or what you're asking of me unless I can understand a day in the life in your shoes. Meaning, once I understand what you're going through and I have that understanding, that compassion, that empathy for why you're asking me what you're asking me, then I trust you and I'm able to move on. But unless I know what you're going through day to day, I won't be able to help you as well. And so I will ask questions, and I will ask questions, and I will ask questions. But again, I started this LinkedIn stuff, the book stuff, all in 2024. I am really, really late to the game, but it has served me well. Because I was so busy doing the work and not talking about the work, and because I have over 200 deployments under my belt, I have the post-mortem notes of every deployment I've ever done.

Speaker 3:

Writing the book was actually the easiest exercise for me, and the encouragement I got in writing it was: listen, Sol, most people are going through this for the first time. You've done this countless times. You should share what you know to help others avoid the same mistakes you've made. And I was like, you're right. It's really a book of assumptions I made that didn't hold true, mistakes I made that went awry, areas where we needed to course correct, and where, along the line, folks should really consider how to de-risk their AI deployments. And so it's kind of funny.

Speaker 3:

It's not a philosophical book by any means. It's a complete how-to, a practical book of all the things to avoid that I learned in those 13 years and those 200-plus deployments. At first I thought it was meant for non-technical mid-level to executive-level individuals, but I've gotten a lot of pings on LinkedIn from MLOps folks, data engineers, data scientists, machine learning experts saying, you know, I'm the founder and I need to scale this. Or, I'm responsible and on the committee and I need to deploy this, and I don't know how to push back, even though instinctually I know what's going to go wrong, but you've given me the business terms to use. So it's been a blessing, but it's really just to help others avoid making the mistakes I did over the past 13 years.

Speaker 2:

The book is really uniquely valuable because one of the things I've noticed, when you give talks and from all your experience and all the things you've accomplished, is this instinct for cutting through the fluff and getting to the core of what actually makes these ambitious data and AI initiatives work in the business. That's why I really love reading your book and enjoy your talks; there's this sort of essence to them. Some people might be a PhD talking about a thing they've researched for 20 years and are an expert at. And then, for example, we mentioned Joe Reis, and he and Matthew Housley talk about it at a fundamental level of data engineering and pipelines and data infrastructure.

Speaker 2:

Your view includes all of that, but it also gets to this practical element of: we have to make it work. Companies can build anything, right, if they throw time and money at it, but how do they build stuff that's actually successful? There are what felt to me like unspoken truths about it, that some people are just skillful and able to do it, and I think that's communicated in your book really, really well. It's sort of a roadmap for people who might have the technical skills or some other hard skills and want to develop the soft skills: okay, I can build all this great stuff, how do I get the buy-in, and how do I make this something where an executive will give me budget to do it and prove that it's successful? So one of the things I want to ask you is: what's the biggest unspoken truth that AI leaders aren't acknowledging when building these ambitious projects?

Speaker 3:

That is such a good question, and I'm very much known for my transparency, sometimes, you know, to the chagrin of the leaders I've reported into. I would say the first one is: not everything needs to be solved by AI. Why use a chainsaw if scissors do the trick? We've got a lot of amazing technological advancements, quite frankly, that have been developed over the past 20, 30 years that work. Orchestration models, just as an example, or robotic process automation, right? RPA isn't new. Now we're like, well, no, it's intelligence-based now, and that's what the new wave of automation is about. I'm like, okay, but quite frankly, most of the use cases, most of the business processes that need to be automated, actually don't embed intelligence into them. A massive SQL job with a major decision tree actually does the job just as well, and it's coded, static, and unless your business processes change quite frequently, this works. And I hate to say it, but most enterprises don't change their business processes frequently. I only share this because I think the biggest unspoken truth is that not everything needs AI. That's the first one. The second one is about everyone chasing foundational models and LLMs and fine-tuning and RAG.
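As an aside, the static decision tree Sol describes can be sketched as a plain rules-based routine. This hypothetical invoice-routing example (names and thresholds invented for illustration, not from the episode) shows how far coded, static logic goes without any model:

```python
# Hypothetical sketch of Sol's point: many automation use cases are a
# static decision tree over structured fields -- no AI model required.

def route_invoice(amount: float, vendor_approved: bool, has_po: bool) -> str:
    """Route an invoice using fixed, coded business rules."""
    if not vendor_approved:
        return "hold_for_vendor_review"   # unknown vendor: stop here
    if has_po and amount <= 10_000:
        return "auto_approve"             # straight-through processing
    if has_po:
        return "manager_approval"         # large but documented spend
    return "manual_review"                # no purchase order: a human looks

print(route_invoice(500.0, vendor_approved=True, has_po=True))      # auto_approve
print(route_invoice(50_000.0, vendor_approved=True, has_po=False))  # manual_review
```

Unless the rules themselves change frequently, logic like this stays cheap, auditable, and deterministic, which is the heart of the chainsaw-versus-scissors argument.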

Speaker 3:

One thing I always ask people to reconsider is: when you're past figuring out your prototype and your proof of concept and you're ready to push it into production, pushing it into production does not mean you're ready to scale. It just means you've started the real work. Because in the grand scheme of things, in the total deployment schedule, 30% of it, quite frankly, is figuring through the technical components, the data components, the architecture components, the accuracy-threshold components, user flow, workflow, et cetera. The remaining 70% comes down to: how am I going to get users to use it, adopt it, and weave it into their day-to-day without crippling the intellectual capacity of my workforce?

Speaker 3:

No one's talking about that. That is 70 percent of the work, which is why we have these amazing capabilities thrown into production and people think that's scaling. No, production does not equal scaling. Adoption equals scaling, and folks aren't there yet, so they're not talking about it. But that's going to be a major issue. The big chunk is after the fact. So I always say that when deployments go wrong, it's not because of bad models, it's because of bad integration, and unfortunately I am seeing a lot of that right now.

Speaker 2:

Yeah, and that's such a great comment, because everything sounds amazing in design phases or architecture review boards and approvals. You have to defend what you're going to build, right, and that's when people sort of get into the practice of over-promising. Then you go to production and either it's too slow, or it crashes, or there are other issues, and people say, yeah, it works, but it's not that useful. And like you said, that's when the journey starts, right? That's where leaders have to understand: just because you launched it doesn't mean you've scaled yet. That's step one.

Speaker 3:

Yes, now you're kind of crawling, right? You've opened the door and you're peeking around the corner. The real stepping into the opportunity is wide-scale adoption, not a deployment within a specific function amongst a specific set of individuals. And that's partly what the book covers and partly what I'm evangelizing, because I think the scale is going to come from the adoption and the integration. But how do you do it? And this is kind of my next business purpose, if you will: how do we leverage artificial intelligence for good at the individual level, at the company level, and then at the community level? What I mean by that is, as we lean heavily into automation, augmentation, and autonomous agents, how do we still empower our workforce and our individuals and their cognitive abilities, their intellectual strengths, their ability to solve problems? Because the original intent was always: let's build automation and leverage autonomous agents to free up capacity and bandwidth. Some are translating this as displacing the workforce. Well, no wonder there's mistrust, and no wonder people aren't going to adopt it. But what I want everyone to understand is that it's not a game of displacing the workforce. It's a game of: how do we create the most value with minimal dependency on manual labor, and then how do we reallocate the workforce to work on problems that continue to plague the company? So let me give you a basic example.

Speaker 3:

About a year and a half ago, I helped a company deploy some autonomous agents and automation in a variety of functions, and my ask to them was, I said, here's how much I'm going to charge you for your strategy and to help you deploy.

Speaker 3:

I will work with any vendor you choose; that's not an issue. But if I build 17.6 to 18.5% additional capacity with this team of 15 individuals, here is your promise to me, your payment to me, because you paying me for the strategy work is optional. What I am asking for is: do not let go of any of those 15 people. Instead, get those 15 people into a room to discuss the next business problem that you want solved and why you haven't been able to solve it the past year or two or three or four. Because we were meant to redirect our abilities to problem solving, not to reading, regurgitating, copying and pasting, and doing mundane, repetitive tasks. So my ask to them, legitimately, was that with this additional capacity and bandwidth, the intent isn't to shrink this team from 15 to 12 or 11. It's to reallocate these 15 individuals to solve a business problem that's been plaguing you for a really long time, and get them in a room to do it. And that's your payback to me. And we're just not thinking of it that way with AI deployments.

Speaker 2:

Yeah. I shouldn't overgeneralize, but you see this obsession with the hard costs, and executives really need to think beyond those. Like you said, taking that room of 15 people, rather than cutting headcount there, how do you get more output? And if we were having this conversation, how would you measure that? How would you advise the company to best measure the increased output they'd get from deploying AI?

Speaker 3:

This is where you've got to go through some of the nitty-gritty details that most people don't like, or short-circuit. In the past, we used to do what we call day-in-the-life measurements. I would interview a team: what are the top five questions you get in an email or in conversation? And then we would go through a day in the life: how do you answer this question? Emails, systems, processes, Slack or Teams messages. How many people would you interact with? Who are you waiting for? How long does it take to answer that question?

Speaker 3:

And then we would do this across a series of questions, like three to five questions, depending on how much time we had, to understand how many people are involved, how long it takes, and whether it's really intranet systems and workflows or sneakernet: Slacks, emails, phone calls. How long does it take to answer a business question that, quite frankly, this team should know? We would go through and measure productivity, efficacy, efficiency, duration; we'd just go through a series of measurements. And then we would say, okay, this is our before benchmark. Now the question is, how much of this can we improve after we introduce all these amazing promises? And then we would measure it again.
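A day-in-the-life benchmark like the one Sol describes could be tallied with something as simple as the sketch below. The field names and numbers are invented placeholders, not her actual instrument:

```python
from statistics import mean

# For each sampled business question we record who was involved and how
# long the answer took, then compare a before and an after snapshot.

def summarize(observations):
    """observations: list of dicts with 'people' and 'minutes' keys."""
    return {
        "avg_people": mean(o["people"] for o in observations),
        "avg_minutes": mean(o["minutes"] for o in observations),
    }

def time_saved_pct(before, after):
    """Percent reduction in average time-to-answer after the deployment."""
    b, a = summarize(before), summarize(after)
    return round(100 * (b["avg_minutes"] - a["avg_minutes"]) / b["avg_minutes"], 1)

before = [{"people": 4, "minutes": 180}, {"people": 3, "minutes": 120}]
after  = [{"people": 2, "minutes": 60},  {"people": 1, "minutes": 30}]
print(time_saved_pct(before, after))  # 70.0
```

The point of the before snapshot is exactly what she says: without it, there is no baseline to claim improvement against.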

Speaker 3:

But here's where the trick lies. Most people are very much pro-AI: let's go do this, it's going to solve world hunger, right? There's this big, big, massive push to doing it. I've actually had, and I don't know if this is stupidity or guts, to tell a few different businesses: actually, the way you're doing it now, although it's not efficient, is a hell of a lot cheaper, so your return on investment is better through the manual labor than through deploying AI. And they're like, wait, what do you mean? I'm like, it's not just about cost, but at this point in time, if I'm looking at a six-year evaluation of your return, with what it's going to cost across models, workloads, GPU, CPU, orchestration, data infrastructure, DevOps, your CI/CD pipelines, and everyone that's going to need to be involved in actually automating this process, you're not going to make back your money for another six to seven years, whereas if you do it the same way right now.

Speaker 3:

It's a small team, quite frankly. It's lean. Not only is it more efficient at this point in time, because it's going to take us about 19 months to deploy this, considering the maturity of your existing environment, but it's more cost-effective. So, if you're okay waiting six to seven years to actually make back your money, let's go ahead and do it. But if you're not, keep doing things the way you are right now. And that's the conversation no one's having: actually suggesting that if you expect results next year, I wouldn't expect them. So I've always outlined: you can expect results in three years, and here's what it's going to be. Or: actually, this one's going to take six to seven years, because you don't actually have a process. You have something that's been stitched together by a group of individuals who have just been doing this for a long time. We have to create the process before we can automate the process, and then we've got to teach an entire workforce about this new process we've created.
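The six-to-seven-year payback Sol warns about is simple arithmetic once you total the run costs. Here is a back-of-the-envelope sketch with entirely made-up figures:

```python
# If the AI deployment's net annual savings are small relative to the
# build cost, the payback period stretches out -- sometimes past the
# point where keeping the manual process is the better investment.

def payback_years(build_cost, annual_run_cost, annual_labor_saved):
    """Years until cumulative net savings cover the build cost."""
    net_annual = annual_labor_saved - annual_run_cost
    if net_annual <= 0:
        return None  # never pays back: keep the manual process
    return build_cost / net_annual

years = payback_years(build_cost=2_000_000,
                      annual_run_cost=400_000,   # models, GPUs, DevOps, pipelines
                      annual_labor_saved=700_000)
print(round(years, 1))  # 6.7
```

The interesting branch is the `None` case: when run costs eat the labor savings, the manual process wins on pure return, which is exactly the conversation she says no one is having.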

Speaker 2:

Yeah, and this is one of those common examples of moving slow to move fast, right? I think good executives understand that getting real efficiencies within large businesses does require this sort of long-term thinking. And through that, and you've outlined so much of this in your book as well, it's also about finding these opportunities within the company: what's a real problem we can solve, and are we ready to solve it? We don't have to go through the whole thing, because it's in your book, but I wanted to at least ask you at a high level about the readiness assessment that you have for executives.

Speaker 3:

Yeah, that was something I created right after IBM. I had done enough proofs of concept and prototypes and pushed a few things into production, and part of it was realizing that AI deployments weren't really dependent on the technology, but on the maturity and the readiness of the organization. The readiness assessment goes through five key areas, and you ask a series of questions: would you say your workforce is A, B, C, D, or E? Would you say they know X, Y, Z, et cetera, et cetera. And then, depending on the score, it'll actually let you know how ready you are for a deployment.
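In the spirit of that assessment, a five-area, A-to-E questionnaire could roll up into a readiness band like this. The area names, weights, and bands below are my assumptions for illustration, not the book's actual rubric:

```python
# Each of five areas is graded A (strong) to E (weak); the total maps
# to a readiness band. All names and thresholds here are illustrative.

SCALE = {"A": 5, "B": 4, "C": 3, "D": 2, "E": 1}

def readiness_band(answers: dict) -> str:
    """answers maps an area name (e.g. 'workforce') to a grade 'A'..'E'."""
    pct = sum(SCALE[g] for g in answers.values()) / (5 * len(answers))
    if pct >= 0.8:
        return "ready_to_scale"
    if pct >= 0.6:
        return "pilot_first"
    return "build_foundations"

answers = {"workforce": "B", "data": "C", "process": "B",
           "leadership": "A", "infrastructure": "C"}
print(readiness_band(answers))  # pilot_first
```

The value of a rubric like this is less the arithmetic than forcing the conversation area by area before any vendor is chosen.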

Speaker 3:

Another one I actually developed about a year and a half ago, right after the book was released (oddly enough, I didn't include it in the book), is what I call the discernment framework. Once again, I got back into the business of deployments and strategy and advice and coaching, both in startups and enterprises, and I realized that something that was so obvious to me, something I just followed, wasn't obvious to others. So I created this thing called the AI discernment framework. It's a four-quadrant framework, and it essentially outlines when you should use AI only, when you should use humans only, when you should use AI with the assistance of humans, and when you should primarily lean on your workforce: humans with the assistance of AI. And I have over a hundred use cases in each of those quadrants, so that folks don't overly lean one way and go, oh, AI should lead and humans should assist. No, no, no, it's actually the reverse: humans should be leading and AI should be assisting. And here's why. It was just so obvious to me because, again, I'm too close to the work.
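The four quadrants Sol lists reduce to a small lookup of who leads and who assists. The example use cases below are invented placeholders, not entries from her actual framework:

```python
# Who leads and who assists, per quadrant. The mapping of use cases to
# quadrants is illustrative only.

QUADRANTS = {
    ("ai", None):     "AI only",
    ("ai", "human"):  "AI leads, humans assist",
    ("human", "ai"):  "humans lead, AI assists",
    ("human", None):  "humans only",
}

EXAMPLE_USE_CASES = {
    "bulk document summarization": ("ai", None),
    "fraud-alert triage":          ("ai", "human"),
    "drafting a client proposal":  ("human", "ai"),
    "delivering a layoff notice":  ("human", None),
}

for use_case, key in EXAMPLE_USE_CASES.items():
    print(f"{use_case}: {QUADRANTS[key]}")
```

Writing the quadrant down per use case, before building anything, is what keeps teams from defaulting to "AI leads" everywhere.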

Speaker 3:

But then I realized, well, one of the gotchas is that they picked the wrong use cases. They couldn't have deployed this use case even if they brought in an army of consultants. So the readiness assessment came out of that, and the discernment framework was developed out of picking the right use case. So I always say it's understanding how ready you are and how complicated you can get with this, selecting the right use case based on the criticality-and-complexity framework I have in the book, and then understanding from there what field of play your use case should take. Can it run completely on its own? Does it need a human? And all the different interpretations of that. Those were just some of the gotchas that, again, are just so obvious in my head, but as people are going through this, it's just not obvious to them.

Speaker 2:

And that's the perfect example. You know, when I mentioned earlier in the pod that your perspective is so common-sense and practical about getting business adoption, there are also some misconceptions about the potential value you can get from data and AI, right? Because you see examples where companies will try to adopt, say, a semantic layer before they even have a good internal data model or the right data platform set up, and they'll say, well, the semantic model is invaluable. Well, at your stage, it's not valuable, because you don't have the right foundations. It requires someone to be there and ask the right questions. And that's what I really love about the perspective you bring: it's grounded in, yes, the technicals and what you have to build, but also in whether it's the right thing to build and whether this is the right time to do it.

Speaker 3:

100%. And whether you should even use AI to do it; not every problem needs AI. If you're hanging a painting on the wall, you don't need a jackhammer, a hammer is going to do the job. So I think having that breadth, depth, and span of technical knowledge to ask, is there pre-existing technology that can solve the problem you're trying to solve, is extremely helpful. And then knowing where AI should and should not be applied, that's also helpful. And I think that's just where I struggled. I was like, this stuff is known, everyone knows this stuff, or so I thought. And then I was like, wait a minute, no, not everyone knows this stuff. Everyone's repeating some of the things that I went through 10, 15 years ago. Maybe it would be helpful if I started sharing these things more publicly versus keeping it all in here.

Speaker 2:

Yeah, and that's where I definitely recommend the book, because there are just a lot of these frameworks, ways to approach these ambitious projects within large organizations, because just doing anything within a large organization is its own beast, right, which is outside of any technical way of approaching it. Getting that leadership buy-in, getting the internal buy-in, even from people who are adjacent to your organization and have to be champions of what you're doing. But I also wanted to ask you: you mentioned in your book, and even on this pod, making existing teams more efficient. How should leaders think about balancing AI automation and human expertise?

Speaker 3:

Yeah, and to be honest with you, if you ask me what keeps me up at night, this is now the problem I was meant to solve. For a long time I just kind of haphazardly stepped into positions and took roles and jobs without a real purpose. But more and more, there's this seed that's starting to bloom, and the noise is getting louder and louder. If you talk about my purpose right now, it is really about how we leverage all the amazing capabilities of artificial intelligence, but not at the cost of our intellectual atrophy.

And I'm seeing this across multiple generations. My children, who are seven and nine, are digitally native. You know the picture of how apes gradually straightened their posture until we became humans? I'm watching a generation start to hunch back over their screens, going back to those old ape formations. Watch people walk; we are degrading our posture back toward how we started. Okay, put that image and thought aside.

I'm also learning that with the younger generations, common sense is less common than what we're seeing even at the adult stage, because the ability to question and push back is a muscle that hasn't been flexed. If they read it, they fundamentally think it's true. Or if an influencer says it, well then it must be right. You and I both know that sure as heck isn't right. Being an influencer doesn't mean they've even done the job. An influencer just had a good strategy on social media, either got there early or had a great marketing tactic and created a voice around themselves, but that does not make them the source of authority for a subject or a domain.

But that's not the world we live in. So now you add that, and then you add artificial intelligence. Whether I'm looking at people in the artistic realm, in content creation, in strategy, everything is a prompt away, Grok, ChatGPT, you name it, the soup du jour. Pick your foundational model of choice.

Speaker 3:

What's happening is that while knowledge has been democratized, which is great because we get access to information, we are not taking it a step further and asking questions. We're using it as a source of truth to check a task off a task list, but we're not using it as a place to start our own individual forensics. So I think we're short-circuiting the learning process. And if we short-circuit the learning process, we stop learning, we stop being curious, we weaken the synapses, the neurons firing and forming connections between different data facts to create a point of view. We will all end up just regurgitating exactly what we read off any of these foundational models, and we're all going to start sounding the same. I can already hear it in some people's tones.

Speaker 3:

I was at a conference once and someone wanted to ask a question. I could tell in an instant it was not from that person's heart, their tone or their mindset. They had asked ChatGPT for the question, got it, and then got up and asked it, hoping to sound intelligent, but it came across extremely scripted and insincere.

Speaker 3:

And so when you take the image of the apes and evolution, our inability to question the source, and the fact that we're short-circuiting the learning process, what I'm afraid of is that we're all going to experience intellectual atrophy: the thing that makes us the strongest, our ability to think critically, solve big problems, invent, imagine, because we're just going to outsource our thinking to all these amazing capabilities that exist around us and for us. So that's the crusade I'm on: how can I help big companies and small companies, while we introduce these amazing capabilities, to not outsource our thinking, to properly integrate AI with the human workforce, to increase our largest investment, which is human capital, so that we become empowered and not disempowered? I always say I want to deploy AI for good, at the individual level, at the corporate level and at the community level, so that it strengthens us, it doesn't displace us.


Speaker 2:

No, but the important thing there is that, at the end of the day, people know this now, whether you're an executive or you manage a small team: being able to go ask someone who owns something and really understands it, whether they know a certain customer at an intuitive level or they know some code base at an intuitive level, they're just the expert on that thing, right? Having a human there is so important, because you trust that person and they're the one with the authority to really get stuff done. So, tying this into AI, we're talking about AI empowering humans, making them more efficient, and automating the stuff that should be automated. Where do you see, conceptually, at a high level, how companies should look at human in the loop for AI?

Speaker 3:

Yeah, so I was chatting with someone who used to work for me a long time ago, and I think if we're not careful and don't do something now, we're going to pivot from AI being our co-pilot to us managing AI cockpits, and that can be dangerous on many levels. So I do think we need a human in the loop for decision making and governance at all times. Now, easier said than done, because there's always going to be a place and time where autonomous, thinking AI agents are 100% going to be not only more profitable and more efficient, but the business decisions that need to be made aren't critical. There are classic data aggregation case studies, right? Instead of, I hate to say this, a data analyst having to stitch together and cobble up information and then create this massive translation layer to run a dashboard or a report, we could develop that individual in a very different way.

But I do think a human needs to be in the loop, and the framework of AI only, human only, AI plus human, and human plus AI actually distinguishes where a human should be in the loop and why. If we're talking about national defense contracts, that's not something you should automate, not at this point in time; there's too much at risk. If we're talking about the criticality of a certain change with partnerships, who you're going to choose to work with and on what deal terms, that needs a human in the loop. Now, I know AI is being used a ton for data analysis, but when you have to publicly report your 10-K, that should also require a human in the loop, because you are technically responsible. So I think AI agents are going to provide a ton of benefits, there's no doubt about it, but again, where you apply them is going to be key.

But at the end of the day, I think the information should always be verified by a human in the loop, because the risk exposure of not doing so is too massive.
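Sol's four-way framework (AI only, human only, AI plus human, human plus AI) can be thought of as a routing decision made per task. As a purely illustrative sketch, where the risk labels, the reversibility flag, and the routing rules are assumptions for the example rather than anything prescribed in the conversation, it might look like:

```python
from enum import Enum

class Mode(Enum):
    AI_ONLY = "AI only"
    AI_PLUS_HUMAN = "AI plus human"   # AI does the work, a human reviews/approves
    HUMAN_PLUS_AI = "human plus AI"   # a human does the work, AI assists
    HUMAN_ONLY = "human only"

def choose_mode(risk: str, reversible: bool) -> Mode:
    """Pick an oversight mode from a task's risk level and reversibility.

    The "low"/"medium"/"high" labels and the rules below are illustrative
    stand-ins for whatever scoring an organization actually uses.
    """
    if risk == "high":
        # e.g. defense contracts or 10-K reporting: keep a human accountable
        return Mode.HUMAN_PLUS_AI if reversible else Mode.HUMAN_ONLY
    if risk == "medium":
        return Mode.AI_PLUS_HUMAN
    # low-risk, easily undone work (e.g. routine data aggregation)
    return Mode.AI_ONLY if reversible else Mode.AI_PLUS_HUMAN

print(choose_mode("high", reversible=False).value)  # human only
print(choose_mode("low", reversible=True).value)    # AI only
```

The point of the sketch is only that the human-in-the-loop question gets answered per decision, against explicit criteria, rather than once for a whole AI program.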

Speaker 3:

One of my favorite use cases: I think it was about three and a half, four years ago, a driverless vehicle hit someone who was walking in the walkway. The first thing the person did was try to sue the owner of the driverless vehicle. Okay, but when you peeled back the layers, technically that person was breaking the law, because they were crossing at night when they shouldn't have been, and the street signs said don't cross because there was still moving traffic. Well, there wasn't a human in the loop; they had actually left the decision making to artificial intelligence.

Speaker 3:

But in something like that, you can understand all the different nuances, because the driverless vehicle was technically abiding by the law, and by the law it had right of way, and the person crossing the street broke four laws in the making. Who's at fault, and who should be doing the decision making? These things require a human in the loop. It's very subjective, it's not black or white, and unfortunately I would say our world is fifty shades of gray more than it is black or white. So I think we shouldn't lose that human-in-the-loop component for things that require critical thinking, and unfortunately, I think some people are trying to do exactly that.

Speaker 2:

Yeah, and I think that's the right way to look at almost every AI project: who is the human in the loop, and who's the owner? There still has to be someone accountable. Like you said, the first person they looked at when a driverless vehicle hit a pedestrian was the driver. Or is it the software maker, or whoever, right? At the end of the day, someone's accountable for this AI, for delivering it and for its quality. And we could get into fine-tuning and reinforcement learning and how that's applied and all this stuff; we're actually going to cover a lot of it. Just as a side note, I'm recording this from our Palo Alto headquarters, and Sol, you'll be joining us on site here in May for our AI Executive Roundtable, which I'm super excited about. We'll dive into some of these deeper topics and I can't wait for that. That'll be fun.

Speaker 3:

Yeah, me too. I'm super excited. But if you notice, John, in this entire conversation we did not talk about API toggling or model reinforcement. We did not talk about fine-tuning and RAG techniques. We did not talk about capacity. That stuff can be solved for; not only do we have extremely smart individuals, but there are a lot of wonderful companies out there that can actually solve those problems for folks. It's the applicability in business, it's the accountability, it's the humans in the loop, and knowing where, when and how. There's a space and time for everything, and I think that level of discernment needs to be applied if you want it to be successful. At least, that's been my experience over the past 13 years of deployments.

Speaker 2:

Absolutely, and that's one of my favorite reasons for following you: I can always find that practical common sense and proven strategies for deploying AI in a way that's successful. And I really do think about that, I won't say day-to-day, but week-to-week or quarter-to-quarter: how do we make sure that what we're doing is really successful and valuable? Because, like you said, there's all this naming of things, model context routing and reinforcement and all this technical stuff, and any team can implement any of it, but the question is when, and doing it for the right team at the right time. So I'm really excited to continue talking to you about this at our upcoming event. I'm very, very excited about it.

Speaker 3:

So thank you for having me on this podcast, and thank you for inviting me to your event in May. You guys are doing amazing things and I'm a big fan, so any which way I can support you, I will.

Speaker 2:

Thank you, Sol. Thank you for joining us for this episode of What's New in Data. We'll have links to Sol's book and her LinkedIn, all the places you can continue to follow her, down in the show description and show notes. Sol, thank you again for joining, and we'll see you soon.

Speaker 3:

You got it. Thanks, John. Thank you.