What's New In Data

From Turing to Tooling: How Businesses Really Win with AI with John K. Thompson (University of Michigan)

Striim

What if the fastest route to real AI value isn’t training a new model—but learning how to use the ones we already have, better? We sit down with author and professor John K. Thompson to chart a clear path from AI’s roots to practical wins you can ship this quarter, while keeping an honest eye on what stands between today’s LLMs and true AGI.

We start with the origin story—from Turing’s early ideas to the Dartmouth workshop—and show why those founding questions still matter. Then we move into the present: how context windows let you infuse models with your voice, policies, and playbooks without fine‑tuning; why unique information assets are your real moat; and how cross‑functional teams (operators plus technologists) turn prompts into production results. John explains the power of causal AI to answer “what should we do?” and shares concrete examples, from proposal generation that compresses months of work into minutes to manufacturing setups that slash daily waste by two‑thirds.

Along the way, we cut through common myths. AGI isn’t arriving next week; we’re missing durable memory, robust causal reasoning, and integrated “composite AI” that blends generative, foundational, and causal methods. GenAI coding is a productivity edge for scaffolding and tests, but complex logic still needs expert hands, strong reviews, and measurable KPIs. For leaders, the blueprint is simple: build around the model first with retrieval, guardrails, and evaluation; organize AI and data science as one team; choose tools that fit practitioners; and measure outcomes relentlessly.

If you’re serious about unlocking AI’s upside without getting lost in hype, this conversation offers frameworks you can use today and a realistic map for tomorrow. Enjoy the episode, and if it resonates, follow the show, share it with a colleague, and leave a quick review to help others find it.

Follow John K. Thompson on LinkedIn

What's New In Data is a data thought leadership series hosted by John Kutay, who leads data and products at Striim. What's New In Data hosts industry practitioners to discuss the latest trends, common patterns for real-world data architectures, and analytics success stories.

SPEAKER_01:

Hello, everybody. Thank you for tuning in to today's episode. Super excited about our guest, John K. Thompson. John, how are you doing today?

SPEAKER_00:

Doing great. It's spring here in Chicago. It's sunny and warm and everybody's running around. So it's great.

SPEAKER_01:

It's wonderful. Oh, amazing. So glad to hear about the weather in Chicago. On the day of recording here, it's April 16th, and I know sometimes those Chicago winters can bleed into May, so it's nice to hear that you're getting some good spring weather.

SPEAKER_00:

Yeah, the city goes crazy. The first day it gets to about 50, everybody's out at the lakefront, riding bikes, rollerblading, roller skating, doing everything. The city has come to life.

SPEAKER_01:

Yeah, absolutely. When people ask me what my favorite city is, I always ask if I can qualify the time of year, because it's summer in Chicago specifically. I actually had my wedding in Chicago, but in the winter, and it was still very nice, a nice snowy wedding. But John, today I wanted to discuss your new book, The Path to AGI. A super fun read. It's very executive-oriented, giving you a high-level view of the value of AI and the way to achieve it, but it also gets into the required technical details in depth. It's a really great book that I enjoyed. But first, John, I'd love for you to tell the listeners about yourself.

SPEAKER_00:

Sure. Yeah, and thanks for that opportunity, John. I've been involved in AI for 38 years, and often people say, oh, it hasn't been around that long. And I'm like, well, yeah, it has. It's been around for about 70 years. I started working in data so many years ago, and it just seemed to me that everything we did that had any real interest or value came from data. So I started on that journey a long time ago, and I've been involved in advanced analytics, statistics, and AI for that amount of time. Writing the book was really fun. I really enjoyed it. I've had a number of people come back and say, wow, 420 pages, what happened? Did you get possessed? What went on there? And I was just like, look, I got sucked into the topic and ran with it. So I'm an all-around AI nerd and AI generalist, I guess is the way to say it.

SPEAKER_01:

Yeah, one of my favorite parts of the book was how you went all the way back to the foundations: the very early meetings that were the precursor to AI, and the concepts of computers really thinking and making decisions autonomously, at least conceptually. I'd love to hear about that part of the book.

SPEAKER_00:

Yeah, writing a book is a labor of love. Anybody that's written one knows that. One of the things that made me a little sad was that Alan Turing was one of the first people to go to the Royal Society and start talking about machines that think and machines that mimic human intelligence, but he never wrote anything down, he never submitted those papers. So there's really no written history of him starting the movement toward AI. And I'm such a history buff. I would love to read those papers and see his thoughts. But he was actually talking about it in the late 40s. Then we move up into the 50s and the Dartmouth Summer Project, and all the great thinkers, Claude Shannon and John McCarthy and Marvin Minsky and all these people, coming together at Dartmouth over a summer to talk about what AI is and what it could become. What I realized in doing that research is that what we think about AI is what they defined. Going into that workshop in 1955 and 1956, they had written down everything that we have been trying to do, and for the most part have accomplished, over the last 69, 70 years. So it's great to go back and see where this stuff actually started.

SPEAKER_01:

Yeah, it really gives you a sense of how long it takes to solve these types of problems. And now we're in this era where we have all this technology at our fingertips, right? You can provision essentially limitless resources in the cloud just from your browser, and you have access to all these massive foundation models that have been pre-trained on petabytes of data, both public and maybe not so public. That's the part of the book that made me excited in a way I wasn't before. I was like, wow, imagine being back in the 1960s, thinking about this and being inspired by it, but not having the infrastructure to actually execute it. We could only theorize about it back then. It's a really good perspective, because now you can use these LLMs in any customer-facing or internal application. But your book is really good on why you would even want to do that, whether you should do it, and putting the right labels on it. So definitely a great read. I recommend it both to folks on the business and executive side and to data engineers and software engineers who want the bigger picture behind AI and what it can accomplish. And I like the cover too, so that always helps.

SPEAKER_00:

Thank you. Yeah, that cover. I think they sent me 10 or 12 different variations, and that was the one that jumped out. And then I brought in a few different people, including my wife, who made different suggestions on the color of that arc. So it came together as very much a group project.

SPEAKER_01:

You know, they say don't judge a book by its cover, but people do anyway, so it's important. But before we get more into the book, you also have another update: you've taken on full-time teaching. I'd love to hear about that as well.

SPEAKER_00:

Yeah, I've become an adjunct professor at the University of Michigan, in the School of Information. Our daughter went to Michigan and graduated from the School of Information, and I've been talking to them for about five years about different things I could do to help the school. One of the deans came back and said, well, we'd like to have you teach, in addition to being a board member. And I said, wow, that's really interesting, and I'm honored and flattered. I've taught at DePaul and different universities, but never as a full-time faculty member. So I sat down and wrote a class based on my second book, and I'm just finishing it now. I'm getting ready to send the final to the students, and it's been great. It's really been fun. As we talked about, I live in Chicago, and the class is at the University of Michigan in Ann Arbor. So I've been driving up there every week, teaching a three-hour class, and then coming back to Chicago, and it's been such a wonderful experience. One of my friends had an observation. He said, well, it's quite intriguing: you're interacting with people that are as young as 17 years old. Some of the undergraduates are 17, 18, and I have graduate students. Then I go to work in my day job and interact with people in their 30s, 40s, and 50s. And then I do work with boards. Over the last couple of years I've been in front of a couple hundred boards, and those are sometimes people up into their 80s. So I'm getting quite a spread of exposure to different slices of education, corporations, and board governance, and hearing all these different people give me their impressions and ideas of AI. It's quite enlightening.

SPEAKER_01:

It's pretty cool to get all those perspectives. They all think about it in different ways and at different levels of technical detail: risk mitigation, or looking for opportunities to extract more internal optimization. I'd love to hear more about that, actually. You're working with all these different groups of people, and it's giving you great perspective. What's your main takeaway from that? What is the right level to really engage with a set of problems?

SPEAKER_00:

Yeah, that's a great question. The undergraduates want to do everything all the time with gen AI. It's like, do I need to go to the store? Can gen AI bring me my Mountain Dew, or whatever it happens to be? And I'm like, well, probably not yet, but maybe you could. I suppose you could get a robot to bring it to you. The mid-career professionals are all about efficiency and effectiveness: how do we get things done faster, how do we automate everything away? And once you get up into the senior levels and the board, it's all risk mitigation: what's our risk, how do we not end up on the cover of the Financial Times or something like that? So those are the things I see in broad brushes. But when we dive down into it, we have some really intriguing conversations with the subject matter experts, the people running the actual operations of corporations: supply chain, pricing, distribution, sales, marketing, manufacturing, all those things. Those are quite intriguing. And when you get the subject matter experts in the room and you bring the technologists in, it's really a great time, because I've never seen this kind of dynamic before. You put it very well earlier, John: we've got these large models, trained on petabytes of information, that are accessible to almost everybody. If you can write cogent sentences in your natural language, you can prompt these models. So once we get the technologists and the subject matter experts in the room, and we break down the barriers of communication and start talking about how each of them can bring their skills, experience, technologies, and understanding of operations together, you really start to see impressive movement in building solutions that can pretty much take over almost any process you want to focus on.

SPEAKER_01:

And it's so interesting to see how, like you said, different people have different approaches. The college student might throw AI at a problem where there isn't really a lot of value in doing so, just to play around with it. And I see that as well, where young engineers who see all the hype want to dive straight into building something, even if it's not the best idea and there's no business purpose to it. They'll build it with AI, they'll build a Model Context Protocol server because everyone's talking about MCP, and for all its faults it's very popular, and at least at a framework level it does seem like something that will persist in implementations. Then you look at the operators who are deploying AI, and it's always very precise in terms of making sure it has internal buy-in and a number on its back, so to speak, to make sure it's delivering value. Because the developer tooling is there, you can build stuff that calls LLMs, but you have to think about why you would even do it. And you have a lot of great conceptual ways of thinking about that in your book. What is the essence, the core things that need to happen, to really get business value out of AI?

SPEAKER_00:

That's a great question. We've gone through the last two, almost three years of just rocketing innovation. I'm not quite sure yet, but I do feel like we're starting to see that curve come down a little bit. It's not like every week there's another crazy breakthrough, so we don't have to keep spending so much energy trying to stay on that almost vertical curve. We're starting to see it flatten a little, and that gives people in business and operations a chance to take some of that attention and start to look at the things that are salient and interesting to them. As you said, you mentioned young engineers and young professionals. I often have them come to me and say, I don't know where to start, I don't know what to focus on, I'm not sure where I can make a difference. And I always say, partner with someone who's in the operations of the business and go have lunch with them. Then the next question is, how do I extract their problems from them? And I'm like, you don't have to extract them, you just have to ask: what is the challenge, what's keeping them up at night, what can't they get done? They'll tell you. It's really quite intriguing. I was talking to a company a couple of days ago that's one of the premier information providers in the world, and they were actually very befuddled about what to do with AI in their business. I said to them, look, you're in a privileged position. You are a trusted processor of much of the world's financial information. You have connections to insurance and finance and all these different places. Really what you need to do is take a step back and say, we have an opportunity to rewire some of the way people think about credit risk, finance, granting loans, insurance, all this kind of stuff. If you just take a step back and look at where you are, you can see, at least I can see, a very clear path forward that is going to drive innovation and change and unleash value. So it really is different for each organization, and almost each person, really. But it comes down to: what are the unique information assets that you have? What are the unique information assets that your ecosystem of partners has? And how can you put those together in a way that's going to drive and unlock value for you? This all comes from a premise I believe you and I have talked about: with gen AI and the large language models and all those innovations, we've basically unlocked the additional 90% of the world's information that can be actively used now. If we had this conversation three years ago, we'd be talking about ledger entries and quantitative data and columns and rows and numbers. We wouldn't really be talking about audio and video and images and text, because we didn't really do much with it. It's hard for people to get their heads wrapped around the sea change that has happened in the last three years. So I think people need to open their aperture. They need to take a moment and see where those unique information assets are, how they can actively engage with them, and how they can change the game today.

SPEAKER_01:

Yeah, that's incredible advice. And this information is so much more accessible than it was before. All these technical hurdles have been removed in terms of access to machine learning and AI, taking out a lot of the steps required for pre-training and things along those lines, like labeling data sets. I'm sure that's still necessary for many use cases, but I would love to hear from you: what are some of the practical unlocks that executives should be aware of, thanks to these innovations in AI?

SPEAKER_00:

Yeah, one of the things that really opened my eyes to it, and I suggest it to everybody I talk to, is: go take the Prompting 101 class on Coursera. And everyone's like, really? You took that class? And I'm like, yes, I took that class, as a matter of fact. What it really opened my eyes to is the value of the context window in a model. In that class they have exercises, so I basically loaded up five of my books and papers and videos and kind of made a digital John, just to see, hey, how does this really work? Even without touching the core of a model, without doing any fine-tuning whatsoever, you can take information that you control, put it in the context window, and the model will do an amazing job of mimicking whatever phenomenon you want it to. It did a really good job of answering questions the way I would have answered them, and in my voice. So I'm saying to people, look, you don't have to be a techie. That's one of the great messages. You can sit down and say, hey, I'm a small businessman trying to open another distribution channel, maybe in a country I don't know, or in a different part of the United States, or even with customers I don't really understand. If you can gather enough information and upload it into the context window of a model, you can probably get a pretty good idea of how you need to protect yourself legally to open that new distribution channel. Now, three years ago, was that possible? Could that even happen? Could I, as an individual, do that? I probably could, because I'm a nerd, but most small business people, no way, not possible. They would have had to go get some programmers and developers and data management people and maybe some architects, and they would all have looked at it and gone, gosh, we've got legal agreements, we've got distribution, we've got pricing, we've got all sorts of tables and graphs and things. How do we bring all this together? With gen AI: upload it, and away you go.
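
As a rough sketch of the pattern John is describing, the snippet below stuffs a few documents you control into the context window of a hosted chat model instead of fine-tuning anything. It assumes the OpenAI Python SDK and a generic chat model; the file names, model name, and question are placeholders, not anything from the episode.

```python
# Minimal sketch: "infuse" a model with your own material via the context
# window instead of fine-tuning. Assumes the OpenAI Python SDK is installed
# and OPENAI_API_KEY is set; file names and the question are placeholders.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

# Load the documents you control (books, policies, playbooks) as plain text.
corpus = "\n\n".join(
    Path(p).read_text(encoding="utf-8")
    for p in ["my_book_excerpt.txt", "style_notes.txt", "faq.txt"]
)

response = client.chat.completions.create(
    model="gpt-4o",  # any large-context chat model
    messages=[
        {
            "role": "system",
            "content": (
                "Answer in the author's voice, using only the reference "
                "material below.\n\n--- REFERENCE MATERIAL ---\n" + corpus
            ),
        },
        {"role": "user", "content": "How should I open a new distribution channel?"},
    ],
)

print(response.choices[0].message.content)
```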

SPEAKER_01:

I wanted to latch onto that one comment you had about fine-tuning: it's not as necessary as people think. People think, oh, for me to customize an LLM, I need some PhD-level engineers who are going to do matrix calculus and change the vectors. No, you don't need that. It's all in the context window, which is where the magic is. If you look at some of the popular AI deployments, it's just stacking LLM calls in an intelligent way, sequencing that context in the right order, and coming up with a plan. So people jump to fine-tuning, but it's not necessary. If you're building the LLM and selling the LLM, maybe. But that's not where the value is right now for most folks. For some it is, but you'd better have 20 billion in capital ready to compete in that market.

SPEAKER_00:

Yeah. The first thing I ask people in this kind of dialogue is: do you want to, as you said, create a new model, protect that model, sell that model, and monetize it? Or do you just want to get something done? Basically, do you want to do things in the model or around the model? What I'm saying is that you can do most things around the model to prove whether it works or not. If you prove it works and there's real value there, then maybe you want to do it in the model. But in the model is not where you want to start. You want to start around the model.
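
A minimal sketch of what working "around the model" can look like in practice: two chained calls to a hosted model, with your own policy text and the first call's output supplied as context for the second. The model name, prompts, and policy text are invented for illustration.

```python
# A rough sketch of working "around the model": chain two calls to a hosted
# model, feeding the output of the first step into the context of the second,
# with no fine-tuning. Model name, prompts, and the policy text are placeholders.
from openai import OpenAI

client = OpenAI()

def ask(system: str, user: str) -> str:
    """One chat-completion call; all customization lives in the prompts."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    )
    return resp.choices[0].message.content

company_policy = "Discounts above 15% require VP approval. Net-30 terms only."

# Step 1: have the model plan against your own context.
plan = ask(
    "You are a sales-operations assistant. Follow the policy exactly:\n" + company_policy,
    "Outline the steps to approve a 20% discount for a new customer.",
)

# Step 2: feed that plan back in to produce the final artifact.
email = ask(
    "Write a short internal email that follows the plan below:\n" + plan,
    "Draft the approval request email.",
)

print(email)
```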

SPEAKER_01:

Absolutely. And you cover a lot of this in your book, both from a practical execution standpoint and as a technical deep dive. In your book, The Path to AGI, it would be great to just go through some of the high-level sections. I don't want to spoil anything, but you could probably guess from the title where it's going, with AGI. We'd love to hear more about that.

SPEAKER_00:

Sure, and thanks for that question, John. I started writing the book, and the idea was that there would be these core sections. In the book now, they're section two, section three, and section four: foundational AI, past, present, future; generative AI, past, present, future; and causal AI, past, present, future. So there's a pattern there. Obviously, I'm a pattern person. That was supposed to be the core of the book. But when I wrote it, it started out as if you were almost jumping off Mount Everest. So I wrote the first section, which is data for AI, because I thought there had to be some way to ramp this up better than what was in the manuscript. So you end up with section one, which is data, and then foundational AI, generative AI, causal AI. And the last section is what is going to happen over the next few decades to get to AGI. That's the layout of the book right there.

SPEAKER_01:

AGI. Let's define that and get your take on it as well.

SPEAKER_00:

Sure. We've been talking about it a lot, and I don't think we've actually said it: artificial general intelligence. I think most of the people who listen to your podcast are pretty well versed in this stuff anyway, but just to be clear, it is artificial general intelligence. We see a lot of people out there, Ray Kurzweil, Elon Musk, Sam Altman, saying, hey, AGI is here, AGI will be here next week, AGI will be here in six months. I don't agree with them at all. Obviously, if you've gotten partway through the book, you get that. But the one thing I do agree with everyone on is the definition of what AGI is. The consensus definition is that AGI is where AI is as intelligent as an above-average college graduate. I've had a number of people come back after they've read part of the book and say, well, you're an AGI hater, or you're against AGI. And I'm like, no, not at all. I'm pro-AGI. I think it's a great idea, and I think we will get there. I just think there's a lot more work than what people are saying, understanding, and realizing at this point. So AGI is all about AI acting, reacting, and interacting as if it's as good as an above-average college graduate. That's the definition. There are lots of people giving pros and cons on it. I'm actually very pro; I think we'll get there. It's just going to take a while.

SPEAKER_01:

What would you say the current limitations are, very concretely, that don't allow us to call what we have today AGI?

SPEAKER_00:

Well, one of the things we started to see a little movement on last week was memory. AI, for the most part, doesn't have any memory. You come in and ask it questions and prompt it, and it responds to you as if it had never met you before. It's a lot harder than people think. You and I have known each other for what, three or four years now? I remember when we met and we were talking about Striim and different things, and in one of our previous conversations we were talking about the incident that had befallen you and many people in California with the wildfires. We've built a relationship, and all those different conversations are in my head, and when we get together and talk, I'm always thinking about those things. How's John doing on rebuilding his house? How has that affected his family? AI doesn't have any of that. AI doesn't have any emotions; it has no context for what we're talking about. Those things can be resolved, and they will be resolved in the next couple of years, or three to five years, or something like that. But there are many other things. Maybe Sam Altman hasn't said it, but many other people have, and Yann LeCun has definitely said it: language models are not the path to AGI. There are so many more things that AGI requires than just predicting the next word or letter. Causal AI has to come into it, foundational AI has to come into it. And what we're seeing now is, to use an old phrase, roll your own. I see it with young engineers. They're like, hey, I took gen AI and grafted it onto a logistic regression, or I took causal AI and put it in with gen AI. Everybody's grafting these things onto each other, but not many people are poking their heads above the parapet and saying what's going to happen: vendors are going to look at this and say, hey, we need an AI platform that has it all. And what I'm saying is it's going to take at least 30 years for a vendor or a cadre of vendors to integrate this stuff all together, so you have an AI platform with all the different flavors of AI in it. One of my graduate students asked me in class last week, well, that doesn't sound too hard. And I said, okay, let's take one feature and integrate it across foundational AI, causal AI, and generative AI. We went through a design session that lasted about an hour, and at the end of it they said, God, we're nowhere near even understanding it, are we? I'm like, nope. That was just a thought experiment in a university class, but there's a myriad of those things that have to happen before we're anywhere near it. We're not even close to composite AI, let alone artificial general intelligence. So I'm pro-AGI, and I think we will get there. Rodney Brooks, I don't know if you remember or know who Rodney is.
He was the founding director of MIT's CSAIL, and he's founded a couple of robotics companies. He's on record as saying he thinks AGI won't be achieved for 130 years, so he's even further out than I am. He and I have been trading emails back and forth, and his position is: why have people worked on AI for 70 years? The reason some of the best and brightest computer scientists have worked on it is because it's hard and it's interesting and it's fun. So when I say AGI is 120 years away, I don't say that as a detractor or a Luddite or someone who wants to take away from it. I say this is a hard problem, and if you're really smart and engaged and excited, you should jump in the pool. This is where the fun is.
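
To make the memory gap concrete, here is a toy workaround that stores facts from past sessions in a local file and prepends them to each new prompt. It assumes the OpenAI Python SDK; the file name, facts, and model are placeholders, and this kind of bolt-on store is exactly the stopgap, not the durable memory John says is still missing.

```python
# A toy sketch of bolting "memory" onto a stateless chat model: persist facts
# from earlier sessions in a local JSON file and prepend them to each new
# conversation. File name, facts, and model are placeholders.
import json
from pathlib import Path
from openai import OpenAI

MEMORY_FILE = Path("memory.json")
client = OpenAI()

def load_memory() -> list[str]:
    return json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []

def remember(fact: str) -> None:
    facts = load_memory()
    facts.append(fact)
    MEMORY_FILE.write_text(json.dumps(facts, indent=2))

def chat(user_message: str) -> str:
    system = "You are a helpful assistant. Known facts about the user:\n" + "\n".join(
        f"- {fact}" for fact in load_memory()
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user_message}],
    )
    return resp.choices[0].message.content

remember("John is rebuilding his house after the California wildfires.")
print(chat("Anything I should ask John about when we catch up?"))
```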

SPEAKER_01:

Absolutely. And that's one of the things that people building with AI come to terms with: the probabilistic nature of it. That nature always makes it a little bit unpredictable, right? That's in its definition. It's one of the things software engineers face when they make the leap from building data-driven applications to AI-driven applications: you lose that determinism, that predictability, and it's a little unsettling for some teams. It changes the way people have to work. And I think that's the root of everyone's lack of commitment to saying, yeah, AGI is here. Because there's always that little 2% chance that the AI is going to completely make something up, or produce the wrong plan, or go execute something that's completely off. So one of the other things I wanted to ask you about, which is well covered in your book The Path to AGI: you also wrote a book called Causal Artificial Intelligence: The Next Step in Effective Business AI. Tell us about causal AI.

SPEAKER_00:

Yeah. Causal AI is a great tool. Kudos to Judea Pearl and his team at UCLA and all his PhD students and collaborators for creating an entirely new branch of calculus. I think I'm a reasonably smart guy, but man, I'm nowhere near creating new math, and they have definitely done that. The great thing about causal AI is that while it's still probabilistic and still has some of those fuzzy components, you're actually bringing in information and understanding, at a much greater level, that A did cause B. One of the hard things about causal AI is that most people think they understand causality, and at some level we do. I put my finger on a hot stove and I burned my finger: that's pretty easy to understand. But when you start to break causality down to its mathematical level and put it on a predictable, understandable, repeatable basis, that's hard. We need causality to be better than it is today, because causality is one of the elements we require for AGI. Without causality, without actually understanding true cause and effect, AI can never get anywhere near the experience we have in the real world. So causal AI is a burgeoning field; I think there are 15 or 20 vendors working on it right now, and some really interesting software is being built. But it's early days, and I think it's been slowed down a little because gen AI has sucked all the air out of the room for the last two years. Those companies are still there, they're still being funded, they're still doing interesting work, and it will come to the fore in probably two to seven years, or maybe five to seven years. And we need it. We have to have it. That is a core component of what the AI stack is going to be in the future, and it's very exciting. One of the things that's truly exciting about causal AI is that you can go back in history and take any data set that's ever been collected, and if it has a reasonable objective, or a reasonably close objective to what you're trying to achieve, you can integrate that data set into your current analysis. You could take some of Darwin's data from whenever he was alive, back in the 19th century, and condition it in a way that you can bring it into the causal analytics you're doing today. So one of the great things about causal AI is it makes all the structured data we've ever had in history available for use today, which is mind-bending when you really think about it.

SPEAKER_01:

And how does that differ from generative AI?

SPEAKER_00:

Well, generative AI is much more unstructured. You can bring in all kinds of stuff with generative AI. These are two parallel movements we've never really seen before: in generative AI you're bringing in the unstructured information, and in causal you're bringing in the structured. But what they both do is make almost the entire knowledge repository of the world available for you to use actively in your analytics today, which is really cool.

SPEAKER_01:

Yeah, absolutely. And you work with companies that have deployed this and seen real value from it. I'd love to hear one of those stories.

SPEAKER_00:

Yeah. When I was at EY, and I left there about a month ago now, we built a gen AI platform called EYQ that serves 300,000 people on a daily basis. They use it for all kinds of productivity applications: uploading documents, comparing legal documents, and different things like that. A team in EY actually built something called Deal and Delivery Assist, which lets them bring in the best proposals ever built at EY. It used to take a number of people at EY months to build these proposals. With Deal and Delivery Assist, it takes one person answering about 11 questions and putting in a well-formed prompt, and out comes a fully formed proposal. So you go from multiple people over multiple months to one person for a few minutes. A pretty impressive productivity application right there.

SPEAKER_01:

And I think it does require someone who's knowledgeable about the parameters required for a successful deployment of causal AI. I'd love to understand from you: let's say I'm a business leader, I directionally understand AI is this big unlock, I have an initiative, and I want to use causal AI. Where do I start? Do I go looking for vendors? Do I try to hire the right engineers? Do I need a PhD? What would the steps be?

SPEAKER_00:

Yeah. I think the way to do it, if you're just a regular company, not a tech company or a Silicon Valley organization, is to talk to some of the early-stage causal vendors and get them to educate you on where there's a good application for causal AI. One of the best stories I've ever heard around causal was a bakery in the UK. A big bakery; these people crank out millions of buns a week. They had a day shift and no night shift, so each day they would shut everything down and everybody would go home. The next day they would come back and the humidity was different, the heat was different, the ovens operate differently every day. They would crank everything up, run the factory, and keep tweaking the heat and the speed and the dough and all this different kind of stuff, and they would throw away about half a million dollars' worth of product every day. That seems absolutely wasteful. But they brought in a causal vendor that said, okay, we can bring in all the known factors every morning and spit out what we think is the optimal setting for all the different speeds, the dough, the ovens, everything. And they eliminated two-thirds of that waste. So that's a really good use case for causal. If you don't understand how things work, if you have a question about what you should do, a question that starts with "what," it's usually an application for causal. That's the kind of thing you're looking for. What I would say is: go find these vendors, bring them in, explain what your challenge is, let them educate you on it, and do a POC. If it works, then really think about it: okay, now we know we can make this stuff work in the real world; it's not just some pie-in-the-sky conceptual thing. Then you can start to plan forward and say, I want to hire some people, I want to have this as a core capability, and I'm either going to use this vendor or I'm not. That's the way I would start out if I were running a business.
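
As a toy illustration of the "what should we do" style of question, the sketch below estimates the effect of a single setting on output using simulated data and a simple backdoor adjustment for one confounder. The numbers and variable names are invented; real causal AI platforms go far beyond this, with causal graphs, do-calculus, and counterfactuals.

```python
# Toy sketch of a "what should we do" question on simulated data: estimate the
# effect of one setting (A) on output (B) while adjusting for a confounder (C),
# instead of trusting the raw correlation. All numbers and names are invented.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 10_000

humidity = rng.normal(size=n)                                    # confounder C, e.g. morning humidity
oven_boost = (humidity + rng.normal(size=n) > 0).astype(int)     # setting A, chosen partly because of C
output = 2.0 * oven_boost + 3.0 * humidity + rng.normal(size=n)  # outcome B, true effect of A is 2.0

df = pd.DataFrame({"A": oven_boost, "C": humidity, "B": output})

# Naive comparison is biased because C drives both the setting and the outcome.
naive = df.loc[df.A == 1, "B"].mean() - df.loc[df.A == 0, "B"].mean()

# Backdoor adjustment: compare A=1 vs A=0 within strata of C, then average the
# per-stratum differences weighted by how common each stratum is.
df["C_bin"] = pd.qcut(df["C"], 20)
strata_means = df.groupby(["C_bin", "A"], observed=True)["B"].mean().unstack("A")
per_stratum = strata_means[1] - strata_means[0]
weights = df["C_bin"].value_counts(normalize=True)
adjusted = (per_stratum * weights).sum()

print(f"naive difference:  {naive:.2f}")    # inflated by the confounder
print(f"adjusted estimate: {adjusted:.2f}") # close to the true effect of 2.0
```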

SPEAKER_01:

Yeah. And you have some good comments in your book about this as well that get much deeper into the details of how this can be executed, along with the concepts for an executive or an operator to understand. So I definitely recommend looking at your book; I've already recommended it to quite a few folks. And you see these operators where everything they've done so far has generally been at the intersection of software engineering and data engineering of some sort. Hey, we're going to have these pipelines that ingest data from external and internal sources, and we're going to process that data into some form of models. Those models could just be your dimensional models or data warehouse models, or you can evolve them into machine learning models where you're essentially generating output from them automatically and deploying that in an enterprise context as software. Now those same operators, and we're fully leaning into Jevons paradox here, find that their cloud vendor has 10 AI tools and frameworks it wants to sell them, and they think, okay, it's all accessible to me, might as well use it. Obviously in the enterprise there's a lot of governance work and a lot of approvals around what models you can use and how you share data with them. So what's your recommendation to business leaders who have all these tools accessible to them now, in terms of choosing and evaluating the best one?

SPEAKER_00:

You know, that's a great question, and it's something we just talked about recently in my class. We really don't talk much about data science teams anymore. I've had conversations recently with organizations that said, well, we have a data science team and we have an AI team. And I'm like, you do? I've heard it multiple times now, and it seems really odd to me. They should be together; they should be one team, to tell you the truth. And I'm not a big believer in forcing AI teams or data science teams into standardized tool sets. I've had lots of data scientists who wanted to use R, many who wanted to use Python, and some who wanted to use proprietary tools and things like that. I don't advocate going out and using 13 different tools to do the same thing. But I also don't say that if you've got different data scientists who want to use different tool sets, you should force them to standardize. It's kind of like saying to a musician, you're a guitarist, but on this song I want you to play the saxophone. You're cutting off your nose to spite your face. So while you don't want to have everything in the world in your shop, you certainly don't want to force talented analytics professionals to use things that are going to be suboptimal for them. What I would say is: don't have an AI group and a data science group, they're really one thing. And then find out from them what's going to make them the most productive, and that's the tool set you should use.

SPEAKER_01:

Yeah, that's a great point. Sometimes I'll see teams, or I shouldn't say teams, it's really at the executive level, try to prematurely consolidate. They say, why do we have 10 database vendors? How did that happen? Let's just have one database vendor and pick the biggest one. And imagine the software teams then saying, oh, okay, we're going to migrate and refactor all our applications and go from the object store we use to this big relational database. I always go back to Michael Stonebraker's quote: one size does not fit all. And I think this applies, like you're saying, to AI and data science as well. Definitely interesting to hear that perspective. That's where executives have to be really context-sensitive and understand why each team chooses its tools. It's really to get the job done for them.

SPEAKER_00:

That's right. That's right. And we're moving into a world where, if you talk to many non-technical people, they certainly believe there's this one model somewhere out there in the cloud that they're using. Sometimes that's true; they are using a model. But more than likely, in the future, that's not true. We've seen that Mistral has been working on mixture-of-experts models for years now, and now Llama has released its mixture-of-experts models. So more than likely you're sending in a prompt that's accepted by a model, parsed, and then sent to many different models. We're probably at the most simplistic point we're ever going to be at right now. In the future, on the back end of these models, there are going to be hundreds, if not thousands, of them. So the idea that things are going to be simple, or should be simple, or could be made simple, probably isn't true.
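
An application-level sketch of the routing idea John describes: a cheap first call classifies the prompt, and the request is then dispatched to a different model. In a true mixture-of-experts system the routing happens inside the model itself; the model names and categories here are placeholders.

```python
# A minimal, application-level sketch of prompt routing: one cheap call
# classifies the request, then it is sent to a different model. Model names
# and route labels are placeholders, not any vendor's actual routing scheme.
from openai import OpenAI

client = OpenAI()

ROUTES = {
    "code": "gpt-4o",         # hypothetical: stronger model for code questions
    "general": "gpt-4o-mini", # hypothetical: cheaper model for everything else
}

def route(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Classify the user's request as exactly one word: code or general."},
            {"role": "user", "content": prompt},
        ],
    )
    label = resp.choices[0].message.content.strip().lower()
    return ROUTES.get(label, ROUTES["general"])

def answer(prompt: str) -> str:
    resp = client.chat.completions.create(
        model=route(prompt),
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(answer("Write a SQL query that finds duplicate customer emails."))
```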

SPEAKER_01:

The other really popular use case for generative AI is coding. Teams are using it to write code. I actually just saw a tweet today from Garry Tan, the CEO of Y Combinator, saying that over 90% of the source code from its portfolio companies is generated by AI now. Y Combinator, of course, does angel investments and is one of the most popular and prestigious early-stage investors in Silicon Valley. So it seems like they're really viewing generative AI coding as a competitive advantage, because they can have these small, brilliant teams of maybe two to four technical co-founders who can build almost an entire startup. And they're all extremely bright people; they're not just vibe coders, they're really good software engineers. So he makes it sound like it's a competitive advantage and a way they're going to infiltrate and undercut the market. What's your perspective on that?

SPEAKER_00:

I do believe that gen AI is good for coding, there's no doubt about it. But what we found in our real-world experience is that gen AI is good for simple coding: clearing registers, setting up housekeeping tasks, all the things that as a software engineer you need to do. That coding is very easy, very straightforward, and doesn't vary much, so gen AI is good for it. But when we get into sophisticated logic and very hard problems to solve, it usually falls apart. At least today it falls apart. We generated and committed millions of lines of code to our code base, but that was only about 20% of what we generated. We threw away 80% of it because it just wasn't good, or didn't scale, or didn't work. It'll get better over time; I don't know how fast. But I talk to people who say, oh gosh, I'm really sad, I told my son or daughter to become a developer, and now there are no jobs for developers. I'm like, that's not true, that's not the case. We're going to need as many developers as we can train for the foreseeable future. Yes, you can do interesting things, and brilliant people can and will continue to do good things, but it's not the end of developers as we know it.

SPEAKER_01:

And I think it's going to be a bit of an art, a little bit subjective, and you have to be clever about where you want to use gen AI coding. It can probably automate a lot of the QA testing framework for you, which is great.

SPEAKER_00:

Nobody likes, well, very few people like to do that, I guess.

SPEAKER_01:

Yeah, yeah. And even people who do love to do it can now just say, hey, analyze this code, tell me the paths, and help me map it out so I can write my tests faster. But you're a hundred percent right that if you're relying on gen AI to be the smartest person in the room and implement the sophisticated logic that solves your business problem, you're going to run into trouble, because it's going to do a bunch of stuff that no one understands, and that's very hard to debug. It's famously hard to debug generated code, especially if you're using it a little irresponsibly, giving it very vague instructions and having it generate thousands of lines of code; it becomes basically impossible to debug. So it's definitely one of those interesting use cases that's clearly very lucrative. Companies are making hundreds of millions of dollars off of it, and on the other side, new startups are coming in and using it as a competitive advantage. So it's great advice for business leaders, because large enterprises will need to adopt this at some point too, to stay competitive and keep their efficiency up to market standards. But it's important for them to understand that it's not going to come in and solve your hardest problems for you. It can only help you streamline the process, with other smart people solving your hard business problems.
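
For flavor, this is the kind of test scaffolding that generative coding tools handle reliably and that a reviewer can check quickly; the function under test and the cases are made up for illustration.

```python
# The kind of test scaffolding gen AI handles well: boilerplate, parametrized
# cases, obvious edge cases. The function under test and the cases are invented;
# a human still decides whether these are the right cases to cover.
import pytest

def normalize_email(raw: str) -> str:
    """Lowercase and strip an email address; reject anything without an '@'."""
    cleaned = raw.strip().lower()
    if "@" not in cleaned:
        raise ValueError(f"not an email address: {raw!r}")
    return cleaned

@pytest.mark.parametrize(
    "raw, expected",
    [
        ("  Alice@Example.COM ", "alice@example.com"),
        ("bob@example.com", "bob@example.com"),
    ],
)
def test_normalize_email_happy_path(raw, expected):
    assert normalize_email(raw) == expected

def test_normalize_email_rejects_garbage():
    with pytest.raises(ValueError):
        normalize_email("not-an-email")
```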

SPEAKER_00:

We as humans are still at the center of it all and will be for the foreseeable future. So this world where AI takes over, that's a long way off.

SPEAKER_01:

Well, John K. Thompson, author of many books, most recently The Path to AGI, thank you so much for joining this episode of What's New in Data. John, where can people follow along with you?

SPEAKER_00:

Thanks, John. It's so great to be here with you. I love every time we get together and talk, and I'm so grateful for the opportunity. Thank you. LinkedIn is the best place to connect with me: John K. Thompson. And if you want to check out any of the books, Amazon's the best place for that.

SPEAKER_01:

And the link to your LinkedIn will be down in the show notes for the listeners. John, likewise, I always really enjoy catching up with you. Hopefully we can do it again soon; we don't have to wait for your next book. Hope to see you soon. Thanks, John. Take care. Bye-bye.