Arkaro Insights: adapt and thrive in complexity

Beyond the Hype: What Creativity Research Tells Us About AI and Innovation | Dr Todd Lubart

Mark Blackwell Episode 56


0:00 | 42:21

What does the science of creativity actually tell us about working with AI? And where does human judgement remain irreplaceable?

Mark Blackwell speaks with Dr Todd Lubart, Professor of Psychology at Université Paris Cité, about his research on Cyber-Creativity and the future of human-AI collaboration in innovation.

Todd has spent his career building rigorous frameworks for understanding and measuring creative potential. In this conversation he brings that evidence base to bear on the questions business leaders are wrestling with right now.

In this episode:

  • Why AI excels at divergent thinking but struggles with genuine originality
  • Where the quickest wins are in the creative process, and where the pitfalls lie
  • The Pick and Mix approach to prompting multiple AI systems for better results
  • Why problem definition remains a distinctly human advantage
  • The four scenarios from the Manifesto for Human-AI Creativity, including Plagiarism 3.0 and Shutdown
  • How rigorous benchmarking transforms the quality of idea evaluation
  • The hidden cost of slop, and what to do about it

Essential listening for business leaders in innovation-intensive sectors who want to engage with AI on the basis of evidence, not headlines.







Connect with Arkaro:

🔗 Follow us on LinkedIn:
Arkaro Company Page: https://www.linkedin.com/company/arkaro
Mark Blackwell: https://www.linkedin.com/in/markrblackwell/
Newsletter - Arkaro Insights: https://www.linkedin.com/newsletters/arkaro-insights-6924308904973631488/

🌐 Visit our website: www.arkaro.com


📺 Subscribe to our YouTube channel: www.youtube.com/@arkaro

Audio Podcast: https://arkaroinsights.buzzsprout.com


📧 For business enquiries: mark@arkaro.com

Welcome And Why AI Matters

Todd Lubart

We've opened up the door to a whole new phase of creative thinking and what we call co-creation.

Mark Blackwell

So by relieving the bottleneck that you observe in humans' divergent thinking with AI, we're now able to rethink the whole creativity process. Hello everyone. Welcome back to the Arkaro Insights podcast. This is the show where we help business executives thrive in a complex adaptive world. I'm your host, Mark Blackwell. Today we are joined by a real pioneer in the science of human potential, Dr. Todd Lubart. Todd is a professor of psychology at the Université Paris Cité, where he has spent his career redefining creativity not just as a divine spark, but as a structured, multivariate ecosystem. From his foundational work on the investment theory of creativity to his development of the Creative Profiler, Todd has provided the architecture for identifying and nurturing talent. Today we are going to dive into some of his more recent research, from an extremely broad portfolio, and look at cyber-creativity and the future of human-AI collaboration, especially as it applies to creativity and innovation. Todd, welcome to the show.

Todd Lubart

Hi, thanks for having me on your podcast.

Mark Blackwell

Well, Todd, it's really a pleasure for me to do this. Just as an aside, when I started this podcast program back in September '25 and started talking to friends and colleagues, at least three people mentioned your name as a go-to person that I really must talk to. So I'm extremely honored, and thank you for taking the time to be on the show.

Todd Lubart

Great. Yeah.

The Seven Cs Of Creativity

Mark Blackwell

So let's focus on AI and what it means for innovation. I think right now, where are we? March 2026. A lot of us in the business world are hearing a combination of excitement and fear, uncertainty and ignorance about where AI may help us become more successful entrepreneurs, more successful innovators, and better indeed in the corporate world. And maybe we could use today as a way of bringing your knowledge and your latest research to provide a little bit more certainty on where we are as of today. Because indeed, you've described the whole idea of AI creativity as a bit of an uncharted territory. I think that really, in context of all the great charting you've done on human creativity, if you think about a map, what are the key differences?

Todd Lubart

Yeah, well, first, it might be useful to say that when we talk about AI, most people today are talking about generative AI, LLM-type systems in which you give it a prompt and it gives you stuff that it can condense from the internet and many different sources. And so you have these generic systems like ChatGPT, Gemini, and others. Then you have some more specific chat systems that people have configured with a more specific purpose. And we did look at a few of these in our research. And then you have some systems you could actually train with data banks and things like this, that in a way can actually become a judge of ideas, for example, based on the data banks you fed it. So there's a range of what we mean when we talk about it. But when we talk about creativity in general, a few years ago there was a chance to review the literature in the field, and I suggested that there are like seven topics. I called them the seven Cs, because each topic has a C word that goes with it. And the idea was if you travel across the seven Cs, you've covered the whole territory of what creativity is about. And so in the past, we were talking mostly about humans as creative agents. And we could talk about who are the creators, what's their personality, what are their skills; what's their process of creating, the second C; when they create, do they collaborate with other people, like in teams? That's the third, collaboration. Then you have the fourth, which is the context in which they work, and how does that impact their creative productions? Then you have the creative productions, we called those the creations, and what are the characteristics of those. The next C was consumption, the uptake of these new ideas and products. And the final C was curricula, how to train people to help develop their creativity. All that was the human side of things. But nowadays, we could look at each C and see how AI impacts it.
So how does AI, for example, impact the process people engage in, the creating process? How does it impact, for example, the creations? And are there new criteria that people use to judge what is creative nowadays, given that AI systems, as agents, in fact, can produce some stuff or co-create with humans? So there's a whole new set of questions around each C now that we've got AI in the system.

How AI Creates Differently

Mark Blackwell

So maybe it's a way of structuring the podcast: if you did a compare and contrast of the seven Cs for a human versus AI, are there any big things that come out?

Todd Lubart

Well, for example, the process of creating. Humans have a certain process that they tend to use. There's a lot of variability, individual differences. And there's obviously what we call creativity techniques, which help structure the way you look for ideas. And you can learn these techniques. Now, AI systems, when you use those, have a pretty different process. In fact, they kind of search for things that are connected to each other in a statistical way, and they combine things that are out there in reconfigurations. And so it is a rather different search process. It might lead to pretty different results. So that's one of the Cs that's really different.

Mark Blackwell

Mm-hmm.

Todd Lubart

In terms of, for example, creations, there's a lot of debate, because many people say the AI systems don't have quite what humans have, which is what's called the authenticity of the meaning of the creation. So an AI system creates stuff, like it can create a picture, but it doesn't really know what it is, more than a set of pixels that it combined statistically. And so, you see, what does that mean, to call it creative as a production? Because the AI system doesn't exactly know what we're doing here. And then there is this question of collaboration, another C. Before, we collaborated maybe human with human. Now you can collaborate with your AI. And that changes a lot in the way we collaborate, and which agent, meaning human agent or AI agent, is gonna do what when you collaborate. So that's another C that's extremely impacted by the emergence of AI.

Mark Blackwell

So that's a lot to unpack there. If I can go back: when I was speaking to other people in the past, I'd say, you know, in the whole creative process, the impetus to do something is pretty lacking in the AI, but strong in humans, because they see the need, you know, the frustration of a problem that they experience as a human. Then you've got the next phase, which is sort of the expansive brainstorming phase, and later we can come on to the convergence and the selection phase. But on that brainstorming phase, one of my past guests said, you know, we really too often think about that being the only part of the creative process. Whilst it's important, it's certainly not the only part. And I think a lot of the headline reports are maybe exaggerated. In that creative brainstorming phase, the human has a pretty flat curve if we look at abilities, whereas AI has a narrow bell curve. So it's very good at the average, but misses the very good and the very bad. Is that right?

Divergent Thinking At Machine Speed

Todd Lubart

Well, AI is quite productive when you ask it to give many ideas. It gives a real lot of ideas very quickly. And the ideas that it gives are sort of like a sample of what it can find on the internet. And we've done some studies together with the people working on their PhDs at the time. For example, in our team, we have Florent Vinchon and Demetrios Graminos. And if you give an AI system a request, give me a lot of ideas, it gives you a set of ideas. Then if you give it the request again, another time, it gives you a different set of ideas. And it depends also on the time of day you ask it. If you ask it, for example, in the morning in Europe, which means it's nighttime in the United States, where the server is perhaps located, it does better than if you ask it in the early evening in Europe, which is daytime in the United States, because the server is busy doing a lot of stuff. So it's actually more performant. That's kind of funny. But also in our studies, we ask a system like ChatGPT maybe a hundred times to do the task. And then we get a big set of ideas, and you'll see some of them come up quite frequently each time, and some of them are a bit rare. And so you get a distribution of ideas from ChatGPT because you asked it a hundred times. Of course, a normal human does not do such a thing. And we also ask different systems like Gemini, ChatGPT, Grok, etc., to do these kinds of tasks, and they have some variation. You know, they're not totally similar, because they had different training bases, different search algorithms. Another thing you can do is influence what we call the temperature: if you add temperature to your search, it heats up the tendency to jump around in the database. And so you get more random stuff, which might be interesting for divergent thinking and creative thinking. So, you know, there's a lot to it, but it's quite strong in divergence, which humans have trouble with.
In the past, a lot of creative thinking techniques were used to help humans get more ideas, get out of the initial rut. And so these systems might be a really good technique, in a way, for us. And in the past, humans also had to evaluate the best ideas. But since they didn't have too many ideas, that went pretty quickly. Now they have a lot of ideas, and that's become a new important skill, we'll say. It was always there, but there wasn't much to evaluate, so it wasn't much of a skill. And the thing is that there's a huge tendency now: you ask it, it gives you a lot of ideas, you say, wow. You don't think up your own, you just start to select a few. So, you know, there's a changing landscape in how we interact with our, let's say, support tools, and how that impacts the actual outcome.
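The repeated-sampling and temperature behaviour Todd describes can be sketched in a few lines. This is a toy simulation, not a call to any real chatbot: the idea labels and preference scores are invented, and the softmax scaling shown is the standard mechanism behind the temperature setting that most LLM APIs expose.

```python
import math
import random
from collections import Counter

def softmax(logits, temperature=1.0):
    # Higher temperature flattens the distribution, so rarer ideas
    # are sampled more often; lower temperature sharpens it toward
    # the model's favourite answers.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Invented "ideas" with invented model preference scores (logits).
ideas = ["umbrella hat", "plant pot", "boat anchor", "sound mirror"]
logits = [3.0, 2.0, 0.5, 0.1]

random.seed(0)
for temp in (0.5, 1.5):
    probs = softmax(logits, temperature=temp)
    # Ask the "model" a hundred times, as in the study, and tally
    # how often each idea comes up: frequent ones vs. rare ones.
    draws = Counter(random.choices(ideas, weights=probs, k=100))
    print(temp, draws.most_common())
```

At low temperature the top idea dominates the hundred draws; at high temperature the tallies spread out, which is the "more random stuff" that can feed divergent thinking.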

Mark Blackwell

So I get it that humans are probably strong on the impetus, you know, let's start this innovation project, in a way that AI isn't there yet. But am I hearing from what you're saying that, at the end, another area where humans may find an advantage over the current state of AI is in selecting and prioritizing, refining, or maybe even hybridizing these ideas? Is that a fair statement?

Better Prompts Start With Better Questions

Todd Lubart

Yeah, that is an aspect that, as humans, we could invest more of our energy in, because it's a new opportunity, we'll say. There's also, of course, the opportunity to ask questions of these kinds of AI systems, and the way you formulate the prompt impacts a lot what you get. In fact, people tend to formulate a prompt, ask one question, and it gives stuff. Well, if you start to ask a lot of questions, you'll see that it gives various stuff. In the literature, that's never been recognized as a key ability for creative thinking: asking the questions, setting the problem up to be solved. And you have famous citations, like a problem well posed is half-solved, and so forth, by John Dewey, etc. But now it's really become an evident skill to develop for creative thinking. In one paper that we did with Sudopa Champunich, which is available online, we looked at lots of specialized GPT systems that people have produced and put out there on the internet. And some of these actually contain various helps to formulate problems. Like there's one called AHA Apple, and there's a function in there: help me generate possible ways to ask my problem statement. And then you can investigate those. There's also a function: help me get lots of ideas using various creativity techniques. So it's a very exciting moment, because we've opened up the door to a whole new phase of creative thinking and what we call co-creation.

Mark Blackwell

This is really interesting. So by relieving the bottleneck that you observe in humans' divergent thinking with AI, we're now able to rethink the whole creativity process. And what you're saying about defining the problem, or restating the problem, echoes a conversation I had with Roni very recently: teams that spend the majority of their time defining the problem really perform better at the end of the day, with or without AI. That wasn't an AI-related finding, but it echoes the same idea. It also reminds me of the quote that, as I sadly discovered, Winston Churchill never said: if we had an hour to solve a problem, I would spend 55 minutes thinking about it. But your research is coming at it from different directions to say that there might have been an element of truth in that. So that's good. So are there any other aspects of the way we should work with AI solutions across the whole process that you would like to comment on?

Todd Lubart

How should we work with AI solutions?

Mark Blackwell

Well, for example, and just to trigger a thought in your mind: I had a great conversation with Vlad Glăveanu, and he was talking about slow AI, some work he's doing with Ron Beghetto. Which is, you know, we should try to shift this: instead of using AI just for answers, and just the rush of that, think about using my AI for stimulating questions. So giving the prompt back at us, as it were, which relieves some of the blocks in our own creative processes, and thinking about ways for man and machine to be more productive together, rather than either/or.

Using A Team Of Chatbots

Todd Lubart

Yeah. Well, this is a very good question and topic. And in fact, it depends a little on the domain or task that you're trying to do. So there's not exactly one solution for everything. But from what we've found, a key is to say: let me try out different ways to incorporate AI tools in my creative process. Let me see where it is useful and where it is maybe even harmful or of no particular use, and which tools are the most interesting for me, which chatbot, for example. So is one provider better than another provider? And that might change as these providers update their systems. And also there is the possibility to get a team of AI systems, as if they were members of your team, and you give the task out to several of them, and they all give ideas, and you try to see which is the best way to combine them. So, in which phase, let's say, of creative thinking is the AI system the most valuable? In several phases, perhaps, maybe with different tools in each phase, maybe ChatGPT in one phase, Gemini in another phase. So there's a kind of exploratory path to be taken in each task that you'd like to try to upgrade in a business setting. And the other thing is that, of course, you might have a position where you don't want to outsource every phase. You want some to be kept for humans because it's a high-stakes decision, or you want to say that humans have the ownership of the ideas coming out of a certain phase. It's human-supervised, or it's human-done, for example. It's part of a global strategy. So this leads to the whole notion that how you position with AI is not just a random, let's-try-that-and-it-worked-and-we're-happy kind of approach. And it's also going to become part of the organizational culture, in fact.

Mark Blackwell

And so clearly we're in the learning phase, the experimentation phase. If there's a CEO or a head of innovation or a business leader listening to this podcast and thinking about the four stages of the creative process, you know, the initial motivation to start, the divergent, the convergent, and the selection: where would you say is the biggest, easiest, quickest win right now that you'd encourage them to go for? And where would you give the most caution?

Can AI Judge Creativity Reliably

Todd Lubart

Well, probably the quickest win is in the generation, divergent-type phase. And that's because the AI systems offer a process that's quite different from the human one, probably. So it can bring you some stuff you wouldn't have already gotten, we'll say. And so it's potentially a good value-add. Let's say in the evaluation, you can train an AI system to do pretty well. If you ask it to evaluate without any training or guidelines, that can get a bit messy and problematic. And in fact, in the implementation, and developing the idea for implementation, what you need to do is usually very contextualized. And so things can go quite wrong in that phase, too. But let's remember, generally speaking, there is a prompt in the beginning in which you set it in a direction. And so asking the right question, as we mentioned, is a key thing. So I think the quickest win is in the generative phase at the moment.

Mark Blackwell

So I think there are clearly advantages. But I think we all must be aware: most of us have fallen into the trap of using AI lazily and letting our brains run the risk of atrophy on a problem. So we've got that problem. But you've also spoken about plagiarism 3.0 as a risk that we might need to think about.

Todd Lubart

Yeah. Well, in one of the papers, in which Florent Vinchon and a lot of colleagues were associated, we called it the Manifesto for Human-AI Creativity. And we generated, let's say, four possible scenarios. One was co-creation, which we mentioned, which we thought needs to be favored. One is plagiarism 3.0, which means you can plagiarize stuff, but even better, thanks to AI tools. You can just ask AI to do it, and it looks good, and you say: here it is, thank you, I did it. Now there are some tools to help detect that AI did something, but still, it is a lazy way to get something to hand in. And as we know, humans are a pretty lazy species. It's very appealing to a lot of people. And they don't necessarily say so; they might say, I used some AI to help me, but to what extent was the help like 100% help, in fact? So that is a huge risk at the moment. The third scenario was that people will say, I did it all by myself, a real human, like the old way before AI came, and they'll make that a value point. And the last one is just what we call shutdown, which means they say: since AI can do it, I don't have to worry about being creative anymore. Just let my AI system do it, and it is what it is. But in reality, the AI systems don't do that great. They can get a really good idea sometimes, but it's not every day, in fact.

Mark Blackwell

Yeah. So that was one of the points you made: they appear to be fluent and good, but they're only average performers, and we should be aware of that. It appears good, but on probing, maybe it isn't as good as it seems. So finding that balance is wise. And I did take that hint from you that, as a European, I should be doing more of my work in the morning, European time. That's one for my notebook. But there's another angle on finding good differences between humans and machines. You talked about how we still, as human beings, have a sense of humor. And I couldn't believe it: you told me that you've done some research showing that having a sense of humor in your prompts actually generates some interesting results in creativity. What's going on, and how should people think about that?

Co-Creation Versus Plagiarism 3.0

Todd Lubart

Okay, well, that was a line of work that was done by Demetrios Graminos, and I was involved. We had AI systems generate stories based on a certain topic. And then we wanted to give them feedback to see if they would do better. And so one of the feedback methods was to tell them they should try to introduce humor. This was to get the story better, and we wanted to see if each system would actually get more creative when it tried to improve itself with our suggestions. Why was humor an interesting suggestion? Because, in fact, according to some authors, notably Arthur Koestler, humor itself is the combination of several thought matrices that don't normally go together. And so they create a kind of unusual moment which is unexpected, and therefore surprising and somewhat innovative and funny. And so to introduce humor is to request an unexpected turn of events in a way that is also engaging, amusing, and so forth. For story generation, it's a prompt that does help these systems. Although they're not that great at humor, occasionally it does help them get out of a rut. Because when you ask a system to generate a story, a lot of times it finds a mix of average stories on the internet, let's say, about the topic of interest, and it makes a kind of new twist on an old storyline. And so it's a little bit creative, but not that much. And so it's hard to get these systems to break out of their, let's say, vortex of spinning around what's on the internet with a slight modification.

Mark Blackwell

So can you give me an example of what you mean by humor in a prompt? Is it seemingly impossible situations? Tell me, I'm trying to imagine, but I would love to hear what you say.

Todd Lubart

Well, if you give AI a task to do and it generates some output, then you say: okay, that's somewhat interesting, but it's pretty typical, and there's nothing in there that's funny, that's like a moment of surprise, that's almost integrating a kind of humoristic twist. So can you do that? Try to do that. So we don't give it jokes and say, here's a joke, try to put it in. We ask it to try to engage in a twist of the story that would be surprising and humorous, and it occasionally comes up with something interesting. So it's a way to help the system learn. In other words, the human becomes the tutor of the AI. And also, what's funny is very much a human judgment. So it's just an example of a way to improve the AI systems.

Mark Blackwell

It's really about how the human does the prompting to try to get the humor element that you think is so valuable in getting creativity out of a system, which is very useful. But that leads me to a broader question about prompting. You're suggesting another approach, which is to actually use multiple different AI machines, prompt several of them, and see how they get on. Can you tell me more about that?

Todd Lubart

Sure. Well, in a certain study with Demetrios, we had all these different AI chatbots, and they each did the task. And then what came back from all of them was put together in a composite file of replies, and it was submitted to all of them to try to either do better than what was in the composite file, or to pick and mix and combine stuff from the composite file. And it turned out that they can improve on what was already done, but in particular when you're in the pick-and-mix mode, which in fact is kind of the specialty of these chatbots, based on their algorithms. When you ask them to just do better, well, they often go back and find kind of the same stuff, or even stuff that's worse. This concept of better is not so clear to them, in fact.
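The composite-file procedure described here can be sketched as a two-step pipeline. The model names and ideas below are placeholders; in the study, each reply would come from a real chatbot, and the recombination step would itself be delegated to the chatbots rather than done mechanically, as it is in this sketch:

```python
from itertools import combinations

# Placeholder outputs from three different chatbots on the same prompt.
replies = {
    "model_a": ["solar backpack", "folding drone"],
    "model_b": ["drone umbrella"],
    "model_c": ["backpack tent", "solar kettle"],
}

# Step 1: merge every model's replies into one composite file.
composite = [idea for ideas in replies.values() for idea in ideas]

# Step 2: "pick and mix" by recombining pairs of ideas from the
# composite pool, the recombination mode the chatbots proved best at.
hybrids = [f"{a} + {b}" for a, b in combinations(composite, 2)]

print(len(composite), len(hybrids))  # 5 ideas yield 10 pairwise hybrids
```

The point of the sketch is the structure: pooling first, then recombining across providers, rather than asking any single model to "do better" on its own output.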

Mark Blackwell

So I'm getting a sense of hybrid vigor. If you use Gemini and Claude and ChatGPT and mix them all together, then you get something which is better than the sum of the parts. Now that's interesting, and something I'll certainly try out at home. I like the idea. But I'm getting a message, as I talk to people and read up, that AI is good at generating ideas. I mean, generative AI, the name itself would seem to suggest that. But what about scoring? Is that an area where human judgment still holds the power, or can these AI machines actually evaluate different outputs and work out what's better? I think it might be a bit connected to what you've just said.

Rubrics, Training Data, And Gold Standards

Todd Lubart

Well, AI can do a pretty reasonable job as a judge. There are a couple of different modes people are trying in the research. In some cases, they just give it a bunch of ideas, let's say that humans generated, and they say, can you evaluate these? This is the most open-ended approach, and usually not the best. If you give the AI a judging framework or rubric and say: use this scale; here's what I mean by each number from one to seven; here's an example of a one, of a two, of a seven, and this is why it's a one, a two, or a seven in creativity; then they do a pretty reasonable job. Now, of course, you can ask several chatbots to be judges of the same material, and then you can average the chatbots. So you can have Gemini, Claude, ChatGPT, et cetera, each as a judge, like a judging team, like humans. If you do that and you take their average, you'll find that the chatbot average correlates with a set of averaged human judges at probably about 0.8, which is a pretty good correlation, but it's not perfect at all. And so there's still some potential for missing the good ones, we'll say. And now some people have given chatbots a whole bunch of human productions scored by human judges, like thousands of things. And they tell the chatbot to learn to be a good judge based on this database, which is huge. And it's for a certain task, by the way; it's not about everything. So you give people a certain story task and you train a chatbot, and it becomes a really good judge, because it has this specific database, you see. So the use of chatbot judges trained on a specific large human database scored by humans is a tendency now. And it actually is a pretty good strategy if you want to avoid the cost and time and difficulty of finding qualified human judges. But a panel of qualified human judges remains sort of what we call the gold standard.

Mark Blackwell

And of course, implicit in what you're saying is that the human must create the measurement system: what constitutes a one, what constitutes a two, which involves quite a lot of creativity and judgment in its own right. So you're not suggesting that they replace humans, but in trawling through thousands and thousands of data points, there could be real value.

Todd Lubart

Well, that's totally right. If you let a chatbot create its own judgment system, you might agree with it, but you might not. But then again, do we care what a chatbot thinks is most creative for its own self? Because ultimately, the user or consumer of the judgments is humans. If chatbots like something because they're chatbots, or if you say something mean about computers and the chatbots don't like it, that's a chatbot judgment issue. It's not a human judgment issue.

Mark Blackwell

Well, yeah, it's interesting, because as we start thinking about the relevance of this for commercial organizations, it's about innovating and developing value propositions. And that judgment is in the eye of the beholder, which is the consumer, not an AI chatbot. It's so easy to forget that. But again, coming back to a broader area, you're pointing out the value of having a very rigorous innovation process. And I think business leaders who aren't so familiar with innovation think about it as, you know, a bit airy-fairy; it's sort of emotional, it's not a rigorous process. But in your career, you've argued against that. Can you give me some examples of the tools that you use which help us bring real hard rigor to the science of innovation?

Benchmarks To Reduce Bias

Todd Lubart

Well, over the years, we've worked to develop creativity assessments. You can either assess things people have produced in a real task, or you can give them a mock task and say: tell us your ideas for this hypothetical situation. Then we compare their answers. We develop scoring rubrics with judges, and we show that the judges can be consistent with each other. The whole point is that we don't just say "be creative about anything"; we give them a certain job to do. This lets us compare people or teams, because they're actually doing the same thing. If everybody's doing their own different thing, it becomes more difficult, though still possible. We also develop benchmarks, like what an average performance is, so that when a person or team does the task in the future, we can compare them to this database of what people did before. This benchmarking process is quite important in the judgement of innovative, creative ideas. Of course, you'll never necessarily see exactly the same idea coming up again and again. You could, but you have to be able to judge things as similar to an idea that previously got, say, an average score, so you need a kind of family-resemblance judgement, and humans are quite good at this, in fact. So our measurement technology involves generating norms or benchmarks. In a business setting, people might have only two or three ideas in front of them. They don't have any real benchmark, so the judgement they give is more of an opinion, with some potential measurement error, because they could be biased. If they've just seen a really bad idea and then get an average one, they say, wow, it's a lot better, because they were biased by the really bad thing they just saw. That kind of problem is avoided when you develop benchmarks and actually train people to be good judges.
I'll add that we've actually trained people and found they really do improve as judges when you give them training.
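The benchmarking idea Todd describes can be sketched in code: rather than judging a new idea only against the two or three ideas sitting next to it, you place its rating against a database of past ratings. A minimal illustration (all numbers hypothetical, not from Todd's actual assessments):

```python
from statistics import mean, stdev

def benchmark_score(new_rating: float, norms: list[float]) -> dict:
    """Place a single judged rating against a benchmark database of past ratings.

    Returns the z-score (distance from the norm mean, in standard deviations)
    and the percentile (share of past ratings the new rating exceeds).
    """
    mu, sigma = mean(norms), stdev(norms)
    z = (new_rating - mu) / sigma
    percentile = 100 * sum(r < new_rating for r in norms) / len(norms)
    return {"z": round(z, 2), "percentile": round(percentile, 1)}

# Hypothetical benchmark: creativity ratings (1-7 scale) collected in past sessions
norms = [3.1, 4.0, 3.6, 4.4, 2.9, 5.2, 3.8, 4.1, 3.3, 4.7]

# A new idea rated 5.0 is judged against the whole database, not just the
# handful of ideas seen alongside it -- which is what guards against the
# recency bias Todd mentions (a bad idea making the next one look great).
print(benchmark_score(5.0, norms))
```

The point is not the arithmetic but the design: the comparison set is fixed and large, so the verdict on any one idea doesn't swing with whatever happened to be judged just before it.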

Mark Blackwell

That's really interesting. I've seen this several times in my career. You talked about recency-effect bias, but one of the big biases out there is the pet project of the boss. If the boss says something, the team typically has a huge bias: oh yes, boss, brilliant, well done, and it can go down a disastrous innovation path. So anything that limits that and makes selecting and judging more rigorous would be a vast improvement. So, wrapping up the conversation: we've had a great tour, and I really welcome hard, data-based science on where we are in early 2026 on using AI for innovation. We've learned some simple little tools, like if you're in Europe, do it in the morning. We've learned how tools can be creative. And I love the idea of using multiple different generative AI systems and trying to build something. But if there's a business leader listening to this who's still a little bit cynical about the role of AI in innovation, what would you say they need to think about?

Practical Advice For Business Leaders

Todd Lubart

Well, I'd say that every use case is a bit specific. Depending on the industry you're in and the kind of market or problems you're facing, the same solution isn't necessarily going to be optimal. So first of all, it's maybe not optimal that everybody just uses ChatGPT and that's it. It's maybe not optimal that you use it in every part of what you do. I think the key is to explore and try various tools in various parts of your thinking process, to have a large exploratory phase of testing what's out there. Maybe creating a specialised chatbot is also an option. Maybe using a team of chatbots, like we mentioned, drawing on the various available options, and seeing how to optimise the value added. The value added is potentially positive, potentially zero, or potentially negative. And lately there's a whole big discussion about what they call slop: you give your chatbot something to do, and it gives you back something that looks pretty okay. On the surface it reads fine, but when you actually know the topic, you realise the answers are sloppy and you've got to go fix them. People are saying it takes them as much time as if they'd done it themselves, because they have to fix the mess that's hidden underneath. So there are all the possible outcomes, so to speak: the good, the bad, and the ugly. And it's a matter of collective intelligence to decide how best to profit from it.

Mark Blackwell

Got it. So we're in the early stages, Todd. Thank you, it's been really interesting. I'm convinced this is going to be part of the future. We've got to avoid the hype and avoid the sceptics; I can see a journey between the two that means AI is going to help bring creativity and innovation into the workplace. So thank you very much for that. If any of our listeners want to find out more about your work, we'll put everything in the show notes. But is there anywhere you could point them to as they're listening right now?

Todd Lubart

Well, the simplest thing is just to send me an email: todd.lubar at gmail.com. I have a university address too, of course, and sometimes it actually works and sometimes it doesn't. But anyway, yeah.

Contact Details And Closing

Mark Blackwell

No worries about that. We'll also put your LinkedIn everywhere. And Todd, I'd love to stay in touch and keep up to date with what you're doing. I see this as a really exciting area, and I'm sure that in six or twelve months' time we're going to have a great advance on this. So thank you very much for your time, Todd.

Todd Lubart

Thanks, Mark.

Mark Blackwell

So if you found this episode valuable, please do subscribe to Arkaro Insights and share it with a colleague who's trying to learn more about this, to be successful and thrive in a complex adaptive world. I'm Mark Blackwell, and we'll see you in the next episode of the Arkaro Insights podcast. Thank you.

Podcasts we love

Check out these other fine podcasts recommended by us, not an algorithm.

Just Great People (The Sixsess Consultancy)

HBS Managing the Future of Work (Harvard Business School)