Shaken Not Burned

AI is powerful, but why is transformation so hard? With University of Exeter

Felicia Jackson and Giulia Bottaro Season 6 Episode 6

AI is becoming one of those topics where the scale of the claims can make it surprisingly difficult to work out what is actually happening.

We are told it will transform business, unlock extraordinary productivity gains, reshape jobs, and even help solve major global challenges like climate change. At the same time, there are growing concerns about energy demand, governance failures, bias, job losses, and the sheer speed at which these systems are developing.

The dominant narrative tends to swing between utopian optimism and existential fear, often without spending enough time on a more practical question: what actually happens when AI is introduced into real organisations, real systems, and real decision-making? This is the focus of this week’s episode, which is the first in our AI series.

Rather than debating whether AI is inherently good or bad, Felicia Jackson speaks with Professor Saeema Ahmed-Kristensen, Associate Pro Vice Chancellor for Research and Impact at the University of Exeter and Director of DIGIT Lab, about something much more grounded: why so many digital and AI transformation efforts struggle in practice and what that reveals about the limits of technology alone.

One of the most useful distinctions in the conversation was between problems that are well-defined and those that are not. AI is particularly powerful when objectives are clear, data is available, and success can be measured relatively easily. In those contexts - pattern recognition, diagnostics, optimisation - it can offer extraordinary value. But many of the most important challenges organisations face are different.

Sustainability, climate strategy, major organisational change, and social systems are messy, politically embedded and filled with trade-offs. They are often what researchers describe as wicked problems: issues where there is no single right answer, where choices create consequences elsewhere, and where uncertainty is part of the challenge itself.

That distinction matters because it shifts the conversation. It suggests that AI may be extremely useful in supporting parts of decision-making, but it does not remove the need for human judgment. In fact, in many cases, it may make governance, accountability, and strategic clarity even more important.

AI is powerful, but power is not wisdom. Better tools do not automatically create better outcomes. What they do is make it even more important to understand what kind of organisations, systems and governance structures are capable of using them responsibly. As this new Shaken Not Burned AI arc begins, that feels like the right place to start.

If you enjoyed this episode, subscribe to our newsletter, follow us on LinkedIn, TikTok and Instagram, and why not spread the word with your friends and colleagues?

Felicia Jackson (00:35)
Hello and welcome back to Shaken Not Burned. Today we're going to be talking about AI. We're told it will replace our jobs, transform organizations and even help solve today's sustainability and climate challenges. Yet at the same time we keep seeing reports that a large number of AI and digital transformation initiatives fail, particularly in large established organizations.

We know that AI adoption is driving significant energy and resource use, concentrating power, even though there's a deep uncertainty about where all this is heading and where we're going to end up. And yet, again, the dominant belief is that AI can simplify complexity. It can improve our decision making and it can optimize outcomes so that we'll be solving hard problems with better tools.

In the climate space AI can improve climate modeling, track emissions, monitor deforestation and land use, optimize energy systems; it can help with all of those things. What it can't do is decide which trade-offs are acceptable. It can't resolve political or social conflict or manage long-term uncertainty. This is one of the issues. Climate change isn't just a technical problem. It's a problem of choices, accountability and coordination across society. So AI may be able to support that work, but can it replace it? Today we're going to be looking at that contradiction. So rather than looking at AI as a standalone technology, we're going to look at it inside the system:

how it's embedded in organisational processes, innovation and decision making. And that matters because sustainability is messy. The challenges don't have clear endpoints or quick feedback or even single right answers. It's trade-offs across time, people and systems, exactly the kind of problems where over-reliance on AI and data can be misleading. So my guest today is Professor Saeema Ahmed-Kristensen, Associate Pro Vice Chancellor for Research and Impact at the University of Exeter and Director of DIGIT Lab, which studies how digital technologies actually work in practice inside large organisations. So we're not going to be having a conversation about whether AI is good or bad, more about what AI reveals about judgment, trust, responsibility and why better tools alone don't produce better outcomes. So, Saeema, welcome to the show.

Saeema (02:53)
Thank you, Felicia. Thank you for having me on. Such an interesting start there.

Felicia Jackson (02:57)
I'm very glad.

Now, we hear a lot about what AI could do. In terms of your research, in terms of the work that you've done, why do you think it is that so many of these AI and digital transformation initiatives struggle or fail to deliver in practice?

Saeema (03:17)
I think the way you framed the episode today almost hits it on the head. So it's the systems inside: understanding AI not on its own as a tool, but how it exists within a system. Now in digital transformation, there's been some work on the barriers to digital transformation,

and they largely speak to the struggles around AI too. And these are leadership and skills, so it's humans understanding how and when to use AI and the AI tools. But it's also about the infrastructure and the readiness of organizations. Have they got their data in order? Have they organized themselves so they can take good advantage of AI tools? And

also to understand, which is the most important one, the value: understanding what AI tools can do, how they can do it and what they can't do is one of the key barriers. So those four barriers are true for most digital technologies, which also include AI. And I think recently what's happened is AI tools have shifted. Of course there's been work on developing AI, in many different forms (fuzzy logic, neural networks, et cetera), for decades.

But recently, those barriers to accessing tools have been taken away because of general AI. So it's quite easily available, yet you don't really know how and when you can use it. So this still requires some work.

Felicia Jackson (04:44)
Now that to me is an utterly fascinating topic because it's about decision making. It's about: are we actually identifying a problem that needs solving, or are we doing that thing of throwing technology at the wall and seeing what sticks? I realize that's a slightly spaghetti analogy, but there does an awful lot of the time seem to be that thing of: we're going to do AI

as a company, we're going to be more efficient. And I believe there was research which said that 94% of AI implementations fail. So do you think this is about failure to understand the problem? Do you think it's about failure to integrate the technology appropriately? Or do you think it's actually about how organizations work?

Saeema (05:29)
You've given me three questions with three answers in there, and probably a little bit more. So with AI, the first bit, going back to your spaghetti analogy: that's how we think of product development. Sometimes you're solving a problem, and sometimes you've got technology and then you're looking for what you can do with that technology. So many organizations fear being left behind.

Felicia Jackson (05:32)
Sorry about that.

Saeema (05:52)
So one is: if AI exists, there must be something that you can do with AI that is relevant, and if you don't do that, you're going to be left behind while the rest of the sector is changing. So that's the technology push bit. Now the second bit, about the problem solving, that is tricky, because this goes back to understanding what the value of the tools is, and of course AI is evolving and it can do different things.

And then that differs by domain or sector. In some domains, the privacy of data, the security and the trust of the systems are really, really important. And that's probably true of most organizations, in fact.

My own work is mainly in manufacturing and in design; my background is design. In DIGIT Lab we go across that: we look at professional tech and professional services and support. We also look at healthcare a little bit in there, and we've also looked at the creative industries. And for each of these sectors there are different interests in how and where you use AI.

Felicia Jackson (06:46)
That'd be a big one.

Saeema (06:56)
Creative sectors are quite interested in content creation: how can you create content? And manufacturing industries may be looking at optimization for little parts that are going to be integrated into products. But then the security of that data is very important. And trust in whatever is being used in these algorithms is really important. So there are different understandings.

But that's where the challenge is. We don't currently know, necessarily, where AI is best in the organisation and where you need human beings in that organisation. So where does the decision making take place?

Felicia Jackson (07:35)
I think that for me is one of the key questions. Where does it actually sit? Does it sit in different places in different organisations? Because it seems to cross over, and this is a sustainability issue as well actually, so I can see a lot of analogies: part of the challenge is that decision making sits across different departments and different lines of responsibility. And I can imagine AI sitting in innovation, in operations.

So where does it sit?

Saeema (08:02)
I think that answer is not a generic answer, because it's very dependent on the industry that you're working with. So if I take an example: if we look at design and manufacturing, which is my own background, in some of our research we've looked at using large language models, the more readily available tools, to generate product ideas or service ideas. And we've also looked at using them to understand

Felicia Jackson (08:07)
Okay.

Saeema (08:26)
how you can frame a problem. So part of design is understanding what the problem is. In manufacturing industries, you'll call this requirements or specifications. In software, it's typically also requirements and specification. In design consultancies, it might be a brief. Now as designers or manufacturers, you change context. One day you may be trying to design a service or a product which is for dementia patients,

or it may be for their carers. And another day you may be doing something in a completely different domain. So you're switching all the time. And what we found is that where large language models are particularly good is in that framing of the problem. They can easily search, because that data already exists somewhere. Imagine if you were a human being, a designer, doing that: you would have to search and understand everything around the problem. So this typically is done by either observing users

or doing interviews with users or doing focus groups. And then you understand the challenges currently being faced and the needs, to develop a product or service. But you could shortcut that by using AI, because you can find this information and the resources that are available. So that's one really powerful place to put it, right at that front end of understanding what the challenges are. Sometimes it's called task clarification, sometimes

problem framing, or just developing the brief. On the second bit, the innovation side: when we use large language models to generate ideas, what we found here is that human beings are far better. I should contextualize that, because one way of measuring creativity is the number of ideas you produce. And for that, large language models can produce a lot. In one minute, they could produce four, five, six, seven; in two or three minutes, probably 10. Whereas you and I probably could

produce far fewer in a couple of minutes, depending on the challenge. However, looking at the quality of those ideas and how innovative they are, they tend to be around the same solutions, because the way AI works is around the data that's available. Of course, there's reasoning taking place and deep learning taking place; it could create solutions that are different from the ones that exist,

Felicia Jackson (10:25)
right.

Saeema (10:38)
but it can't completely jump out of the box and find something that's completely different. And that's where human beings are really powerful. So in design, there are two elements of that. One element is just being very innovative, being able to find a solution that's completely different. An example I can give you is that large language model versus human beings study we looked at: those 12,000 ideas, where we had 600 human beings develop the ideas. And what we could see

is that the variety of ideas that they produce, so how different they are from each other, was far greater in the pool of human beings than the large language models. So what this means is that incremental innovation is quite powerful for large language models, whereas radical innovation, where you're coming up with a completely different type of product or service,

Felicia Jackson (11:21)
right?

Saeema (11:28)
currently sits with human beings, particularly creative human beings. So obviously there's training and knowledge and a skill set to nurture that. So I think that's where, in the creative industries, I can see you can use AI to sit alongside human beings.
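
To make that "variety" point concrete: one crude way to score how different a pool of ideas is from itself is mean pairwise dissimilarity. The sketch below is not from the study; it uses Python's difflib as a dependency-free stand-in for a real similarity measure, and the idea pools are invented.

```python
# Rough sketch: score an idea pool by mean pairwise dissimilarity.
# difflib is only a stand-in for a proper similarity model.
from difflib import SequenceMatcher
from itertools import combinations

def variety(ideas: list[str]) -> float:
    """Mean pairwise dissimilarity in [0, 1]; higher means a more varied pool."""
    pairs = list(combinations(ideas, 2))
    if not pairs:
        return 0.0
    return sum(1 - SequenceMatcher(None, a, b).ratio() for a, b in pairs) / len(pairs)

# Invented examples: near-duplicate variants versus genuinely different ideas.
llm_pool = ["a speaker that follows you around the house",
            "a smart speaker that moves with you room to room",
            "a portable speaker that tracks your location"]
human_pool = ["wallpaper you touch to play notes",
              "a neighbourhood choir on demand",
              "shoes that drum as you walk"]
print(variety(llm_pool) < variety(human_pool))  # expect True: the human pool is more varied
```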

Felicia Jackson (11:44)
The thing that fascinates me about that is climate change, to be blunt, because climate change is the outcome of a system of decision making that works the way it's supposed to, because it's built on extraction and exploitation. The need for radical solutions in something like that is clear, but obviously no one quite knows what that's going to look like. So actually,

we're looking to AI for solutions to one of the biggest problems we have, and yet the research is showing that the most imaginative, the widest variety of ideas is going to come from human beings. So perhaps what we need is a way of combining these two approaches. And I think this would be a really good time to explore what we actually mean when we talk about the different problems that

AI can solve or that human beings can solve, because you were talking about the differences perhaps between design and manufacturing and creative design. Maybe one of the ways of looking at this is trying to explore the difference between problems which are really well defined and problems which are badly defined, where we've got fuzzy edges, because it seems to me that the fuzzy edges are more of a people thing. Can you talk me through

what that difference is.

Saeema (12:57)
Yeah, in psychology they refer to ill-defined problems and well-defined problems. For well-defined problems, a good example is chess. Most people understand what a chessboard looks like: you have chess pieces and they're laid out. There's a very clear starting position. And then there's a very clear goal, which is to capture the king.

Felicia Jackson (13:08)
feeling.

Saeema (13:18)
And then there are rules of how each piece moves. So this is the starting point, the goal and the operators, the rules for each piece. That's a well-defined problem. If you move to ill-defined problems, design is a good example. The starting point is unclear: you have to first gather all the information. So if you had a task of, let's say, designing a way to

play music, you first need to find all the information that's available: what the needs are of the people, which context, the different ways that you can play music. And then you've got more than one way to come up with an idea, so you could generate different solutions. There's no one correct way to be creative; in fact, perhaps being creative is finding a way that doesn't exist yet. So that is very, very different.
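
In code terms, what makes a problem "well-defined" in this sense is that all three pieces, the start state, the goal test and the operators, can be written down explicitly, at which point a machine can search the space exhaustively. A toy sketch (a number game rather than chess, purely for brevity; nothing here comes from the episode):

```python
# Toy "well-defined problem": explicit start state, goal test and operators.
from dataclasses import dataclass
from typing import Callable, Iterable, Optional

@dataclass
class WellDefinedProblem:
    start: int                                  # clear starting position
    is_goal: Callable[[int], bool]              # clear objective
    operators: Iterable[Callable[[int], int]]   # the legal "moves"

def solve(p: WellDefinedProblem, max_depth: int = 10) -> Optional[list[int]]:
    """Breadth-first search, possible only because all three pieces are explicit."""
    frontier = [[p.start]]
    for _ in range(max_depth):
        nxt = []
        for path in frontier:
            if p.is_goal(path[-1]):
                return path
            nxt += [path + [op(path[-1])] for op in p.operators]
        frontier = nxt
    return None

# Reach 10 from 1 when the only legal moves are "double" and "add one".
game = WellDefinedProblem(start=1, is_goal=lambda s: s == 10,
                          operators=[lambda s: s * 2, lambda s: s + 1])
print(solve(game))  # e.g. [1, 2, 4, 5, 10]
```

An ill-defined design brief gives you none of those three pieces, which is exactly why the same brute-force approach has nothing to grip onto.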

Felicia Jackson (14:02)
Yeah.

Saeema (14:06)
If I set that exercise to a bunch of students or designers, hopefully with 100 students we would come up with 100 different solutions. So there's no one right answer. That ill-defined area makes it much more challenging, because you're starting unclear, you've got multiple solutions, and you've got multiple ways to reach a solution. And that is the key difference between the two. So when you've got well-defined problems, you've got very clear data.

So you've got clear starting points and you've got a clear objective that you've got to reach. That is an easier problem for AI. We've seen examples with image recognition. For example, in dermatology, looking at training how you identify potential skin cancers: you could feed in lots of images and then you can train this very well by understanding

Felicia Jackson (14:39)
Okay.

Saeema (14:54)
what the current different types of images or different types of symptoms are. And your AI can be better than the human being, because it's actually being exposed to more problems really, really quickly, and to more types of potential symptoms, than a doctor who's only got a year or two of experience would have seen. But if you move to the creative one, the ill-defined problems, it is less clear what the objective is

for the AI. So that's one of the key differences in the types of problem solving. And there I'm just talking ill-defined versus well-defined; you can then extend that to talk about messy problems, complex problems and wicked problems. And I think when you're talking about the sustainability bit, this is a bit of a wicked problem, because you're using the AI to try and improve something and solve a problem faster, which potentially could be a more sustainable solution,

Felicia Jackson (15:27)
Mm.

Saeema (15:49)
but you are generating impact on the environment while you're doing that. So it's a wicked problem, because there's really no one clear, easy answer in there.
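
For the dermatology example above, the well-defined setup translates into a very standard supervised-learning loop: labelled images in, a classifier out. A minimal sketch using off-the-shelf transfer learning; the folder name is hypothetical and this is illustrative, not the setup of any study mentioned in the episode.

```python
# Sketch of the well-defined image-recognition setup: labelled images in,
# a trained classifier out. "skin_images/" is a hypothetical folder with
# one subfolder per diagnosis class; a single training pass, for brevity.
import torch
from torch import nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
data = datasets.ImageFolder("skin_images/", transform=tfm)
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

model = models.resnet18(weights="IMAGENET1K_V1")               # pretrained backbone
model.fc = nn.Linear(model.fc.in_features, len(data.classes))  # new classification head
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    opt.zero_grad()
    loss = loss_fn(model(images), labels)  # clear objective: match the expert labels
    loss.backward()
    opt.step()
```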

Felicia Jackson (15:58)
I do think that the definition of a wicked problem is an incredibly important part of the conversation. And for our listeners, I will add a bit of an explanation to the show notes, but it's very much about the idea that it's really sticky. It's embedded in what happens in our world, in our society. And often the people trying to solve it are the people who started the problem in the first place, which brings up a tension about how you're actually supposed to address these things.

And I think that continuum of the type of problem we're talking about is really important to our understanding of AI because what we're talking about is the importance of understanding why AI performs better on some problems than others. And to use the analogy that you used about the photographs in dermatology, would I be right in saying that where AI performs best is when it is about

specificity, optimization, pattern recognition.

Saeema (16:53)
Yeah, and availability of data. That would be the other key there. So of course, you could have data sets that are biased. You know, I'm brown, as you probably can see. So if there's not enough images in there, it will be biased for certain skin types rather than other skin types. But if you've got good quality, balanced data sets and clear objectives, yes, I think that's currently where AI is pretty good.
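
A small, practical corollary of that point: before trusting any "balanced dataset" claim, count what is actually in it. A minimal sketch, with invented filenames and hypothetical skin-type group labels:

```python
# Minimal dataset audit: count examples per group before training anything.
# The filenames and group labels below are invented.
from collections import Counter

samples = [("img_001.jpg", "lighter_skin"), ("img_002.jpg", "lighter_skin"),
           ("img_003.jpg", "lighter_skin"), ("img_004.jpg", "darker_skin")]

counts = Counter(group for _, group in samples)
total = sum(counts.values())
for group, n in sorted(counts.items()):
    print(f"{group}: {n} images ({n / total:.0%})")
# A heavily skewed split like this one is a warning sign before any training run.
```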

Felicia Jackson (16:55)
availability of data.

I think there is a whole other category of problems around AI, which is data bias, algorithmic bias. You just have to look at the different things that appear on different people's feeds to realize that we're actually getting to a point where people seem to be living in completely different realities, with completely different information about what's going on in the world around them. There is a bigger picture there which has to be addressed, but I don't think it's something we can solve in our

chat today, and it's the sort of thing that needs to be kept in mind for every project: what is it trying to achieve? And again, that comes back to this issue of what problem you are trying to solve. What assumptions are you making? What data is available? And, I suppose, who gets to make the decisions about what qualifies as acceptable when you're putting all these things together?

Which then brings me back to the organization. The way we've talked about it, we know that AI is excellent at speed and scale, and you talked about how it can be really useful for framing problems. So what is it about that that's so attractive to organizations? What can it do for an organization, using those special qualities?

Saeema (18:29)
I think it could speed up many routine tasks really, really quickly and shortcut the time and process involved and the number of people involved, and that is quite attractive. But it's very dependent on the type of industry you're in. We know code can be written quite quickly now with large language models, and translating software code into a different

Felicia Jackson (18:49)
Mm.

Saeema (18:53)
language can be done very quickly. There's a lot of support for writing your emails in a way that the tone is correct for your employees and the message is sent; that is a great human-AI cooperation tool. But I think it's quite hard to talk about AI in general,

Felicia Jackson (19:09)
Okay.

Saeema (19:10)
because there are bits that support the human being in their routine, slightly independent of their job, like the emails I've just mentioned, or in writing. Now, in writing, my personal view is it should be supporting editing; you should be doing the writing. Otherwise, it's really easy to see: it's just fluff that's coming out.

And that's the same as the design solutions: they're variants of the same thing. And I think what we're seeing is also a little bit of a backlash against that; people are beginning to recognise what AI-generated output looks like. But if you use it the other way around, where you're first creating the writing and then using it to support your emails, and I'm not talking about contexts

where writing is the profession and the art of writing is really core to your business, but where it's a support function for writing reports, whether it's in manufacturing or healthcare, that becomes a better place to use AI. So there are some parts where it's just supporting the human being, independent of the role. And there are other parts, such as the image recognition bit; we've seen that in the defence industries, where they're looking at that to get better quality and accuracy

of their products. Or you could look at how you could have faster input in writing, how you can use different modes: can you use your voice rather than typing? In the healthcare example, where we're talking about users: typically when you're designing services you're talking to six, seven, eight or nine users. But if you could use AI to get an understanding of a bigger range of people, you suddenly have much more inclusivity

and diversity in your design.

Felicia Jackson (20:46)
interesting.

I think this is the challenge: these are conversations we have to have about the tools that we're using. And perhaps we don't have the language yet, language as concrete, as hard-science, as econometric as our system seems to respect, but it fundamentally comes down to how humans and their tools operate together.

There's AI that can be used for support. There's AI that can be used to replace or optimize. And there's AI which works in combination with human skills. And perhaps where I think a lot of concern comes from is this idea of value.

You know, where do judgment and context and value sit when you're either using AI or deciding how you're going to work with it in an organization? What should we be thinking about?

Saeema (21:41)
I think that's a learning journey that individuals are on, because you've got access to all these very freely available large language models, and that organizations are on too. My personal view is that it's human-AI collaboration. That's where we are.

In that, I don't want to neglect the things that we just touched on, which is responsible innovation. The challenges that companies have around privacy, security and trust of data and AI systems also apply to human beings. So how we use data: do we have consent for passing on our data? Is it being used by first parties or third parties?

And understanding the ownership of your data, what you're giving away and who benefits from it: I don't think that's clearly understood. The same with responsible innovation. The reality is that where we are in this trajectory,

researchers are either working on the algorithms or they're working on the responsible innovation, and they're almost divorced from each other. And I think we're getting better. It's scary, but it's getting better. So what we do know is that we need to create spaces where we can understand and anticipate what the impact of the data we're using, and the algorithms we're using, is on people. And you would do that if you were in the NHS redesigning a service; you would be doing that anyway. You'd think:

Felicia Jackson (22:38)
scary.

Saeema (22:59)
how is this service going to impact anybody, particularly those who are potentially marginalised? Are there going to be any negative impacts, and how can we negate and anticipate that before you design the service? That same type of thinking needs to be brought into AI systems, and there are frameworks on responsible innovation for researchers.

But really bringing those two bits together, that's a challenge.

Felicia Jackson (23:23)
Do you think that's something we should be looking for, as outsiders, as consumers, as citizens? Do we want to see that those tech companies have been working with responsible innovation frameworks?

Saeema (23:35)
I think we, as the general public, need a way to understand what responsible innovation is. So whether that's the framework, I'm not 100% sure. I think there needs to be something that gives transparency. So if AI is embedded in a product, you could have a product that adapts to you, and does so based on collecting data from you, or a service that adapts to you. We want transparency about where that data is from. And we want transparency about

how those algorithms work. But currently, I would say the general public, including myself, we're not ready to understand what that means and how to interpret that information. So as well as companies saying that they're doing responsible innovation, we need education and public awareness of the challenges around responsible innovation and of just what it is,

Felicia Jackson (24:12)
Yeah.

Yeah.

Saeema (24:24)
and then we're able to recognise it and to push companies to be more transparent. The ecosystem needs to grow together. Also, we need to educate computer scientists and the next generation using AI, because AI is now readily available and you don't have to be a computer scientist anymore to use it.

Felicia Jackson (24:40)
I love that idea of there being an ecosystem that works together because I do think to a great extent a lot of people see technological innovation as something that happens around them. I know loads of people and I include myself in this. You suddenly find out that AI has been added to a device or a service or a piece of software and you're in a hurry and you need to get something done. So you just click on, yes, I've read the terms and conditions. And then you read some story two weeks later about how

this has changed and actually they can use your data for anything. And there doesn't seem to be sufficient communication between the different elements of that ecosystem. So I think your call for some education in that is incredibly important. But let me bring it back to the organization because I think there is...

something really interesting around the idea of the speed of innovation, the way in which AI can speed everything up. And this relates to the ecosystem and education problem, which is that when things speed up, we often lose the time to process, to build institutional awareness, that time to think things through. So what does happen to institutional learning, to memory, to that issue of trust, when AI can accelerate a process as quickly as it can?

Saeema (25:57)
So this is something I've been thinking about quite a lot recently, because it's particularly important in sectors where we're making the distinction between human-AI collaboration and replacement. So if you're going to start replacing, particularly the research part (and if you look at finance, that's one industry where there's good potential to use AI really early on),

Felicia Jackson (26:09)
Yeah.

Saeema (26:23)
then that institutional memory will disappear, and also the way people build up expertise. I'm not working directly with finance, so I kind of think it's somebody else's problem, but it is interesting. And it's something to think about, because the way we build up expertise as individuals, which is also related to institutional memory, is going to change if we are starting to replace

people, employees, who are early in that experience journey. The question is, how do we build up that experience? Currently we need humans in the loop to be able to make judgments on AI decision making, and to take content, evaluate it, adapt it for their contexts and ensure it's appropriate. So that's the personal, individual journey, and that is going to be a challenge. And I don't think people have clear answers on that, because it's actually not

even an individual organisation's problem; it's a whole sector-wide issue. On institutional memory, using AI can be an advantage. Go back about 10 or 20 years, to knowledge management systems. I worked with organisations in aerospace, where we looked at how to organise knowledge and reports so they were easily accessible

Felicia Jackson (27:15)
Yeah.

Saeema (27:36)
to the engineers and the service people in there, and so that they could access it in a way that was intuitive. So we brought the cognitive way that the engineers approach their problems together with how we organized that knowledge and information. And that allows the individual to get access to institutional memory, because you're looking at the knowledge base around it. AI is particularly good in there. So actually that would accelerate institutional memory.

Felicia Jackson (27:36)
Mm.

Saeema (28:03)
The challenge of knowledge management systems is ensuring the output is valid. In the old days, it would be done through something called lessons learned, and there'd be an expert saying yes, this is a good lesson, or no, it's not. So it's a human evaluating it. We still have those same challenges with AI output. But AI is actually pretty good at helping harness institutional memory, if we can provide relevant structures that are useful.

Where we're failing right now is that AI never says no.
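
The lessons-learned workflow described here maps quite naturally onto code: store each lesson with a human sign-off flag, and only let retrieval surface what an expert has validated, so the "yes, this is a good lesson" step stays in the loop. A toy sketch with invented aerospace-flavoured entries:

```python
# Toy lessons-learned store: retrieval only surfaces expert-validated entries.
from dataclasses import dataclass

@dataclass
class Lesson:
    text: str
    validated: bool  # did an expert sign this lesson off?

lessons = [
    Lesson("Bracket fatigue cracks traced to resonance below 40 Hz.", True),
    Lesson("Supplier B coatings failed the salt-spray test.", True),
    Lesson("Unreviewed note: maybe switching lubricant helps?", False),
]

def retrieve(query: str, store: list[Lesson]) -> list[str]:
    """Naive keyword-overlap retrieval, filtered to validated lessons only."""
    terms = set(query.lower().split())
    return [l.text for l in store
            if l.validated and terms & set(l.text.lower().split())]

print(retrieve("fatigue cracks in brackets", lessons))
# -> ['Bracket fatigue cracks traced to resonance below 40 Hz.']
```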

Felicia Jackson (28:34)
You know, mine's very friendly. It's always telling me I've had a really good idea, and sometimes I really doubt it.

Saeema (28:36)
Yeah. Yeah.

If you're inexperienced and you've got an idea and it's not a very good idea, you haven't got the ability to make a judgment. You don't know it's not a good idea.

Felicia Jackson (28:49)
Yeah.

I've spent a lot of time working around data and the use of data, because obviously in sustainability that's a critical part of what's going on: you can't manage what you can't measure. The problem is that data doesn't necessarily mean knowledge if you don't know how to use that data. And knowledge doesn't necessarily give you judgment about what the right choice is. So

Saeema (29:05)
Yeah, that's right.

Felicia Jackson (29:14)
you don't have judgment and wisdom. And wisdom is a word associated with, you know, people who've been around far too long, and perhaps they should get out of the way, and this, that and the other. But as you say,

It is by doing things that you learn judgment. It's by watching what happens when you do something wrong. And you need to be able to make mistakes when you're young. Let's face it, we keep making mistakes as we get older as well, but usually the mistakes you make when you're younger have less challenging implications. So you need to be able to fail, because that's what teaches you how to do something better. And I think we're taking that out of the process for people,

for young people in businesses. And this brings us to this whole idea of back to the office, and AI replacing a lot of entry-level jobs. I suppose it brings me back to the ecosystem, but I was thinking of that loop you can get caught up in: well, yes, but AI and data and young people learning to work, and where's the judgment, and what are we doing, and are we replacing?

And you can end up going around in circles for quite a long time. There's a whole load of uncertainties. What do you think using AI, perhaps even relying on AI, does in terms of organisational capability in times of stress and uncertainty? Because

The use of AI is making things uncertain, but can we use AI when we're uncertain?

Saeema (30:37)
Yes, I think

that the use of AI in uncertainty requires an adaptation of general AI. So if we look at AI over 20, 30 years, or even 40, 50 years, you had AI tools that were very specialist. They would solve a particular problem for a particular domain. They might be in the hands of researchers; they might have been embedded into companies.

Felicia Jackson (30:51)
Mm-hmm.

Saeema (31:00)
And now what we've got is general AI that's made very easily available. I think what we're seeing is that to be able to use that in an organization, particularly to help with uncertainty, you need some adaptation from the general AI to a local model. So you take advantage of the large language models that exist, but you're tailoring them for your context. And there, I think, you can build in expertise:

a structure that reflects expertise around the uncertainties and how you build up expertise for a particular domain. Let's stick to engineering, for example. We know that if you're a couple of years out of your education, you are going through a trial and error process, while experts go through that loop probably seven times faster,

which is a shortcut. So it's not just making something, building it, testing it and coming back; you're actually rejecting ideas early, pre-evaluating against your whole body of experience. If you can reflect that in AI systems, so you can tailor a general AI that's freely available into your own system, you can really take advantage of that ability and learn from it.
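
One way to picture that "general model, locally tailored" idea in code is a wrapper that generates broadly and then rejects candidates against locally encoded constraints, mimicking the expert's fast pre-evaluation loop. Everything below is hypothetical scaffolding, not a description of any real system:

```python
# Hypothetical "general model, local tailoring" wrapper: generate broadly,
# then reject candidates that violate locally encoded domain constraints.
from typing import Callable

def tailored_generate(generate: Callable[[str], list[str]],
                      domain_rules: list[Callable[[str], bool]],
                      prompt: str) -> list[str]:
    """Keep only candidates that pass every local domain rule."""
    return [c for c in generate(prompt)
            if all(rule(c) for rule in domain_rules)]

# Stand-in generator; in practice this would call a general-purpose LLM.
def mock_llm(prompt: str) -> list[str]:
    return ["titanium housing", "cardboard housing", "steel housing"]

rules = [lambda idea: "cardboard" not in idea]  # toy engineering constraint

print(tailored_generate(mock_llm, rules, "propose housing materials"))
# -> ['titanium housing', 'steel housing']
```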

Felicia Jackson (32:08)
There's something that comes along with that that I do want to touch on. When we talk about the use of AI, we know that there are environmental externalities which are not really factored in at the moment, and we're talking more and more about energy use, but water use is a hugely important part of this too. And when it comes to social externalities, we've touched on issues around bias and early-stage career learning.

Saeema (32:23)
Yes.

Felicia Jackson (32:33)
Are there other externalities, especially organisational ones, that you think get overlooked when people are planning AI adoption? Because that brings me back to why it is that AI adoption strategies are failing.

Saeema (32:41)
Again, it's what I touched on earlier: the skill set needed. Obviously that's changed, and some of the barriers around skills have been removed, but there's still a skill set. So we've been talking a little bit now about the fact that AI produces an answer and then doesn't say whether it is right or wrong,

or that it can't produce an answer: it will just produce an answer, unless there's copyright involved or certain guardrails. So there's still a skill set needed in understanding how to use the output of AI. Then there's leadership. Understanding how AI is best placed in the organization requires leadership which is

able to take those tools and understand which part of the organization you can start with. So what we see there is that organizations are beginning to develop processes. Their main core business may be selling headsets or whatever, but they've also developed processes on how to roll out AI in the organization. So they've got a separate process; that's one way of tackling it. But there's also the impact on jobs and job crafting.

So where is the human in the loop and where is the AI in the loop? What types of jobs need to change? What new types of jobs need to be brought in? If we go back about 10 years, a lot of organisations thought the answer lay in just hiring data scientists. And that probably was a part of the answer, but now we're looking at the things that we've touched on: the social, the environmental, understanding the value.

The answers to that don't come from data science alone. It requires teams who understand the value of the AI but are not necessarily data scientists. And it may also require a shift in business models. So you may no longer be selling a product as it was; you may want to be focusing on the service. You may have a smart product-service where a bit of it can be adapted. So there are implications of using AI, because AI can be used as a tool to improve your process,

but it can also be embodied in whatever you're producing. It requires a shift that goes beyond computer science or data science. And I think that's where some of those real challenges lie.

Felicia Jackson (34:58)
Really, what you're talking about in many ways is bringing the human further into it. And there's a lot of discussion, and I would go so far as to move the word discussion to fear-mongering, about the fact that AI is going to upturn everything and put everybody out of work. And I do think it's worth remembering that every single

technological revolution that we've ever had has completely transformed the way in which we work, and we just have to try and find smart ways of managing those tools. What's challenging in the case of AI is that the change has been so rapid, and people don't deal with change at quite the same speed. I think this idea of trying to differentiate between

AI as a tool, AI as embodied, and the human in the loop is perhaps a really useful way of looking at it. Because what I've picked up from our conversation is that it's really highlighting the importance of recognizing bias,

recognizing decision-making power, recognizing the social implications of things. And those are critical elements of the discussions we need to have around climate and sustainability. So it's not just that there are direct impacts on climate and sustainability from the use of AI, but actually the kind of thinking we need to do around AI is actually quite similar.

to the kind of thinking we need to do around sustainability and climate. That's a framing that I suppose I hadn't really considered before, so I thank you for that. Where that connects to the organisational side of things:

What do you think organizations need to be more cautious about or even potentially more honest about when they're looking at implementing AI solutions?

Saeema (36:45)
Yeah, I think the first thing is just to understand if they actually need an AI solution.

Felicia Jackson (36:50)
Yeah, because I think that is something that often isn't the case. It's just: my rivals have got one, so I'm going to have one too.

Saeema (36:52)
That, yeah.

That's right. So that goes back to understanding the value of AI and where it does or does not make sense. Beyond that, there's understanding the rollout and the investment in there, and understanding the upskilling involved for their employees, their organizations and their teams.

So part of AI in an organization, and part of where organizations fail, is understanding how you embody AI into products: how to have

transparency about the data that's being used and the algorithms that are being used, and making sure that you are responsible in that innovation, understanding the impact on your intended users or your intended customer groups. At the same time, in terms of using AI as part of a process and as part of the organization's skill set, it's thinking through the impact on the employees and

understanding that that change can be quite harsh and big for some people, and how to bring them on that journey by demystifying what AI is and what it's being used for. So it's addressing that skills gap, and through that I think you've got the potential both to prevent failure and to be really innovative: to come up with new ideas, ideas of how your business models can change, and to take really full advantage of the AI.

Felicia Jackson (38:20)
Wonderful, thank you. My final question is: what haven't I asked you that you think I should have done?

Saeema (38:27)
That's quite hard. Well, one question you could ask me is: what is the direction of AI? What's going to happen in the next five years? The good thing is, this is just crystal-ball prediction. So I think there's going to be a little bit of a backlash on the output of AI, and it's starting. As we've talked about, when you look at writing, in the beginning you can tell that it's written by AI. People are being

Felicia Jackson (38:32)
there we go. So

Mm.

Mm-hmm.

Saeema (38:50)
much more aware. And that's in all contexts: your context of journalism, but also in the context of an email that's written, no matter where you are. In academia, we're beginning to be wiser to when students produce something that's AI-generated. So there's a need, perhaps, for tools that can evaluate and prove that something is AI-generated. You know, I sometimes ask ChatGPT: has this thing been produced by ChatGPT? Of course it will always say no. Why do you think that?

The other thing that we're going to see is this recognition of where humans are better in the loop versus AI, for everybody's particular context. And here I'm talking about industry and organizations. And the third thing we'll see is these general AI tools starting to be adapted for particular domains. So, we've talked a lot about creativity:

how you frame a problem and reframe a problem is a powerful tool that very creative designers or creative people have. That's another direction, one which is not so easily embodied into AI tools but could be. There are things called chain-of-thought reasoning, and different types of technical expertise which can reflect human expertise. So that's going to be another direction in there.

And I think we'll be looking at how we use AI in physical interfaces. We know Jony Ive is looking at this, but there will be other tools too. How can we interact with AI? And when does AI say that it doesn't know the answer? Can it tell you that it doesn't know the answer, and give us information on when we should trust the output of AI?
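
That last question, when a system should say "I don't know", is today mostly handled around the model rather than by it. One common pattern is to make abstention an explicit, valid answer and to route low-confidence output to a human. The sketch below shows only the pattern; the model call is left as a stand-in, and self-reported confidence is itself known to be unreliable:

```python
# Sketch of an abstain-and-escalate pattern: the prompt makes "I don't know"
# a valid answer, and low self-reported confidence routes to a human.
ABSTAIN_PROMPT = (
    "Answer the question below. If you are not confident, reply exactly "
    '"I don\'t know" instead of guessing, and state a confidence from 0 to 1.\n'
    "Question: {question}"
)

def route(answer: str, confidence: float, threshold: float = 0.7) -> str:
    """Escalate abstentions and low-confidence answers to a human reviewer."""
    if answer.strip() == "I don't know" or confidence < threshold:
        return "escalate to human"
    return answer

print(route("I don't know", 0.2))  # -> escalate to human
print(route("42", 0.95))           # -> 42
```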

Felicia Jackson (40:20)
Utterly fascinating. Thank you so much for joining us today, Saeema. I think this is such an important discussion to have. And it's part of actually being literate about the world in which we live, whether that's sustainability or AI; it's thinking about the implications of the choices we make. When it comes to AI, one of the easy stories is that it's going to solve all our biggest problems.

But a harder and more useful story is that what we see with AI is a tendency to amplify whatever assumptions, incentives and blind spots we already have. And I think what this also highlights is that sustainability itself doesn't actually hinge on having smarter tools. Solving problems is not about that. It is actually about whether organizations and individuals are capable of

judgment and learning and responsibility in the tools that they use and the approaches they take, especially when problems are messy, long-term and systemic, just like AI. So I think AI is not going to replace the work that needs to be done, but perhaps it is a useful way of highlighting whether or not these difficult questions are being asked at all. So if sustainability is about

understanding consequences and trade-offs and limits and boundaries, then AI may not be the answer we're looking for, but it could be a test of whether we're asking the right questions. So thanks again for joining us this week. I hope you've enjoyed listening to our discussion. Please follow us on socials, and thanks again for listening to Shaken Not Burned. We'll be back soon.