
The Response Force Multiplier
Welcome to The Response Force Multiplier, the OSRL Podcast. On The Response Force Multiplier, we explore all aspects of emergency planning and response, through conversations with compelling experts and thought leaders, providing a fresh take on key issues and cutting-edge techniques in this field. In each episode, we’ll dive into one aspect of emergency planning and response using OSRL’s unique pool of experts and collaborators to gain new insights and to distil these down into actionable tools and techniques for better preparedness and response to crisis incidents and emergencies.
Emergency Response, Crisis Management, Emergency Planning
Please give us ★★★★★, leave a review, and tell your friends about us as each share and like makes a difference.
Navigating the AI Revolution in Emergency Management: Insights from Liam Harrington-Missin
In this thought-provoking episode, we delve into the fascinating world of artificial intelligence and its impact on emergency management. Join us as Liam Harrington-Missin, Head of Data Technology and Innovation, sheds light on the profound changes brought about by AI technology. From the evolution of AI's role in handling tasks, such as oil spill detection through satellite imagery, to the game-changing introduction of large language models like ChatGPT into consumer domains, Liam offers insights into the rapidly changing landscape of AI adoption. We explore the implications of AI on emergency response exercises, misinformation management, and even the potential transformation of traditional responder roles. Join us on this journey to understand how AI is reshaping the future of emergency management and what it means for organizations and responders alike.
Hello and welcome to the Response Force Multiplier, a podcast that explores emergency planning and response. On the Response Force Multiplier, we bring together compelling experts and thought leaders to provide a fresh take on key issues and cutting-edge techniques in this field. In each episode we'll dive into one aspect and we'll use OSRL's unique pool of experts and collaborators to distil that down into actionable tools and techniques for better preparedness and response to incidents and emergencies. My name is Emma Smiley, we are Oil Spill Response, and this is the Response Force Multiplier.
Speaker 1:In today's episode, we discuss the most seminal disruptive technology to come along in years: artificial intelligence. AI is, of course, at the forefront of modern global conversation, as people wonder if this technology is going to have the impact some predict, in ways both positive and negative, and, more specifically for emergency response, how AI will affect our response planning and how we should approach and view AI as this disruptive technology develops. So today we explore what kind of disruption this will bring. Will it completely change the industry and cause organisations to rethink their entire structure? Will it take everyone's jobs, or will it simply be an extremely powerful tool that optimises work and brings efficiencies that we never could have imagined?
Speaker 1:To discuss this, we speak with Liam Harrington-Missin, Head of Data Technology and Innovation at Oil Spill Response. Liam discusses how he views AI in the broader context of emergency planning, where he sees the dangers and the benefits of using AI in response planning, and how organisations can position themselves to make best use of this emerging technology. Right, so, hi Liam, thanks for joining us. Great to have a conversation with you about AI. So can I just start off by asking you to briefly describe your background and your role at Oil Spill Response?
Speaker 2:Yeah, of course. So I studied oceanography at Southampton University, about 20 years ago now. My interest really was the physical application of oceanography, so not so much biology but more: how do we measure it? Why do we measure it? What problems do we solve? Very much how technology can help across the industry.
Speaker 2:Really, because technology was starting to really spin up, tools started to become a lot easier to use.
Speaker 2:So it wasn't that we couldn't do it before, but it was just not practical for us to learn and spend all that time becoming experts in the tools. But the ability to adopt tools is becoming, and continues to become, far, far easier. You can do a lot more complex things very, very quickly. And so I shifted about three years ago now to the executive team, to the business as a whole and to the oil and gas industry, or the energy industry as they're known now, looking at the application of technology to address oil spill problems and big-shift stuff. So my current role is Head of Data Technology and Innovation, which is an incredible catch-all statement. I get messages from every sector possible because, I mean, technology is everything. Data is everything and everywhere, so anything from cybersecurity to new types of material for booms kind of drops onto my desk. So, whilst my job title is all-capturing, very much my focus is on the application of digital tools to oil spill response, the software side of things.
Speaker 1:So, moving on to AI, which is kind of where our conversation came from, could you talk a little more about AI in general, in simple terms: what it is and why everyone is talking about it at the moment?
Speaker 2:So if you take the simplest tools, like booms and skimmers and collecting oil like that, the technology itself has small efficiencies here and there, and slight new variations that capture certain things, but that technology space tends to be relatively static. It's plateaued.
Speaker 2:Deepwater Horizon, I suppose, was a big trigger for a big technology shift in oil spill response. The introduction of the well caps, the advancement in aircraft capabilities with the 727 aircraft: these were really big, major technology projects that came across our desk and completely transformed the organization. Then COVID happened, and that was a big catalyst for the second big technology shift that I've seen at OSRL. We're talking over the internet now; previously Microsoft Teams was kind of a thing that existed and Skype was around, but people only kind of used them. COVID transformed everybody's working life, and the big shift there has been this bigger drive and adoption and evolution of technology to help us work smarter with data and to communicate over big distances and, yeah, change the nature of work. That in turn has led to some really big technology shifts and mindset shifts in the application of data for oil spill response, for emergency management and for work in general.
Speaker 1:And how have you seen that evolve and change in your time at Oil Spill Response, and what are the trends that you're witnessing?
Speaker 2:AI isn't anything new. I mean, it's been around for a very long time. It's really just about computers replicating human tasks as effectively or more effectively than before. So, whereas typing a formula into Excel isn't artificial intelligence, predictive text on your phone, for example, is starting to hit the artificial intelligence side of things. It's smart, and I think we're seeing these pieces of artificial intelligence pop up all over the place; they have been around for a very long time, including on the front-end emergency management side of things.
Speaker 2:Before the beginning of this year, people probably would have thought about the way oil spill detection works with satellites. A satellite goes over, captures a picture of the ocean, and artificial intelligence is able to create an outline that says this area here is likely to be an oil spill, and the area outside of it isn't. That's kind of machine learning on imagery. What happened at the beginning of the year, and the reason why it really went viral, and it did go viral, was the introduction of this large language model, ChatGPT, which really took AI to the consumer rather than to these specialized applications like imagery analysis and noise cancellation. I've been to conferences this year and everywhere you go you see AI being thrown up on banners, ChatGPT being thrown up on banners, the application of AI to your workstreams being thrown around. Microsoft have bought into the company behind ChatGPT, and we're seeing those AI features come out in the Microsoft products. Really, what shifted it is that it's now so easy to engage with artificial intelligence.
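As a very rough sketch of the satellite detection idea Liam describes, here is what thresholding a classifier's "likely oil" probability can look like in Python. Everything below is illustrative and synthetic (the features, numbers and model are assumptions, not OSRL's actual detection pipeline):

```python
# Minimal sketch: a classifier scores pixels as "likely oil" vs water,
# and we threshold the probability into a flagged region. Oil slicks damp
# radar backscatter, so the fake "oil" pixels get lower intensity and
# variance than open water. All numbers here are made up for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Synthetic training pixels: [intensity, local variance]
water = rng.normal(loc=0.6, scale=0.15, size=(500, 2))
oil = rng.normal(loc=0.25, scale=0.10, size=(500, 2))
X = np.vstack([water, oil])
y = np.array([0] * 500 + [1] * 500)  # 0 = water, 1 = likely oil

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Score a new "scene" and flag pixels where P(oil) exceeds a threshold.
scene = rng.normal(loc=0.45, scale=0.2, size=(1000, 2))
p_oil = model.predict_proba(scene)[:, 1]
likely_oil = p_oil > 0.8  # "likely to be oil", never "definitively oil"
print(f"{likely_oil.sum()} of {len(scene)} pixels flagged as likely oil")
```

The important detail, as Liam notes later with the seaweed example, is that the output is a probability, not a verdict.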
Speaker 2:I mean ChatGPT, if people haven't been on it, is just like sending a text message to someone, and that someone just happens to be the smartest or most intelligent person on whatever particular topic you give them. Or they can be really creative, or they can do this, or they can do that. You can get them to write poems for you on a particular topic. It's really fascinating, when you try and break it and stress-test it, how well it performs all the way through.
Speaker 2:Up until that point, everyone was like, Google is your source of truth, right? Google is a verb that you do to find information. Whereas this takes it a step forward: rather than search for something on Google and then infer that knowledge into the answer to your particular problem, you use ChatGPT to skip all of that, and it will just tell you the answer to your particular problem in a way that you can really comprehend. So it's a very exciting tool. It is, of course, prone to challenges, and as I've watched ChatGPT evolve, I've seen those things start to come through as they react to the social concerns that AI brings. But it certainly doesn't look like it's going anywhere, and I think it's going to accelerate quite quickly and impact every business in all sorts of creative ways.
Speaker 1:Absolutely, but you've been doing lots of experiments with AI, haven't you?
Speaker 2:Yeah, across the board, from educating me on the different types of paprika around the world all the way through to how we can use it on oil spills. My involvement in oil spill response exercises is quite large. Oil spill modelling injects are always one of those core components of an exercise, in terms of creating a scenario which people can get behind and understand and start evolving the thinking to get to the outcomes. But there's always more room for realism. Exercising is always very difficult to make realistic, and it's the ability to react with realistic information that really started sparking my interest: creating content so that when a decision is made during the exercise, you can go, this decision has been made, now create a hundred Twitter posts that react to that public announcement, so that you start feeding back that information. I can tell it to have a certain sentiment, so the public is cross or the public is excited. It's really powerful to create those very reactive content injects, which add a level of realism and add to that pressure, which is also really hard to replicate.
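As a loose sketch of what that kind of inject generation could look like in code (assuming the OpenAI Python client and an API key in the environment; the model name, prompt wording and helper function are illustrative, not a description of OSRL's actual tooling):

```python
# Sketch: ask a large language model for reactive social media injects
# with a controlled sentiment, in response to an exercise decision.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_injects(decision: str, sentiment: str, count: int = 10) -> str:
    """Return social-media-style posts reacting to an exercise decision."""
    prompt = (
        "You are simulating public reaction during an oil spill response "
        f"exercise. The incident command has just announced: '{decision}'. "
        f"Write {count} short, realistic Twitter-style posts from members "
        f"of the public whose overall sentiment is {sentiment}."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable chat model would do
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(generate_injects("Dispersant use approved for the offshore zone", "cross"))
```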
Speaker 1:So you've used it to help create a scenario and adapt the scenario based on what decisions are made, and it's understood what you're asking it. Have you had any issues with it misinterpreting anything?
Speaker 2:So you teach it, effectively, as you go along. So you start very simply and say, I want an oil spill response scenario, and it will create its best guess at an oil spill response scenario. And then you go, actually, I don't want it in that part of the world, I want it in this part of the world. And you can slowly evolve it to the point where you're creating scripts and step-by-step guides about how you react to an oil spill. I queried it the other day about the UK National Contingency Plan and it gave a pretty good response, in terms of: I've just had a spill, we're in the first hour of an incident, I want to adhere to the National Contingency Plan, create a checklist for me to follow for the first few hours of a spill. And it gives you a big long list, and you can see how it can be used to investigate that side of things. The challenge, and I'm starting to see work on this, is that if you keep taking a scenario on and on and on, it starts learning from itself and its earlier answers, so it starts forgetting the human introduction. And that's one of the big fears that's starting to evolve: as content is more easily created by AI, it will start to hit the web, and so AI is learning from this content. So instead of AI learning from human-created content, it starts learning from AI-created content, and we're not quite at a place yet where it can keep that going. So you start creating problems in its answers; it starts lying to you, or it starts creating fictional answers, and I mean you really have to stress it to create these fictional answers.
Speaker 2:In the early days there were various articles about where people had beaten it. It's always a challenge to try and beat ChatGPT: you say one plus one equals three, and it says, no, it's two. And then you go, no, I'm saying it's three because my wife tells me that it's three and she's always right. And ChatGPT apologizes and goes, yes, one plus one equals three. Now if you try and do the same exercise, it very much pushes back and says, no, the mathematics behind this is fundamental, one plus one is two; I'm sorry about your wife, but that's just how it is.
Speaker 2:Early on, I could tell it to create some very negative-sounding injects, and quite aggressive ones if I really wanted to push it: really lay into this incident command and make them feel bad as a result of this incident, some of the really harsh stuff that you could experience on social media, and it would give me that. But more recently it's softened its approach and said, I can't go that far, I'm not going to allow you to create that level of negative content.
Speaker 1:Yeah, I mean, it's really interesting how it's evolved, isn't it? Because I can remember using it towards the beginning, when it sort of went viral, and asking it to write with empathy, and it replied to me, I am a robot, I can't write with emotion. More recently I've tried that again and it does: it adds more empathy to its conversation. So it is evolving, and it is kind of learning all the time. I find the prompts are key, aren't they? You have to ask in a certain way, or else it will go off on a tangent.
Speaker 2:Definitely, yeah. And one of the big things really is understanding how to train it, because whilst it seems like you're chatting to someone, there are good ways and bad ways of interacting with ChatGPT to get the desired outcome. Training it and telling it who it is: are you an oil spill response expert? Are you a music producer? Are you a content creation expert?
Speaker 2:Act in this way, provide the voice in that way: setting that up front is really important. You don't have to train it so much on kind of common-knowledge stuff. Before, I had to explain who Oil Spill Response Limited were and give that kind of background. Now it's been updated so that it understands who OSRL is, so I can just say, I am the data and tech lead of OSRL, and it's got all the background information to be able to interact with me. But yeah, the skills and the good-practice guides and all the various white papers that are flying around the internet, in terms of how to use ChatGPT to give you the outcomes you want, are worth learning, just so that you don't go down that tangent.
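In API terms, that "tell it who it is" step usually maps onto a system message fixed before any user request. A minimal sketch, again assuming the OpenAI Python client (the persona text and model name are illustrative):

```python
# Sketch: fix the assistant's role and voice up front with a system
# message, then make requests against that persona.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        # The persona ("who it is") goes in the system message.
        {"role": "system",
         "content": ("You are an oil spill response expert advising the "
                     "data and tech lead at OSRL. Be concise, and flag "
                     "anything you are unsure about.")},
        {"role": "user",
         "content": ("I've just had a spill and we're in the first hour "
                     "of the incident. Create a checklist aligned with "
                     "the UK National Contingency Plan.")},
    ],
)
print(response.choices[0].message.content)
```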
Speaker 1:I guess creativity plays quite a big role in how you leverage any of the AI technologies effectively.
Speaker 2:Yeah, as people look at the direction of travel of AI, what we're seeing is that computers and machines are able to do human jobs far better than humans can. They can work faster, they're more accurate, and now we're very much on the trajectory of being more standoffish and saying, actually, this is the outcome I want, you figure out how to do it, and it will come up with a pretty viable solution. I do believe that AI can deliver an awful lot of value to the world, but at the same time, we don't want people just sat in chairs or wandering around doing nothing; you want society to go forward. So understanding what education needs to move towards is really interesting. And as someone that employs people: what skills do I need to employ people for, to make us resilient as an organization?
Speaker 1:Yeah, it's definitely a conversation that's come up in the marketing and comms world, for sure; I see it everywhere. To be honest, I was even in the playground picking up my daughter from school and somebody was talking about it, saying, why wouldn't I use ChatGPT? She was obviously in marketing. It will take all those time-consuming things and do at least a draft that you can then adapt and publish. But the strategy side and the thinking side, it's not going to do that for you. And from a crisis communications element, it can write a great statement; at the moment it might do, given the changes I've seen, but it doesn't add the human element, the concern, the feeling and the sense check that you need for effective crisis comms. So it is interesting.
Speaker 2:The softer skills, like teamwork, creativity, dynamic thinking, those kinds of skills are very much about using all the tools available. Those are the kinds of skills that we start looking for, the passion behind work. Now, an observation that I share with teams all the time is that if you're doing the same thing more than once, then a computer can do it, because all you're doing is a repetitive task, and very simple tools now will just replicate that simple task. And we do it all the time, right? We fill out forms on an incident and things like that. All of that stuff is humans slowing down a response; they're not putting their brainpower to solving the larger problem.
Speaker 2:Problem solving is a huge skill that isn't going to go away, but we tend to focus more on teaching the tools to solve problems rather than the skills to understand and creatively solve problems. I think it will impact all walks of life very dramatically, and, in response, it could very much replace a large majority of the tasks that humans do. It has the capability to do that. The big question is whether humans are going to trust it enough to allow it to do that. You could have a far more effective response if you just handed over the reins to an AI and said, you solve this spill, and within seconds you'd have all the paperwork completed, it would be fired off, and vessels would be heading out to the right location. It would be optimized based on trillions of scenarios that it's analyzed.
Speaker 2:As soon as you introduce a human decision into that, you slow it down considerably, but you've introduced a human decision into it, which everyone feels better about. And that's where it is interesting to see how we will evolve as a society: how much we trust AI to deliver. It's no longer about whether it's feasible, it's whether we're going to let it.
Speaker 1:Are you seeing artificial intelligence being integrated into emergency preparedness and response at the moment, or is that a future trend?
Speaker 2:Again, only in kind of very supervised examples. I wouldn't be surprised if ChatGPT or equivalent was being used to create content during exercises to add realism here and there, but it's not running the exercise; it's just a tool, used very much on the sidelines, by a content creator like yourself or someone like me creating data inputs to add realism. I'm interested to see what an oil spill response scenario or an exercise could look like if driven and owned by an AI and supported by humans, if you swap the dynamics around. I don't know enough to know whether it's capable of doing that right now, but I'd be really interested to know how it would work.
Speaker 2:But again, with the backing that it's got and the trajectory that it's on, I can't see it not playing a bigger and bigger part, and I can see it being a differentiator for an organization that gets in early and goes: actually, what takes one person an entire day to do is now just automatically handled in the background. All the approvals are taken care of in seconds. Dispersant pre-approval happens automatically, an AI in a government talking to an AI in a requesting party, and those permissions happen straight away. Again, we're back around to that question about how far we trust AI, because it will make you more efficient, but do you trust it to be effective?
Speaker 1:Yeah, trust is a big thing there, but it's changing so rapidly. I mean, how do you keep up to date with everything that's going on and what advice can you give to others to help them keep up to date?
Speaker 2:You're always on the back foot with technology these days. You have to accept that you're never up to date. There are millions and billions of people doing incredible things day to day. I keep an eye on some of the official directions: what ChatGPT is doing, the release notes and the roadmaps of the big hitters at the forefront. Because it's so emergent, right? November, December time, LinkedIn started going crazy about ChatGPT, but before that I had no idea about large language models, and then it hit and got rolled out very quickly. We discussed this kind of thing at the beginning of the year; ChatGPT was massive back then, but in my space the hype's disappeared now and I see articles here and there, because Apple's released their new immersive technology headset and that's now the new big thing coming through.
Speaker 2:With keeping abreast of technology, you just have to accept that you're not going to. The thing that's easier to keep on top of is the problems that your organization, or emergency management, is having; that's not evolving as quickly as technology. So when you find the problem, and you've got a pretty basic understanding of all the different technology spaces, then you can start creating innovation just by marrying up the problem with the new technology. What wasn't viable six months ago may be viable now.
Speaker 2:So, yeah, accept that you're never going to be on top of it all; the ideas could come in from anywhere in the organization or in the industry, so keep an open mind. Really, it's really easy to get excited about an idea, and then the next idea comes along, and the next idea comes along. I mean, great ideas are really easy to find these days, because technology enables us and the information is everywhere. It's turning that idea into something tangible, which is where the real challenge is. It's executing ideas, it's delivering on ideas, and then you've got no choice but to best-guess which technologies are the most viable and when to jump on the bandwagon.
Speaker 1:Yeah, which leads me very nicely to my next question. One of the things that goes around my LinkedIn feed, because a lot of it's around crisis comms, is fake news and the ability to generate a whole heap of information that isn't even correct. So I guess there's the chance that the use of AI could actually confuse the situation. How do you tell the real from the fake?
Speaker 2:It's not a problem where I can say, well, this is the solution, you just fire AI at it and AI will tell you what's true and what's not true, because it won't. This is one of the big shifts that someone somewhere is going to have to figure out. Is it Twitter that started handling fake news by having verified accounts and having external people verify information? You can't stop misinformation being published, and you potentially can't even stop it gaining traction and becoming viral. You have to educate people on how best to verify the information they see in front of them. I see it all the time on different social media platforms: it's so simple for me to act like an authority on anything now, and I can get AI to create various articles that make it seem like it's true. Verifying information is one of those steps that we're going to have to learn how to do a lot better, and it's really interesting, because misinformation will continue to slow down a response, or create havoc during a response if done really well, or lead people to have to do work they shouldn't have to do.
Speaker 2:I can give an example. Going way back, before the large language stuff, there was an incident where there was oil on the land near the marine environment. But in the marine environment there was a biological residue on the surface, nothing to do with the oil spill, something like seaweed. Satellites picked it up, and satellites can't detect oil; satellites can detect a signature that could be oil, and they can say likely or unlikely to be oil, but they won't say this is definitively oil. But if you see that residue on a satellite image, you can very quickly put out a message saying, look, oil's in the water.
Speaker 2:Suddenly, even though we know it's not, we're having to react to that public engagement, that observation of oil in the water, and it's very hard to backtrack once that image is out there, because maps are very seductive in terms of content. People love a map. A picture showing clearly what looks like an oil spill on the water is really hard to disprove and explain, especially to a public who are informed not necessarily by the information coming out of the comms office of the incident, but by the downstream media outlets that are potentially adopting the stories that sell rather than the stories that are necessarily factual. That's just one of the things that you need to exercise and understand, which, again, I don't know whether that's done. I certainly haven't seen that level of focus in an exercise in terms of handling misinformation during an incident, because it could completely derail it.
Speaker 1:Absolutely. I guess we're starting to talk about the risks, and there are risks, aren't there, in relying heavily on AI in emergency preparedness and response. So we've talked about misinformation. What about too much data?
Speaker 2:So you take another piece of technology: the internet. We rely on the internet. If we didn't have the internet during a response, how would we respond? And that really breaks a lot of people's heads, because it's so fundamental to everything now that it is critical that we have the internet, and we have various resiliences to try and make sure that we have it. I can see AI being in the same situation in five or ten years' time: we won't be able to respond to an incident without AI, or without all sorts of emergent technologies. It will naturally become more resilient, it will become more trustworthy, it will become more regulated, it will be better understood, in the same way as the internet is.
Speaker 2:I mean, you can break things with the internet; you can find misinformation if you actively search for it or you go to the wrong places. Part of managing that risk is you just have to watch what's happening in that regulation space. Don't use a dodgy AI tool because it's cheap; use a regulated AI tool that conforms to the various regulations and things like that.
Speaker 1:Well, before we go on to data, the one question I was going to ask, and I don't know whether this one flows, was: how secure are they? Because that's something that came up in one of our crisis exercises, where I was exercising our own crisis management team and using ChatGPT to quickly give me an outline that I would then add to. But how secure are they?
Speaker 2:It is really easy, and very appealing, to share very sensitive information with ChatGPT: creating a new strategy for your organization, for example. It's a good person to bounce your ideas off; it can be a really good advisor on the various strategies. But of course, you're sharing the highest level of intel about your organization with a platform, and the concerns about security are valid. You shouldn't share sensitive information with an open platform unless you are really confident about its security.
Speaker 2:Putting sensitive information onto the internet is always a security risk. You can ask ChatGPT about how secure it is, and it will claim that it doesn't share information outside of the chat and that it's secure to you; it just adds the information to that particular session and then gets rid of it. But I've got a chat history now, so it is being saved on someone else's data platform, encrypted or otherwise. It should always be in people's minds what they share on any platform, ChatGPT or anything like that, because the internet and the applications in your organization evolve very quickly and you don't necessarily know which ones you can share information with. That's another big challenge that most organizations face.
Speaker 1:So, going back to the data, you were going to explore that topic of too much data.
Speaker 2:The amount of data being generated in a spill, an incident, any kind of marine pollution event, has increased dramatically over the last few years. You've seen the impact within OSRL over the last 18 months or so on the different incidents we've responded to. The reason why the data is expanding so quickly is how easy it is to get imagery of incidents. I mean, you can take a picture with your phone and it can arrive straight away. But drone technology, remotely piloted vehicles and things like that, these are really cheap to get now, and they're at a level where they really add value to an incident. Having that eye-in-the-sky picture is now cheap and easy to deploy, but these things are huge data files compared to your standard text file.
Speaker 2:We've got a huge challenge with imagery within OSRL, not just within incidents but across the organization. Imagery files, video files: they take up vast amounts of space, and they're also really hard to search and explore. With a video file, you just get a file name and then you have to watch the video to understand it, whereas many file types you can search within; they're structured in a way that you can easily explore. With images, for instance, we can collect 50,000 phone camera images, and unless they've been properly tagged and catalogued, they're just 50,000 files that you have to manually go through to explore, unless, potentially, you deploy AI to scan them and extract the key bits of information. Image categorization is another great use of AI. So where I'm coming around to is that AI can be a real tool to take the vast amounts of information that come in on a day-to-day basis and turn them into something that tells people where to focus their attention. And that's one of the big revelations about the whole visualization space. In incidents I've seen incident command rooms which are just blank walls, and you stick pieces of paper up on the wall as things come in, especially in the very early phase of an incident, or when someone doesn't have a full command center set up. You're just trying to get as much information up on the wall so the humans in the room can assimilate it. But it's all about that process, right? You go from data to information, to insights, to decision. That's the workflow that we're always working through; we're always looking for that great decision. More and more data will help inform you to make that great decision, but if you get overwhelmed by it, it's very easy to go, I haven't got the brain capacity to assimilate this data to make a great insight.
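To make the image categorization point concrete, here is a minimal sketch of machine-tagging a large photo set so it becomes searchable. It uses a general-purpose pretrained classifier from torchvision as a stand-in; a real pipeline would presumably use a model trained on spill imagery, and the folder name is hypothetical:

```python
# Sketch: auto-tag incident photos so 50,000 files become a searchable
# catalogue instead of images someone has to open one by one.
from pathlib import Path

import torch
from PIL import Image
from torchvision import models

weights = models.ResNet50_Weights.DEFAULT  # generic ImageNet tagger
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()
labels = weights.meta["categories"]

def tag_image(path: Path, top_k: int = 3) -> list[str]:
    """Return the model's top-k labels for one photo."""
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(image)[0], dim=0)
    return [labels[i] for i in probs.topk(top_k).indices.tolist()]

# Build a name -> tags catalogue for a (hypothetical) incident folder.
catalogue = {p.name: tag_image(p) for p in Path("incident_photos").glob("*.jpg")}
```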
Speaker 2:Where I can see AI coming in, again, is in trying to shortcut that bottleneck, which is the human decision-making side of things. We can no longer expect anyone to be able to look at every piece of information coming in the door for an incident. Stuff will get missed if you rely on people doing it, so you have to rely on technology to help synthesize that data, to make it easier for humans to ingest it so that they can make the decisions, because we're still relying on humans to make that decision. That's not going away anytime soon, despite the capabilities of AI.
Speaker 2:What you do with AI is use it as a tool, which is what it is, to take that information, and train it to say: this is the stuff that we're really interested in, highlight this on the big key notes board, or something like that, and train it that way.
Speaker 2:And that, I think, is well within our grasp in the next few years: being able to use AI to supplement the tools we have to synthesize data, so that the right bits of information, at any moment in time, can be put up on the screen or queried from the database. We're seeing it in some of the exciting stuff that's coming through in Excel: this ability to ask Excel questions about a spreadsheet, and it will create your graphs and your statistics based on the question you've asked it, rather than you having to ask a business analyst to produce a series of graphs that may or may not answer your question and then having to go back, and things like that.
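A heavily simplified sketch of that "ask the spreadsheet a question" pattern: a model translates a plain-English question into an expression over the data, which is then executed. Here the generated expression is hard-coded to stand in for whatever the model would return, the columns are invented, and a real tool would sandbox and review generated code before running it:

```python
# Sketch: natural-language question -> generated query -> result.
import pandas as pd

spills = pd.DataFrame({
    "region": ["North Sea", "North Sea", "Gulf", "Gulf"],
    "volume_bbl": [120, 45, 300, 80],
})

# Pretend a model returned this for "What is the total volume per region?"
generated_expression = "spills.groupby('region')['volume_bbl'].sum()"

# Evaluating model output is exactly the part a real tool must sandbox.
result = eval(generated_expression, {"spills": spills})
print(result)
```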
Speaker 2:So AI and its ability to synthesize large amounts of data is, I think, one of those near-term wins that we're going to see, and that people are going to feel more comfortable with, because it's not the decision that's being artificial, it's the insight before the decision that's being artificial. We can feel more comfortable about that, and I can see that being the next sensible step.
Speaker 1:That does seem to be where the real value would be, certainly in our world of oil spill response. I guess my next question was around the more human element, the actual physical response: things like wildlife response, that sort of thing. I'm not sure I can see a world where AI would take over that. What are your thoughts?
Speaker 2:One of the really interesting things is that all these technologies are coming together simultaneously. There's artificial intelligence, which is one of the top emerging technologies on my radar. Autonomous vehicles is another one. Immersive technology, so headsets and things like that. And the geospatial tools I've mentioned before, things like common operating pictures. Those four are big topics, and it's unrealistic to expect there aren't feedback loops across all four and many others. So AI in autonomous vehicles is one of those things where you can start to see how it would start to replace the human side of things. We can either do it with people, boots on the ground and things like that, or, more and more, we're seeing robots take over: robots in warehouses, robots in surveys and high-risk areas. And when you come to things like deploying booms and skimmers, well, we're already seeing some of the startups going, here's your autonomous surface vehicle which has a boom attached to it; here's your robot that goes down a beach and collects all the pieces of plastic automatically, because it uses camera feeds to identify what's plastic and pick it up. Doing that with things like wildlife cleanup is difficult, because, thinking of it from a problem point of view, you don't want to hurt the animal by cleaning it, right? That's why we rely on people: people are gentler, they understand touch and things like that. But if you look at the mass production side of things, food, for example, there's an awful lot of automated machinery that can very carefully handle things like salmon so you don't bruise the skin, you don't bruise the meat. If you can do it with something that's dead, then it's not impossible to imagine doing it when it's alive. Potentially we could have a robot that automatically cleans various types of birds down a conveyor belt. I mean, I'm getting silly, but you can see how it's possible. It's perfectly possible now; we could have a completely autonomous response today if the right existing technology was put in place. But it's not viable. I mean, it would be super expensive and fraught with lots of problems, and you'd get lots of mistakes.
Speaker 2:Whether it's going to be in that state in 10 years' time, that's a different story; it's potentially very much going to be in place. I suppose, just as a closing thought, one of our big drivers is to get responders out of harm's way. We don't want responders in places where they could potentially get harmed, and one of the big high-risk areas is putting people on boats and getting them to deploy very heavy, large pieces of equipment in very dynamic waters. So the driver there, to get responders off the water and controlling an artificial intelligence that can clean up the oil as effectively or better, is incredibly appealing to everyone. Why would you not want to do that? And we're seeing it with things like drones, and autonomous vehicles and autonomous shipping coming, and things like that. So I think the chances are high that in the next 10 years we are going to start seeing that shift, where what we think are naturally just the skills that we need to teach our responders are actually not necessarily the skills that we're going to be teaching them. In 10 years' time, it's going to be how to control autonomous vehicles to do such tasks.
Speaker 1:Yeah, and then I guess the responder role becomes more of the incident management, the overall oversight, the control of the autonomous vehicles. It's interesting, isn't it? A whole different skill set, or developing skills that are already in existence, but really taking them further.
Speaker 2:It's taking us away from the physical acts and going more towards the creative thinking. It's like: we have a finite number of autonomous vehicles that we can deploy to clean up the oil spill, so what's the best strategy over the next five days? And that's where you can see the human interaction, bouncing ideas off an AI, or something unexpected happening and you need to remotely pilot a vehicle, things like that. Those kinds of skills I can see growing over the next 10 years, which, for existing responders, I can see being very threatening. A lot of the skills and the talent and the experience are absolutely essential today, but how long are those skills going to be valued when actually the outcome that we want, a cleaner environment, can be delivered with far more autonomous solutions than is current today? So the responder of the future is a really interesting topic.
Speaker 1:so the responder of the future is a really interesting topic so that this is what uh liam's tomorrow's world of emergency response looks like, then, is it?
Speaker 2:I see all this technology, and it's out there and some of it's used widely, but it's so expensive compared to the standard model that OSRL and response agencies use that I can't just say, let's buy five Spot robot dogs to replace ten of the workforce, because that doesn't work. But when do you start investing in this technology? Drones is the big one for me at the moment: I'm trying to push for more in-house capability with autonomous drone things rather than relying on a third-party provider, and there's pros and cons to both, and the argument's worth having over and over again. It's just knowing when to jump on, how to do it, and when to take the risk. And it's not that there is a right answer or a wrong answer, because we can't predict the technology space in three months, let alone in three years. It's just knowing, when you make that decision, that there is a risk associated with it, and sometimes you're going to get that risk right, and sometimes you're going to be in too early or too late.
Speaker 2:And the big one really is about learning the lessons that adopting new technology brings, not necessarily the new technology itself. How we can be more effective at adopting new technology is hugely important to an organization now: being agile, being able to get going quickly rather than spending years debating a particular technology, just going, try, fail, learn; try, fail, learn; try, fail, learn; okay, let's park this for now and come back to it in a year's time, let's try this and that. Which is very hard for a typical organization to get its head around when it's used to a far longer timeline with big business cases and risk analysis and things like that. Technology just doesn't give you that assurance anymore. You can't adopt technology with the mindset that it is proven. You can't. It will be disproved, it will go out of date really quickly, and there's big wins to be had and there's big losses to be had, and that's terrifying for decision-makers to try and adopt.
Speaker 1:Yeah, we've had the phrase fail fast in digital marketing for quite a number of years now, but there it's less spend and it's less risky in most instances. The technologies and all the things you've talked about are a whole new level. I mean, I think we've covered a lot. Are there any other reflections you'd like to share? Any further thoughts or advice you would give people or organizations looking at AI and technology?
Speaker 2:There's always going to be a new shiny tool around the corner. There's always going to be new technology. There are always going to be ways in which you can do what you're doing better, whether it's entirely automated, whether it's human-supervised, or whether we actually decide that it stays entirely human; but you need to be conscious that you're making those decisions, that you're making those distinctions. The big challenge that everyone's got now is that what you don't know is huge compared to what you do know, and you have to start getting comfortable with being uncomfortable about how little you know about a situation and the solutions that are coming. You have to look less at a big technology adoption project which takes many years, and start breaking it down.
Speaker 2:I mentioned agile earlier on. This is a framework that's been around a lot of different industries for a while, but it hasn't really reached ours. We're still very much in a plan-and-deliver, that's-it type mentality, whereas actually iterative delivery reduces a lot of risk.
Speaker 2:But you have to make that first step, and you have to accept, going into a project or into a technology adoption, that it may fail, and in fact the likelihood is that it will fail for at least the first five to ten iterations. So adopting AI, adopting immersive tech like the Apple Vision Pro headset, adopting different surveillance technologies: all of these things have to stop being big projects. They have to start being just small, iterative things that we try and learn and deliver and move on from. I think we have to learn how to learn things faster, try things faster and evolve faster. Business transformation shouldn't be a project that has an end date; it should just be what you do on a day-to-day basis. You are constantly evolving, and anything that's getting in the way of that constant evolution, the need to sign off on business development proposals, business cases, process authorizations and things like that, is potentially hampering the effectiveness of your organization.
Speaker 3:Thank you for listening to the Response Force Multiplier from OSRL. Please like and subscribe wherever you get your podcasts, and stay tuned for more episodes as we continue to explore key issues in emergency response and crisis management. Next time on the Response Force Multiplier.
Speaker 4:No stage of development is bad, no operating system is bad, but it can be limiting. If we cast our minds back to the very first smartphone, you look at what we could do on that device with the operating system it had, versus what we can do on our current devices with the operating system they have now: it's radically different. We can do more, we can process more from a technology perspective. So I think about our own inner operating systems as human beings as similar to that. For more information, head to osrl.com. We'll see you soon.