SafeTalk with SafeStart

S15Ep5: AI and Safety

SafeStart


Want more time with your people and less time buried in reports? In this episode, Danny sits down with Eduardo Lan to unpack how generative AI can shoulder the data grind so that safety leaders can focus on human things like culture and coaching. Understand AI in plain language—separating hype from practical value—and learn how you can turn a generic assistant into a sharp partner for safety performance.


Host: Danny Smith
Guest: Eduardo Lan


Danny Smith

Welcome back to SafeTalk with SafeStart. I'm Danny Smith, and today I'd like to welcome back one of the newest members of our consulting team, Eduardo Lan. Eduardo is a CRSP, that is, a Canadian Registered Safety Professional. He's a consultant and a coach with over 20 years' experience helping leaders implement human factors, elevate their culture, and achieve breakthrough results across multiple industries. He holds a master's degree in organizational development and change from Penn State University. He grew up in Mexico City, and he is a frequent contributor to Canadian Occupational Safety magazine. So, we're really excited and fortunate to have him as part of our team now. If you were here for Eduardo's consultant spotlight a few weeks back, you'll also recall that he is fluent in Spanish and in English, and I am not. So, I'm not even going to attempt to hablar today, as they would say, right? As I told the folks in the office when they first asked me if I speak any languages, I said yes, I'm fluent in Alabama Redneck, and I don't do too bad with English most days. Maybe that last part's a stretch. Anyway, in all seriousness, Eduardo, welcome. How are you today? Everything going good?

Eduardo Lan

I am good. Yeah, it's a bit of a whirlwind with everything that's been happening end of year and beginning of this year, but I am good and happy to be on the podcast again with you, Danny, and with all the lovely people that listen to the podcast.

 

Danny Smith

Today we want to spend some time talking about another area that Eduardo is very fluent, or I guess you could say proficient, in, and that is artificial intelligence, or AI. Now, you've probably heard and seen lots and lots about AI over the past few years, but let's talk about this specifically in terms of AI as it relates to safety. You may have even heard a lot about AI and safety, but let's bring it back to AI and human factors as well, because I think that's a really important thing. And that's where Eduardo comes in, because he sees the application and the impact of AI on safety and human factors and how that affects everything. Okay, Eduardo, let's start by just defining AI in terms of safety and how it affects us.

Eduardo Lan

Sure. I'm glad you went there, because there is a need to understand what this technology is to begin with. So let me take it way back. AI is an umbrella term that covers many types of technologies. It began over 50 years ago, maybe even 60 or 70 years ago, and we have seen some of its advancements for many years now. Think Amazon, think Spotify, think Walmart. They all collect thousands and thousands of data points on your web activity and thus know what you buy, what you return, what you like, et cetera. Now, what's new is this technology that is referred to as AI but is really generative AI, and this is what I want to talk about. Generative AI marks the confluence of massive computing power, massive data, so all of the internet and virtually all of recorded human knowledge, and very advanced programming algorithms. Basically, they've been able to train computers to ingest billions of data points about everything, hold those data points in memory, and speak intelligently about any subject. The generative part of AI is that it can work across voice, text, image, video, sound, et cetera. And the implications of that in terms of usefulness and productivity are huge and exist in every field, including safety. If you think about safety for a minute and the needs that safety professionals and leaders have, it involves a lot of information that they need to analyze, hold in memory, and work with so that they can present to other stakeholders or make decisions based on that information. What's extraordinary about this technology is that, as I said, it has all this data to work with, and it can hold it in memory, and that's something we human beings can't do. So, it is incredibly useful in aiding us to have those reference points, those insights, if you will, when we are speaking about our safety culture, our safety

performance, our training programs, such that we have the information that we need to be able to make decisions, plan, etc. Does that give you a good overview, Danny?

Danny Smith

Yes, I think so. And I think the idea is that it can go beyond just the traditional, if you will, Google search, and really begin to expand even beyond traditional internet searches, not picking on Google by any means. Heck, I'm old enough that I remember when we were asking Jeeves, right? That goes back a few years; some of you may have to Google that one. But it really is solid for me, because I think this technology is something that can truly help us in the field of safety, just as it can in any area. But I think a lot of people are really afraid of this in a lot of different ways. One of the big things is just that fear: okay, is this going to replace the safety professional? As you were talking a few moments ago and I was thinking about this idea of the fear of being replaced, it took me back to when we first started seeing robotics being introduced in the workplace, and there was a lot of fear from line workers, if you will: oh well, they're going to replace us with the machines. Well, it's the same mentality. Sure, robotics have enhanced what we do in some facilities, but there are still people there working on the robots, fixing the robots, installing the robots, and telling the robots what to do. The same thing is going to happen with AI in all areas, including safety, right? It's not going to replace us; it's just going to give us some additional abilities.

Eduardo Lan

Yes. Let me answer that question in the most straightforward manner possible. Every time a technology comes online, it is understandably scary. What does this mean for me? What does it mean for my job? Will I be able to learn this and keep up? Those are very valid concerns. And it would not be fair to say, no, everything's great and everything's going to be perfect, and there are going to be no challenges and no job displacement. That is not true. Take robotics, for example, particularly robotics used in industry. It did create challenges when it came online and was integrated into the various organizational processes. Some people did lose their jobs because they did something that the machine can now do, and some people became very adept at using both their own skills and the enhanced capabilities the technology brought online, and they became the go-to people for this new technology. Now, that said, we're not at the point where robotics completely takes over human endeavor, if you will. That could happen, but it's not going to happen anytime soon. And in a similar fashion, maybe less so, but in a similar fashion, AI, particularly generative AI, is not at the point where it can be, and I'm going to throw out a term that maybe some of you have heard, agentic, fully agentic, such that it can do tasks completely autonomously. So it's not at this point replacing human beings in most things. It will get better. It is getting better and better every day at breakneck speed, and there are some things it will be able to do more and more autonomously. Now, some people freak out when they hear that, because, I mean, that's my whole job. I analyze it.

Danny Smith

Yes, exactly. 

Eduardo Lan

I do the report, and now this thing is going to do it for me. Some of that is true. We're not quite there yet, and I don't even know if we want to be there completely; I'll explain that in a minute. But with my eyes wide open, and understanding that this has many, many challenges, like any technology, I'm actually very positive about the net effect of this in the long term. And let me explain that, Danny. The reason I'm positive about all of this is because I firmly believe that human beings were never meant to be pencil pushers and data analyzers. Our brain is not designed to hold thousands of data points, and we don't do it very well. So I think it's actually positive that we now have a technology that can handle many of these administrative and analytical tasks for us. That gives us a chance to do what we were always meant to do, which is to be out with people, to be present on the factory floor and in the field, talking to people, supporting people, aligning people, and doing the things that require sensitivity and relationship and communication and understanding, things no machine will ever be able to do, and which are incredibly important to build the kind of safety culture and safety performance that you all on this podcast are looking to have.

Danny Smith

So, Eduardo, let me ask you this. Without going complete sci-fi nerd here, I've always been a fan of Star Wars, Star Trek, all of that. My wife in particular is a huge Trekkie, so let me default to that for a moment. Thinking about Star Trek: The Next Generation, those of you who have followed that through the years are probably familiar with Commander Data, who is fully autonomous, yet a form of robotic, AI-type embodiment, I guess you'd say. I said that very poorly, and some of the true Trekkies will probably correct me on the official term for what Data is. But here's the interesting thing. You were talking about how AI can free us up: it can help us with our reporting and our analysis, things that traditionally we as safety professionals have had to sit down and spend hours doing. AI can help us with that and, as you put it, enable us to go out and get on the floor. Now, where am I going with this whole idea of Data from Star Trek: The Next Generation? If you think about it, he came across as very cold, very non-feeling, if you will. And that's the thing. We can't expect AI to go out on the floor, interact with our employees, and build the relationships that form engagement and culture. That's where we come in, right? If AI can free us up from some of the, quote unquote, more mundane things so that we can be the human face on the floor with our people, interacting and engaging with them, well, we know that engaged people are much safer. And I guess that was a long-winded way to go around and say what I just said.
But I think there is that potential for us to capitalize on this and allow it to free us up to do the things that we need to be doing, things that we probably have not done as well in the past and have not taken as much time with as we should have, and that is spending time with our people.

Eduardo Lan

That is correct, and that is one of the main reasons why I'm so excited about this technology. As I said, I say that with a caveat: I do have my eyes wide open. It is an incredibly powerful technology, and there are concerns and things we need to take into account, but the productivity gains, the benefits of this technology, are not to be overlooked. Furthermore, this is here to stay. It's not a fad; it's not something that's going to go away. And I'm not exaggerating: we are probably facing the biggest technological shift of our lifetimes. So, it behooves us to learn this. And here is the good news: it's incredibly easy to learn, because another thing technologists have figured out is how to engage with AI in natural language, meaning in your own native language. You don't need to be technical, and you need no programming or coding. You can simply ask the AI, whichever AI you use, ChatGPT, Copilot, et cetera, via text or voice, and it'll answer. It'll even advise you on how to use it. You can even go into ChatGPT, or chat GTP, sorry, I always get those wrong. It stands for generative pre-trained transformer, GPT. Generative because it can go from text to image to voice to video and generate new outputs in each of those modalities. Pre-trained because it was trained in advance on virtually all of recorded human knowledge. And transformer, the name of the underlying technology, which can keep all of it in context so it can see all of it at the same time.

 Danny Smith

Yes.

Eduardo Lan

Now, did I answer your question, or did I go off on a tangent there?

Danny Smith

I think what we were saying is very complementary, and I think that's the thing. We see how this can allow us to become more of an engager, if that's a way to say it. That's kind of where I was going with that, and it sounds like where you're going as well. Talk to me a little bit about, well, we talked about the fear of replacement. That's certainly one thing that hits people. But with anything unknown there are multiple fears that go along with it, and this kind of feeds into the human factors of this as well, I guess. One of the other things is the fear of the unknown. Thinking about that, can you talk about the fear of the unknown in terms of, perhaps, safety and security? I won't say relying on AI, because I don't think we're at that point, but just using AI to our advantage as we're implementing things. Talk about some of those unknowns, if you will, and how to address the fear of them.

Eduardo Lan

Yeah, absolutely. That is actually the biggest challenge with this technology, and it's not technical. As I said, you don't need to know how to program or code or anything. You just talk to the thing and experiment with it, and it'll give you the output you're looking for. Now, in terms of reliability, it is actually highly reliable. It has progressed by leaps and bounds over the past couple of years in ways that are quite staggering. If you tried ChatGPT six or eight months ago, we're not dealing with the same technology today. Yes, it makes mistakes. Part of that is just the way the technology works: by being generative, it has to make assumptions, finish sentences, finish thoughts, suggest the next output. But that is no different from any coworker you work with. We're not perfect, and our coworkers are not perfect. When you're working with others, or in this case with this technology, it's always important to verify its output and see if it is aligned with your program, your culture, the various elements of your safety management system. And that is another area where human beings are incredibly important: to guide the AI, to verify the AI's output, and to collaborate with the AI to know more and make better decisions. And, as we said at the beginning of this conversation, to be out on the factory floor and in the field talking with people, which is just key to building a safety culture.

Danny Smith

Yeah, it's interesting as you're talking about that. And by the way, Data was an android; that was the word that could not come to my mind a few moments ago. I actually did a quick search while you were talking to make sure I was correct, but it's like, duh, I knew that. Anyway, back to this idea of data points. You would not go out onto the production floor, let's say, talk to one employee about a situation, and then make a decision or a recommendation based upon that one data point. You would talk to multiple people. You would take readings, you would take measurements, to reach your conclusions, right? Thinking about it in those terms, yes, you're still going to have to go out on certain things and talk to your folks and learn from their experience, the gut instincts of your people and of you as a safety professional. But this technology gives us the ability to look at what others are doing and saying in certain areas. And one of the things that I've learned, just in my limited experience with AI, is that the more specific you can be with your requests and your questions and what you're asking it to do, the better results you're going to get, right? One other thing you mentioned is that you want to verify things, and that's important. I think about the old political saying, trust but verify. It's exactly what we want to do here. AI can do some of the heavy lifting for us, but we're still going to have to go back and make sure it aligns with what we're doing in our application and make sure it works in our environment, right? And that's going to come back to the people again.

Eduardo Lan

Yeah, that is correct. There was something you said there that I want to pick up on. Let's break it down for a minute. Some of the biggest challenges are job displacement, security and privacy, the accuracy of the information it's providing, and then that doomsday, Terminator scenario where it takes over and ends the world. So, let's start with the last one. That's mostly science fiction. It's not about to happen anytime soon; the possibilities of that happening are very slim. It's a technology that's built on human knowledge and human experience, and it is utilized by humans. So, if anything, the risks of it being, I don't know, greedy or controlling or manipulative all come from us; that's what this technology was trained on. It's not like it has an actual brain and is planning to take over the world. So, let's put that to the side, and let's talk about the other things. We've touched on job displacement. There is going to be some of that. And there is a saying in the technology and AI world which I think is very applicable: AI won't take your job, but someone who knows how to use AI will. I think that will be increasingly true as time passes. Now, I get the discomfort around it, the misunderstanding, the fears. I have them myself. I'm not engaging with this thinking it's all honey and roses. Of course there are issues. That said, it's really, really important that we all learn this, because, as I said, this is not a fad; it is the biggest technological shift of our lifetimes, and it will be infused into every field we know. And I get the downsides of previous technologies: cell phones, social media, the internet. I get it.
I've got two teenage kids who are stuck to their phones like glue, and it bothers me, and it's damaging to their development. But imagine I came to you, Danny, and said, hey Danny, I'd love to work for SafeStart. I know you work there; introduce me to somebody so that I can see about getting a job there. But here's the thing: I don't use computers and I don't use cell phones, because of whatever fear we've got around them. It would be impossible for you to help me get a job at SafeStart. Very soon, the same thing is going to be true in every field with AI. Now, if you think about security and privacy, look, we've been sharing our information on the cloud with Google, Microsoft, Apple, et cetera, for years. These companies are not in the business of selling or sharing your information. Are there security risks? Yes, as with everything. Are there privacy risks? Yes, as with everything. But you need to know what you're doing. Don't use the Chinese models or the Meta models, because the Chinese companies probably don't care about your data and Meta is in the business of leveraging your data for everything they possibly can. But OpenAI, Google, and Microsoft are not in that business, and we've been sharing our information with them for years. Second, make sure that whatever platform you're using, ChatGPT, Gemini, you have clicked the option that doesn't allow the model to learn from your information. That is important. Third, if you really want to do this at an enterprise level, get the enterprise version of the model. It has all the security safeguards that any enterprise software has. Those are, I think, some of the risks worth mentioning. There are others, of course. But again, the upside of this is just incredible.

Danny Smith

Sure. You know, we've talked a little bit about how it can help us by freeing us up to be more engaging, to spend more time with folks and things of that nature. Let's bring this back for a moment to how AI can really help us in terms of managing human factors and how it helps with SafeStart.

Eduardo Lan

Yeah, absolutely. I'm glad you're going there. I'll start off with a suggestion: go into ChatGPT or any platform you use, and once you turn off the function where it learns from you, begin experimenting with it. Anytime you've got something job related, and in particular the safety job, experiment with AI to see how it can help you. So specifically on safety and human factors: you can record every meeting. Of course, you need permission, and people need to trust that you're going to use those recordings well, but the AI not only provides a summary of every meeting, it becomes a database of everything that was said at that meeting and at every other meeting, which you can now mine for insights and decision making. Companies have tons of safety data, so think incident reports, near-miss reports. An AI that is fed that information will understand your safety history and your safety tendencies in terms of unsafe conditions and unsafe behavior. If you furthermore add information about SafeStart's states, errors, and CERTs, it will link all of that incident and near-miss data to those states, errors, and critical error reduction techniques, and tell you where you're having the most issues, in which states, what errors you're mostly making, and what CERTs, or I'll expand that, habits, could help you deal with that. You said something, Danny, that I really wanted to touch upon, which is that the more information you give it, the more context you give it, the more useful it is. That is key, if people are willing to engage with this technology, and I hope you are, because your job, your life, the world is going there. If there's one thing I want you to walk away from this conversation with, it's that context is king.
One of the reasons people get disillusioned when they use this technology is that they're using the models, they're called large language models, in their general form, asking a very general question and getting a very general answer back, which is mostly useless. So, imagine I'm thinking of getting involved in manufacturing, and I go to the AI and say, I'm thinking of getting involved in manufacturing; tell me what I need to know. I'm not saying how. I'm not saying whether I'm building a plant or going to work in one. I'm not saying anything. Well, the technology is designed to be helpful, so it's going to start telling you things, but of course they're going to be wrong, because it doesn't understand that I'm looking to build my own bottling plant, that it's going to be this size, that I want to serve these clients, that this is the experience I have in the industry, and this is the number of people I'm thinking of hiring. So, to summarize: this technology in its general form is very useful. However, if you engage with it and provide as much context as you can, so that it truly understands what you're wanting to do, who you are, in what field, all those details that are key, then it will blow your mind what it can do for you.
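To make the "context is king" point concrete, here's a minimal sketch in Python. Everything in it is invented for illustration (the `build_prompt` helper, the bottling-plant details, the context fields); it simply shows the difference between a bare question and a context-rich prompt you might send to a model.

```python
# Hypothetical sketch: the same question asked without and with context.
# All names and details here are invented for illustration.

def build_prompt(question, context=None):
    """Prepend structured company context to a question before sending it to an AI."""
    if not context:
        return question
    context_block = "\n".join(f"- {key}: {value}" for key, value in context.items())
    return (
        "You are helping a safety professional. Company context:\n"
        f"{context_block}\n\n"
        f"Question: {question}"
    )

# The general form: a vague question that can only earn a vague answer.
vague = build_prompt("Tell me what I need to know about manufacturing.")

# The context-rich form: the model now knows who is asking and why.
specific = build_prompt(
    "Tell me what I need to know about manufacturing.",
    context={
        "goal": "build a mid-sized bottling plant",
        "workforce": "about 120 hires planned",
        "experience": "20 years in beverage manufacturing",
        "safety focus": "human factors, SafeStart states and CERTs",
    },
)

print(specific)
```

A custom GPT, Gem, or similar tool effectively bakes this context block in once so you don't retype it with every question.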

Danny Smith

The interesting thing I've found with this, and I think you referenced this earlier, is that you can even go to it and say, hey, I'm looking at opening, to use your example, a bottling plant, and I would like some hints and tips on things I should consider. What would you recommend I consider? Just by doing that, you're going to open up the conversation, if you will, and prompt yourself to think about things, and then you in turn are going to be able to give it more information back to fine-tune that. It's that give and take that I've seen as I've used this and talked to others about it: not only can it provide you information, it can help you refine how you're asking it for information. That, I think, is a fascinating thing for me.

For it to be able to tell you, here's a better way to ask me that question. I think that's just amazing.

Eduardo Lan

Yeah. Many times when you're starting out a project, let's say you're wanting to improve your safety stats and you're not really sure where to start, you've tried several things and nothing seems to be working, you can ask the AI to ask you questions. That'll clarify things for you and will create a much better output. So, here's where I want to go, Danny, if you're okay with this: practical uses, so people understand what this can do. First off, again, context is king. The more context you can provide, the better outputs you're going to get. Ideally, you're building a custom GPT, a Google Gemini Gem, or a Claude Project. And what that is, it's very simple: you go to GPTs and, where it says create, you click on create. Then you can have the same set of instructions for that specific function or task every time, and you can upload a series of files that provide it context, so that it knows exactly within which realm it is to think and output its information. That's the most powerful example of context. Once you have that, I'll tell you some of the things I utilize generative AI for. I have an AI for every project, every client, every endeavor I'm in, both professional and personal. At a safety professional level, it can help you analyze, as we said, safety information like incidents and near misses, and match them up against the SafeStart methodology, other methodologies, or your own programs, provided you've given the AI that context. We've talked about the meeting note-taker: incredibly useful in that it takes notes for you, but even more useful in that it becomes a repository of all the knowledge shared in those meetings. It can help you plan out trainings and toolbox talks.
It can help you analyze your safety numbers against government and industry standards. That's the word I was looking for.

Danny Smith

The other thing I'm seeing with this in terms of human factors: I'm thinking about things like, and you referenced this, looking at the injury and near-miss reports that you have. I'm thinking about the amount of time we would spend as humans going through and analyzing these: time of day, location, day of week, and if you've got rotating shifts, first day of the cycle or last day of the cycle. We would go through all of this, where now we can do the data dump, if you will, into AI, ask it those questions, and it can do that so much quicker and so much more efficiently than we can. That gives us more information so we know what we need to do to manage things better. Things like that are just going to help us so much with questions like, okay, how much could fatigue be a factor in some of the incidents we're having? Well, they're happening in the last third of the shift; they're happening on the last day for people working four on, four off; they're happening here. Those things would point us to solutions we might need to implement, or things we could bring to others' attention. This can help us in so many ways where we were sitting there doing the data crunching, if you will, and now this can do that for us, right?
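The fatigue analysis Danny describes is exactly the kind of tally a machine does faster than a person paging through reports. As a rough illustration only, here's a minimal Python sketch on invented incident records (the timestamps, the 22:00 shift start, and the "four on" rotation field are all hypothetical) that counts incidents by shift third and by rotation day:

```python
from collections import Counter
from datetime import datetime

# Hypothetical, simplified incident records -- in practice these would come
# from your own incident and near-miss reporting system. "shift_day" is which
# day of a four-on rotation the worker was on when the incident happened.
incidents = [
    {"time": "2024-03-01 05:40", "shift_day": 4},
    {"time": "2024-03-03 06:10", "shift_day": 4},
    {"time": "2024-03-05 23:10", "shift_day": 1},
    {"time": "2024-03-08 05:55", "shift_day": 4},
]

def shift_third(timestamp, shift_start_hour=22, shift_hours=8):
    """Which third of the shift (1, 2, or 3) a timestamp falls in.

    Assumes a night shift starting at shift_start_hour and that every
    timestamp actually falls within the shift window.
    """
    t = datetime.strptime(timestamp, "%Y-%m-%d %H:%M")
    hours_in = ((t.hour + t.minute / 60) - shift_start_hour) % 24
    return min(int(hours_in / (shift_hours / 3)) + 1, 3)

# Tally incidents by shift third and by rotation day to surface fatigue patterns.
by_third = Counter(shift_third(i["time"]) for i in incidents)
by_rotation_day = Counter(i["shift_day"] for i in incidents)

print(by_third)          # do incidents cluster in the last third of the shift?
print(by_rotation_day)   # do they cluster on the last day of the rotation?
```

In the invented data above, incidents cluster in the last third of the shift and on day four of the rotation, the kind of fatigue signal a person could easily miss when reading reports one at a time.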

Eduardo Lan

That is correct. And I love where this is going in terms of your thinking, because you're starting to see the possible benefits and the functionality of this. The truth is that each of us, within our own industry, our own company, our own lives, is going to have to figure out how to use this, because this is what is known as a meta technology. It's not a specific technology that does one thing; it does hundreds of thousands of things, and only we know what those things are in our own work and lives. Now, back to your example about analyzing all that near-miss and incident data: not only can it do it more efficiently, it can do it more effectively than us, because, again, our brains are not designed to hold thousands of data points. You can put in the work to read that 200-page report and those hundreds of incident and near-miss reports, but your brain won't hold it all, and you will undoubtedly make a biased decision based on how you feel and your history. The AI can help us with that. Now, I think one of the biggest misconceptions is this kind of absolute thinking: is it the AI or is it we who are doing the analyzing? It's both, and that's where this is powerful. You can, yes, catch some of those mistakes that the AI will make, because it is generative, because it isn't perfect, and ultimately it's on you as a safety professional or a leader to make sure that what you are deciding and putting out there is correct. But second, you have a level of very subtle understanding that no machine could ever have. So, the machine might say, yes, let's move Danny to the third shift, because he's an expert on this process and we need somebody there. But I actually know Danny, and I know he has, God forbid, a sick child, and that he can't really go on the third shift because of that.
And if we do move him to that third shift, he's going to be very upset and we might lose him, and we don't want to lose somebody like him. The AI probably won't know this.

Danny Smith

Exactly. Yeah. You've got to have that human touch with it, and that's the thing, right? I think that's where some of the fear comes in as well: people are afraid that the machine, quote unquote, will take over. But it's not. That's the thing. It's not taking over, right?

 Eduardo Lan

Yeah, that's science fiction. What it's going to do, and it will push us into areas that are uncomfortable for us, is take over many of the administrative, burdensome tasks that we're so used to doing, tasks that for many people feel safe because you don't have to go out onto the factory floor and speak to people. In my career as a consultant, I have actually run into people who are incredibly uncomfortable going out into the field and just talking to people.

 Danny Smith

Absolutely. One other quick thought here, and we'll wrap this up. One of the first things I learned in my management career was that if you want to be promoted, if you want to advance, one of the best ways to do that is to train your replacement. And maybe replacement isn't the right word to use here, because we do have that fear of the machine taking over, or of the AI replacing us, but it's not that. It's: what can AI do for me that helps me advance, that takes things off my plate so I can be more effective as an individual, so I can be happier, frankly, as an individual, so I can be more productive for the company and thereby more, quote unquote, promotable? I think there's a lot we can gain from that, and that's something we certainly can explore with this too.

Eduardo Lan

So yeah, and I think that's a good point to end on. Yes, of course there's a learning curve for this, there's a technology curve, and there are things we need to be on the lookout for, but ultimately this is about being more efficient, more effective, more productive, and I would add more human, because it frees us up to be with others. On a personal level and a professional level, we're always looking to see how we can do a better job, and this technology can help us tremendously with that.

Danny Smith

Absolutely. Well, thanks for your time today. I really appreciate it. If folks have more questions or want to sit down and have a conversation over a cup of coffee, what's the best way to get in touch with you?

Eduardo Lan

Yes, absolutely. I would love to speak with anybody who's interested in this, really. You can contact me via my SafeStart email, which is Eduardo at SafeStart.com; that's E-D-U-A-R-D-O at SafeStart.com. You can also contact me via LinkedIn, and we can set up a time to talk about this. I'd be happy to share what I know with you.

Danny Smith

Very good. So that's Eduardo at SafeStart.com. I'd love for you to connect with Eduardo and talk with him some more; I'm sure he'd love to talk about this. You can tell he's a little bit passionate about it. Thank you again for your time today, really appreciate it. And on behalf of Eduardo and the entire SafeTalk with SafeStart team, thank you for your time today. I hope you picked up something from this, and hopefully it generated some more questions. That's what it always does for me; anytime we talk about AI, it gets the wheels turning in my brain a bit, and that's part of the purpose of what we're doing today. I'm Danny Smith for SafeTalk with SafeStart. Thank you so much and have a great day.