
CE Podcasts for Nurses
Listen, Learn, and Earn CE Hours with Elite Learning. Elite Learning is among the first to bring you nursing podcasts that are part of an accredited continuing education activity. With real-world examples, interviews with subject matter experts, practical insights, and the opportunity to earn nursing CE hours, we're taking learning to the next level. Subscribe and never miss a chance to listen, learn, and earn nursing CE. Learn more at elitelearning.com/podcast
Embracing AI in Healthcare: Enabling Nurses to be Nurses Episode 1
This is episode 1 of the series: Embracing AI in Healthcare: Enabling Nurses to be Nurses
Episode 1: The Role of AI in Nursing
Artificial intelligence is reshaping healthcare by streamlining workflows and reducing administrative burdens, allowing nurses to focus more on patient care. This episode explores the current applications of AI in healthcare, its impact on patient outcomes, common misconceptions, and the importance of balancing AI’s capabilities with human clinical judgment. While AI is a powerful tool, critical thinking and professional expertise remain essential in patient care.
Episode 2: Implementing AI in Nursing Workflows
Successfully integrating AI into nursing practice requires thoughtful implementation, leadership support, and effective training. This episode delves into how AI can enhance efficiency, improve patient outcomes, and support nurses in their roles while addressing key ethical considerations. By understanding best practices in AI adoption, nurses can leverage technology to optimize care while maintaining their central role in the patient experience.
---
Nurses may be able to complete an accredited CE activity featuring content from this podcast and earn CE hours provided by Elite Learning by Colibri Healthcare.
Learn more about CE Podcasts from Elite Learning by Colibri Healthcare
Series: Embracing AI in Healthcare: Enabling Nurses to be Nurses
The following transcript has been lightly edited for clarity. Elite Learning does not warrant the accuracy or totality of audio transcriptions provided by an independent contractor resulting from inaudible passages or transcription errors. An occasional transcription error may occur.
Guest: James A. Lomastro, Ph.D.
Dr. James A. Lomastro is a seasoned senior administrator with over 35 years of experience in healthcare operations, financial analysis, performance improvement, strategic planning, and workforce development. He has played a key role in various Massachusetts healthcare reform efforts, particularly in the integration of hospitals with post-acute and community services. Dr. Lomastro's academic background includes teaching at Northeastern University and Boston University School of Medicine, along with adjunct roles at other institutions. He has served on numerous boards and committees and has been a healthcare surveyor for 22 years. He has written extensively on healthcare reform, including an online course on the topic. He continues to survey and accredit healthcare facilities and is a member of the coordinating committee for the Dignity Alliance of Massachusetts. He recently published an article in Nonprofit Quarterly on AI and healthcare.
Host: Candace Pierce, DNP, MSN, RN, CNE
Dr. Candace Pierce is a nurse leader committed to ensuring nurses are well prepared and offered abundant opportunities and resources to enhance their skills acquisition and confidence at the bedside. With 15 years in nursing, she has worked at the bedside, in management, and in nursing education. She has demonstrated expertise and scholarship in innovation and design thinking in healthcare and education, and collaborative efforts within and outside of healthcare. Scholarship endeavors include funded grants, publications, and presentations. As a leader, Dr. Pierce strives to empower others to create and deploy ideas and embrace their professional roles as leaders, change agents, and problem solvers. In her position as the Sr. Course Development Manager for Elite, she works as a project engineer with subject matter experts to develop evidence-based best practices in continuing education for nurses and other healthcare professionals.
Episode 1: Embracing AI in Healthcare: Enabling Nurses to be Nurses
Transcript
Candace Pierce: For too long, nurses have been buried under just mountains of paperwork, endless documentation, and administrative tasks that really pull us away from the bedside. Did you know the average nurse spends nearly half of their shift on administrative work? And that's time that we could be spending holding a patient's hand, we could be educating families, and being with our patients to help catch those early warning signs of complications. But what if we could change that? What if artificial intelligence could handle the routine documentation, maybe automate the scheduling headaches, and really help us in flagging important patterns in patient data, all while letting nurses focus on that irreplaceable human element of care, the kind of care that really made us choose this profession in the first place. I am Dr. Candace Pierce with Elite Learning by Colibri Healthcare, and you are listening to our Elite Learning podcast, where we share the most up-to-date education for healthcare professionals. Thank you for joining us for our series called “Embracing AI in Healthcare, Enabling Nurses to Be Nurses.” And our goal through this series is to help empower you with the knowledge and tools needed to help embrace AI and really enhance your practice. Joining me for this discussion is Dr. James Lomastro. James, welcome. Thank you so much for joining us for this discussion. And I wanted to start with, can you tell us a little bit about yourself and how you got involved with AI?
James Lomastro: Well, I've been in healthcare for almost 30 years. I spent a significant amount of time in hospital administration. I got involved very early in my career when I was at the BU School of Medicine. We were doing research, and we had our own computer in those early days, an old PDP 11/45, and I got to do interactive data processing. So right from the beginning, I was on a computer, typing at a screen and doing that type of thing. Progressively through my career, because I had that early experience, I was always the one, the administrator, the senior manager, who they went to and said, we need to implement AI, we need to implement data processing systems. You know a lot about it. You've done interactive data processing. Let's go. In addition, I've owned a Mac, a personal computer, for about 40 years. I have one of the original Macs sitting on my desk right back there. So personal computing has always been kind of a hobby of mine. I remember being interviewed some years ago as one of the few CEOs who actually worked on a personal computer and did his own typing.
PIERCE: So, before computers became popular, you had one!
LOMASTRO: Yes. As a matter of fact, the only benefit I asked for when I went to work at Mercy Hospital in Springfield was a personal computer. I think Sister thought I was going to ask her for a car. She was very relieved I asked her for a computer. So, I've been really, really involved in this whole process, very interested in it, and I've learned a lot just by using it. And in recent years, I've had a lot more time to spend investigating artificial intelligence and the way in which it's going to radically transform a lot of what we do. One of the advantages we really have is that everyone knows how to use these personal devices. These are not foreign instruments. Most nurses have a phone, a tablet. So, it's a lot easier transferring that type of skill to their workplace.
PIERCE: Right, absolutely. Can you start by really helping us understand what artificial intelligence is and how it is currently being used in healthcare?
LOMASTRO: Well, artificial intelligence is a simulation of human intelligence. It does many of the tasks human intelligence does through the use of algorithms, which are just the formulas it follows; machine learning, which is a little more complicated, where the system learns from data rather than following explicitly written code; and natural language processing. Those are the three ways in which it works. But the focus of AI is on data collection, analysis, and rapid performance of tasks. That's what it's really good for, and that's where it excels. And it's used extensively in healthcare right now. You may not even know it's being used. It's used in diagnostic applications, data analysis, and electronic medical records. There are some radiological systems that employ it. It's used in personalized healthcare and predictive analytics, and in pharmacies to check for drug interactions. And of course, a lot of it is used for administrative and operational efficiency, to cut down on tasks and do various things like that. I recently saw the rollout of, well, we've had smart interactive devices for a while, but an actual robot that incorporates artificial intelligence to serve as a personal assistant. It's not going to replace a person, but it's there to do a lot of tasks that might otherwise be done by people.
PIERCE: Yes. So, looking at artificial intelligence, how are we going to see AI technologies supporting nurses in the workforce, as far as their daily tasks and the decision-making processes they have to do?
LOMASTRO: Well, the main thing it's going to do is reduce a lot of administrative burden. A lot of effort is spent getting information into computers, into the electronic medical record, for example, from the various devices that are used, whether it's a pump or a blood pressure gauge. All those devices are going to be connected, so that when the nurse actually takes the blood pressure or loads the pumps, all that data is uploaded. She doesn't have to write it down. She doesn't have to be bothered with the recording. Simple things like that. In addition, I've seen some applications where the nurse carries around a personal device, basically a modified cell phone, on which they can dictate their notes and some of the patients' requests, and all that information goes into the system as text. At a later time, she can retrieve it. Or AI can look for words or expressions in those notes in order to alert the nurse that something is going on that she might not be completely aware of. It takes a lot of the time-consuming things that the nurse does and allows her more time to spend with the patient. At the same time, it may be recording significant amounts of information that would normally be missed because people just don't have the time and attention to capture it.
PIERCE: Right, when you have so many patients that you're looking after, and you have so much information. And I love that you were talking about how systems can feed into other systems, because in a lot of the places where I have worked, that's already happening. We're already seeing what artificial intelligence can do without realizing it. We see it as, that's so helpful, I can just feed that over and choose what I want to pull into the charting, into the documentation system, but we didn't connect that with, oh, that's artificial intelligence helping me pull that over. I also loved where you were talking about having a personal device where you can record something. I think that would be so helpful, because you walk into a patient's room and you notice these things, and you end up writing on a napkin or a piece of paper in your pocket until you can actually get back to a computer, just so you remember what you need to document. Instead, I could pull out that device and record it. And I assume you're saying AI could pull that in, and then I could go in later and make changes to it, grammatical changes, or maybe I said something that was subjective rather than objective and I need to take that out. But having it there as a starting point is better than trying to keep up with that piece of paper or that napkin.
LOMASTRO: And those devices have multiple uses. I've even seen them incorporate buttons so that if a person needs help, they don't have to call out in the hall. So those devices are very useful, and they're in use; I think the Veterans Administration has pioneered some of these. In addition, what they put on those devices is kind of interesting. They'll also put on query systems. So, if the nurse is not sure how to operate the pump, maybe they changed the pump from the first time to the second time, she can take a picture of the pump, and it will download instructions on how to use it and how to connect it. She doesn't have to leave the room or call somebody else to do it. So, I think a lot of these personal devices may minimize the number of requests when you're not sure of something and want to check it out. You don't have to go to the nurses' station; you can still be at the bedside and check it out, and you're not disturbing somebody else who's trying to get their work done. And if you're involved in a team, a lot of those notes can be uploaded to a team note, and the AI can go through the team note and flag things: maybe I need to alert this other person about this particular condition, or maybe I need to call the therapist in because the patient isn't walking correctly. Also, I know that in large facilities you have a lot of expertise, but it's difficult to get access to it. I had one facility where we had about 25 or 30 rehab nurses and another hundred therapists, inpatient and outpatient, all with special expertise. We had loaded all that information into a blue book. I could imagine taking that information and putting it into a program and a file, so that if a nurse needed another nurse with special expertise, she could just call that expertise up and say, I'm running into this urinary problem, I'm not really sure what to do about it. Can you give me some instructions or help? Or can you come and take a look at the person? That whole referral process can be automated as well.
PIERCE: Right. And since you bring up referrals, what about the referral process, both within your facility and with insurance?
LOMASTRO: Yes, there are some really interesting applications that connect practitioners directly to case managers and insurance, especially if they need approval. I've seen applications of Teams and some of the AI tools in which the insurance person or the approval person has a channel they can connect to. It's all private, it's all secure, so if they need approval for a certain procedure or something like that, they can just get on that channel and get the prior authorization approved or not approved. It makes prior authorizations a lot quicker; you're not waiting on paperwork. And the other piece is that it alerts the person that the authorization has been approved. Same thing with an order. If you put a request for an order through to a physician, the physician transfers the order back to you. It can all be done electronically, and it can alert the person that the order has been transferred.
PIERCE: There are a lot of things that can definitely help with our time management and open up more time for actual hands-on patient care. How does AI improve patient outcomes and safety in healthcare settings? I know we're seeing how it helps with our time management and gives us a little bit of time back to really care for our patients, but how does it improve patient outcomes and safety?
LOMASTRO: Well, in terms of outcomes, the more time the nurse has to spend with the patient, the better the patient experience. If you can eliminate tasks so the nurse can spend more time in direct contact with the patient, that's going to improve the outcome. One area where you can improve outcomes is medication management, where there are a lot of interactions and things like that. We know that a lot of the medication errors that occur are really systemic issues, and AI can help address those systemic issues by alerting the nurse to the possibility: this doesn't make sense; this medication is not usually given for this particular condition. Maybe it is appropriate in this case, but he or she can check it out. It also makes the whole transfer of data a lot more efficient, so there are no errors happening because someone writes something down incorrectly or passes a note incorrectly; there's a direct transfer of information. Plus, it can check the accuracy of what you're doing. Did you really mean that when you put that in? It gives you feedback on that. That can eliminate a lot of time spent in recovery: I took the wrong order, it wasn't correct, I did this and did that. So, I think that's where it can help significantly.
PIERCE: What are some common misconceptions about artificial intelligence and its use in healthcare?
LOMASTRO: Well, the first misconception is that it's going to replace the nurse. Let's get rid of that one. It's not going to replace the nurse. There's no way it's going to replace the nurse. Healthcare is probably one of the few areas where you just can't use robotics or anything like that alone; there has to be somebody there who makes the judgment. Another misconception is that it's going to de-skill the nurse, in other words, that the nurse won't have the same skills she had before because the system is so efficient and intelligent. It's not going to do that. If anything, when it works right, it's going to re-skill the nurse; it's going to let her concentrate on the skills she's really good at rather than on the administrative tasks that take a lot of time and paperwork. Another misconception is that it's infallible. It's not; it can make mistakes. That's why you're still going to need good nurses who are competent, who are assessed for their competency, to be able to ask, is it giving me the right information? We have that whole issue, and I've probably encountered it daily, of AI hallucinations, where it gives you a source and you're looking at the source and saying, oh, that's an interesting source, I didn't know about that. You check it out, and there's no source. It made the whole thing up. So, you've got to be careful that things aren't made up. You go to Google Scholar because you want to see the study, and the names are right, but the study isn't there. It's getting better. When I first heard about it, about a year ago, it was really bad; you had to check almost everything, and you should still check everything, go back and say, let me just ask it again. And it's getting very good at correcting itself. Don't be afraid of correcting it. If you're using an AI system such as ChatGPT or Claude or Gemini or anything else, tell it it's wrong: I don't think this is right. It'll recheck its work and come back to you and say, yes, I was wrong. AI systems don't think like people. Because we're sentient, because we feel, we have emotions, we see a lot more. The nurse looks at the person and senses something's wrong with that person, something's just not right, and artificial intelligence is not going to pick that up. Nurses just know something's wrong because they've seen it before; it's in the back of their mind, they're feeling it, and it comes out that something's wrong. You can't operate it without humans.
PIERCE: So, AI works strictly off of what we put into it. Therefore, it is fallible, because the data that's put in, or taken in, is not necessarily correct.
LOMASTRO: Yes, but part of the problem might be that because the AI system is connected to other data systems, it might incorporate other data. So, you've got to be careful of that. It may not be that you put the information in incorrectly; the system may be pulling information from other sources, and that's what you have to watch for. It doesn't replace clinical judgment. And the implementation of it is not effortless; it's not just pressing a button. You have to really work at it. You have to learn how to use it. It's like every skill: you have to learn how to use the system, and some people are better at it than others, just like some people are better at giving injections than others, and some people are better at IVs than others. It's the same thing with AI. Some people are better than others, and if you have a particular person on your team who's really good at it, they can become a mentor for the other people, just the same way that when a nurse has a problem getting an IV, she might call somebody else who's really good at finding the vein and putting it in. Same thing with AI: I'm not getting anything out of this, I don't know how to use it, I can't get what I'm looking for, so let me ask somebody else.
PIERCE: And we know bias is a word that's used a lot today around cultural competence and the way we think. With AI, we might assume it's going to be more objective, because it shouldn't have the subjective information being put into it, or at least what I would consider subjective, the things I wouldn't document, so it's working from more objective information. But it could still have bias in the information it gives me, correct?
LOMASTRO: Well, yes, because the bias gets built into the algorithms, and that's where the real problem is. Whoever is building the algorithm may unconsciously put in a bias, whether it's gender, racial, ethnic, or their own pet theories about how things should be, depending upon how the algorithm is built. So, if people start seeing results that aren't consistent with their own judgment, they can start looking at whether the algorithm is actually biased, and whether someone should be reviewing the algorithm to ensure that bias isn't built into it. There's a whole literature on racial bias in algorithms. It comes down to the way the system is trained and the way the algorithm is built. It's not completely objective; it's what is put in there, and biases can sneak in. And the only way they're found out is if somebody gives feedback. It's a whole feedback loop. It doesn't get better unless someone says, I have a problem with this. This thing isn't working the way I think it should be working. The results are just not correct. What's going on? I've done everything it said; I've looked at it. So maybe I need to talk to the people who designed the algorithm to make sure it's correct. Maybe it needs more variables put into it. Maybe it needs to capture more of what we're looking at. And that's an important thing that nurses can provide, because they're the hands-on people; they're seeing what the results are. They need to continually give feedback and not be shrinking violets about it: this is not what I want, I'm not getting what I need, I'm getting a lot of garbage. Or it's helpful in this area but not in that area; it might be helpful in managing a diabetic patient but striking out with cardiac patients. So, what's going on here? Provide that feedback so the system people, the IT people, can look at what's incorporated into the algorithm. They can ferret that out. They can look at the algorithm, see what variables it's pulling in and how it's weighting them. And it might not even be that it has the wrong variables, but how it weights the variables in the algorithm.
PIERCE: Now, looking at the information that we put into an artificial intelligence system, are there privacy risks in adding that information to these systems?
LOMASTRO: There are always privacy risks. Anytime you have an access point to a computer, there is a privacy risk. I actually surveyed a hospital in California that had been hacked, and they'd been hacked through a remote location, by way of a part-time physician's assistant who got into the computer, wasn't careful, and opened the whole system up to hacking. So, what needs to be done is to put privacy safeguards in place, and the way you do that is through training. You train people on these privacy systems. You have to be careful what you put in, how you put it in, and who has access to it. It goes down to even simple devices. Make sure you lock your device, that it's password protected and everything else like that. Just as when you leave your car, you lock your car; when you leave the computer, you lock the computer so no one else can get in and no one is watching you do it. And you change passwords frequently, which I know is really a pain, but all those things can allay privacy concerns.
PIERCE: So, there are a lot of myths and common misconceptions you were talking about there with the use of artificial intelligence, especially around human oversight, where we're the ones putting that information in. We need to make sure the information, and the places it's pulling information from, is actually giving it correct information. We have to remember those ethical practices of protecting our patients' privacy and keeping sensitive data where it needs to be so that it can't be taken. And we also have to understand the capabilities and the limitations that artificial intelligence brings. One of those limitations you mentioned earlier was hallucinations. So, I wanted to circle back to that and see if you could tell us what these hallucinations are, how they can impact AI's use in healthcare, and how we can minimize their effect.
LOMASTRO: Well, as I mentioned, a common one, even outside healthcare, is giving you the wrong source: the source you're going to go to and operate under. Or giving you a wrong set of instructions. For example, you go in and say, okay, here's the case, what do I do? It makes up instructions that are complete fabrications. It makes them up because it may not know the answer, and instead of telling you, I don't know, I don't have information on that, it kind of wants to help you. It builds this little dream world and says, oh, here's your answer. So, you really have to check for those hallucinations.
PIERCE: How do you determine that it's a hallucination? How do I look at something and determine that?
LOMASTRO: You have to be grounded. You have to know the field you're dealing with. You have to be good at what you do. You have to be a good nurse, a good practitioner. You have to have knowledge and competency in that particular area. That's how you determine it, because otherwise, if you're not grounded in the particular area you're in, it can give you false information and you won't know it. So, for example, if I'm working on a project in an area where I have a lot of knowledge, say I work with people with brain injury, so I have a lot of knowledge of brain injury, and it gives me information about brain injury that isn't really the case, I can say, that's not the case, let me check it out. Let me see whether this is really what I'm supposed to be doing, what course I'm supposed to be following, the treatment that's really prescribed, or the type of behaviors that should be expected for someone who has that particular condition.
PIERCE: So there has to be continual oversight by a nurse, a physician, a therapist, whoever it is that's using this. There has to be that continual oversight where we go back and review it. We should not be saying that whatever artificial intelligence gives us is the final decision. Would that be a good way of putting it?
LOMASTRO: Yes, it's a tool that you use. You don't use the tool as the final decision. You use the tool to help you make the final decision and make a judgment call.
PIERCE: Right. And that really helps, I think, with reducing that administrative burden, where we're doing all that research to try to find a diagnosis or to find the trends. That's a good example as we go into the next question, which is: how can AI help with reducing the administrative burden on nurses?
LOMASTRO: Well, the administrative burden comes mainly from paperwork and the transfer of information. What AI can do is make the transfer of that information a lot less burdensome, because it can analyze data quickly and in some cases more effectively, since its whole task is to analyze the data. But you have to bring in other digital platforms as well. For example, there's no reason someone can't just dictate a lot of that information instead of having to write it down or type it in. We have natural language processing systems now that are fairly accurate at dictation, so that information can get in without being typed out or written down, and people can dictate a lot quicker than they can type. They can dictate anywhere using a small device, and that information gets into the system, and that relieves the burden. And in terms of the transfer of orders and procedures and treatments and everything else like that, that can all be done electronically.
PIERCE: There are some really great examples that you've given us of how we can use AI. A lot of people think of AI as just the different platforms you were talking about, where we type something in, but we're actually already using some artificial intelligence tools. I didn't think of them as artificial intelligence until you started talking about them, but they're ways we're already easing into relieving some of that burden that keeps us away from our patients.
We are at the end of our time for episode one. Thank you, James, so much for joining me for this series. It's just a pleasure to hear you talk about this and to see the future direction of where we're going in healthcare. And to our listeners, thank you for tuning into episode one, and I hope you'll join us for episode two, where James will be back to help discuss some practical strategies for how we can look at integrating more AI into our workflows.
Episode 2: Embracing AI in Healthcare: Enabling Nurses to be Nurses
Transcript
Candace Pierce: This is Dr. Candace Pierce with Elite Learning and back for episode two is Dr. James Lomastro. In this episode, we are going to focus on integrating AI into our workflows from training to data management and the ethical considerations for using AI. James, thank you so much for joining me for episode two.
LOMASTRO: Thank you.
PIERCE: Yes, well, before we get started on these questions, I did want to highlight some workflow statistics. We know that there are some real problems today, with nurse burnout at an all-time high. And then we have some vastly differing statistics on how much time nurses actually spend on documentation and all those administrative tasks, which can range from as low as 21% to as high as 41% of a shift. Having worked at a few different hospitals, I know this is true on both ends, depending on the processes of the facility you work in. But what didn't differ was the amount of time spent on direct patient care, which was around 19%. And every 10% increase in administrative tasks correlated with a 15% decrease in patient satisfaction scores. James, what are your thoughts on these statistics?
LOMASTRO: Obviously, we want to increase the amount of time that the nurse spends with the patient, and we should use any device, any tool, any integration of systems we can to do that. It's certainly going to improve patient engagement and patient satisfaction. Burnout is a very interesting concept. There is a lot of stress in nursing, and I don't think you're going to eliminate the stress in nursing. But I'm not really sure the stress is the real reason why people burn out. I think they burn out because they don't have the tools to be able to bounce back. They have to be resilient, to be able to say, okay, well, that was a crisis, we went through that crisis, now let's go on. So, if they're burdened down with a lot of mundane administrative stuff, it takes a lot of time and effort out of them, and they're not going to be there and available for the patient. I think anytime we can put a system in place that increases the amount of patient-nurse contact, we should do that. And that could really be, as you said, the benchmark for whether or not AI is working: how much the time for interaction between the patient and the nurse increases. And the more time the patient spends with the nurse, I suspect, the higher the patient engagement and patient satisfaction.
PIERCE: Absolutely. And I think you really hit the nail on the head: regardless of what tasks we have to do, there's too much time spent on these administrative tasks and not enough time given to the nurse to be at the bedside and be the nurse they want to be. I also think that plays a bit into the burnout we're seeing, not being able to be the best nurse they want to be, because they can't spend the time they need at the bedside with the patient rather than with electronics or paperwork or the telephone. But I did look at some implementation results. Do you want me to share those? Okay, so what I saw, and it was a really small sample, was that hospitals that have started using AI for a lot of these mundane administrative tasks reported a 30% to 40% reduction in documentation time. And the other piece of data I found was that healthcare facilities that used AI for scheduling reported a 20% improvement in staff satisfaction. There wasn't a lot of data, but would you say we're still in the thick of the development of these tools?
LOMASTRO: We're really in the thick of the development. We really haven't started rolling out AI programs broadly. I think one of the systems that's really pushing this whole thing is the Veterans Administration. They have made a significant commitment to this, and I've been in a fair number of their facilities doing accreditation, surveying them. I've seen a lot of improvement over the years as they implement their systems and put this in place. They're one of the few where everyone has a device, and that device is connected to the different systems. And they're constantly improving that system, constantly working the AI and IT connection with the nurses and practitioners, so that they're increasing the amount of interaction with their patients, because they're taking away or minimizing a lot of the paperwork and recording tasks.
PIERCE: Which is really good. And that's really what I want to talk about, too: if you're in a facility that has not started integrating AI into its workflows, what are some effective strategies to help with integrating AI into your nursing workflow?
LOMASTRO: Well, the first thing you do is get the nurses themselves and the leadership talking about how we can make this whole system better. Many nurses move from one system to another, so you may have nurses who come into your system, which isn't using a lot of AI, who have used it in other places. Those are the ones who can become the champions for using AI in the system, or administrators or managers who come into the system and say, hey, look, I've been at this other system, or I trained in this system. They may be younger or newer nurses who trained in bigger or more sophisticated facilities, although size isn't really the only factor; smaller facilities can have AI as well. So, they need to, and I use the word agitate, push for looking at this as a way of reducing their paperwork and the time they spend doing that type of work, because they may have seen it in other places and they may see the benefit of it.
PIERCE: Absolutely. And what about the training that needs to come along with integrating AI into a nursing workflow? What's the best way to train them?
LOMASTRO: Well, the first thing you need to do is have all the people who are going to be impacted by those systems involved in the process. And the best way to do that, of course, is a needs assessment. Although we know what the problems are, it's good if we go down to the bedside nurse and ask them again: what are your needs? What do you spend a lot of time on? What would you like to see done differently? So, we're not just taking a package and applying it to them. Then, in terms of staff development, we need to have some sessions on basic AI principles. What can AI do? What can't it do? There's a certain amount of literacy that has to be built up, because you want to build up the skill of using AI, and like every skill, you're going to need a certain introduction to it. You have to know the basic concepts and how to use them. The next step should involve some hands-on experience, and start small: here's a simple way we can integrate AI into our system, let's try it. Let's put some mentors on the floor. Let's get some champions, some people who may get it quicker than others, and let them start the process of implementing it. And there's a constant need for feedback: is it working? Is it making your job easier? It's going to take a lot of time, a lot of effort, and a lot of listening to people. You can't just take a system off the shelf and say, here; even if it works at one hospital, it might not work at another hospital or another facility. You need to customize it to the culture of that particular facility. You know, one facility may have a lot of younger nurses who are very adept at using iPads and cell phones, who are on Instagram and used to moving things around, pictures and everything else like that. Another facility may have a lot of older nurses who haven't had the same experience. Yes, they use Facebook or FaceTime for their grandchildren or their children, but they may not have the same level of expertise. You have to adjust to that. And even within a facility, you can have various levels of expertise. So that's what you really have to do. I think it starts with a solid needs assessment, and then you can start developing strategies for effective AI integration.
PIERCE: Right, yes: really knowing what you need, being able to build what you need, and having people start at the ground level and help build it up so that you have buy-in. I would think that would be really important.
LOMASTRO: Buy-in is critical. It's not going to work unless there's buy-in. I remember when we went from a paper record to an electronic record, and we didn't spend a lot of time getting buy-in. It took us years to really see the real use of the electronic record, because we weren't listening; we just put the old paper record on the computer. Instead of being on pieces of paper, it was now on the screen. There was no real advantage; you couldn't sell any advantage. So what? It's the same record, just on a screen.
PIERCE: Yes. I want to hold it, I want to flip it, I want to use my pen and my pencil. And I still feel that way today. In critical care, we had this specific documentation piece of paper, and you flipped it; it had all these different tabs, you turned it around, and other people wrote on it too. It was just right there in front of me. I don't know, I still miss that. Having to click on all the different tabs and all the different pages to find what I need, versus when it was all just right there in front of me, you know, except my labs.
LOMASTRO: Well, it was not really all there.
PIERCE: What I needed at the time was there, and then, you know, you had the other pieces of paper.
LOMASTRO: You still had the physicians whose writing was illegible. It took a long time for the electronic medical record to really come into its own and deliver on its promise.
PIERCE: Yes, and you still have people who don't like to put orders into the computer. They still like to call you; they still like to try to get you to put them in. And even today, gosh, it's been years that we've had electronic health records, you still see that there's not 100% buy-in from everybody yet, which I find interesting.
LOMASTRO: Well, I think it's going to change. I have a good example: I have a granddaughter. We were in Scotland, and she hadn't signed her passport, and the customs agent said she needed to sign her passport. He said, make sure she connects all the letters, because she didn't know anything about cursive; she used block letters. Getting on the phone, doing things, moving pictures around, she's fine. But give her a piece of paper with something written on it, and she can't handle that. So, I think eventually, because it's so pervasive, things are going to change that way, because the culture in general is like that. But in the beginning, when people didn't have their own computers and laptops and iPads, it was a little esoteric.
PIERCE: Right. It took a lot of time to learn how to get to where you needed to go. And now it's been around a lot longer, so it's getting a little bit easier. But when it comes to data management, what role does data management play in implementing AI?
LOMASTRO: Data management is essential, because you want to make sure the information going in is valid. It's the right information, it's correct, it's reliable. And when it gets transferred from one place to another, it gets transferred accurately, because sometimes it isn't; you don't want drop-offs occurring. The problem with data management systems is that they're so large that they're often managed by data management departments, and sometimes those can be their own little kingdoms. People need to understand that it's the patient's data; that's the data we use in order to treat the patient. So, we have to be very careful that we have full access to that data and that the data is actually reliable. Even for the people who, as it were, control where that data is stored, it's not their data. They're there to support and help and assist the nurses. Sometimes the IT departments see themselves not as the support function they should be, but as an entity unto themselves. And I shouldn't have to ask them how I can do something; I should be able to say, I need to do it this way. Make it happen.
PIERCE: Right. And looking at implementing artificial intelligence from that leadership perspective that I know you have: how can healthcare systems be active in involving nurses and other healthcare professionals to get buy-in, to help design and optimize the use of artificial intelligence, and to address the practical concerns that their employees have?
LOMASTRO: Well, the leadership has to be committed to it. They have to say, we're going to use whatever systems we can, and keep their eyes on the prize. The prize is to increase patient engagement and satisfaction, support the practitioners who are at the bedside giving the care, and make operations as efficient as possible. Leadership really needs to be committed to those principles. If an AI system can improve the time a nurse spends with a patient, they need to be committed to it, support it, and provide the resources to do it. The process has to start at the top and work its way through all levels. There can't be any levels that aren't connected to it, because otherwise it will fail. The nurse is the end user of the system, but she depends upon a lot of other people up the line and laterally to support that. So, leadership has to really be committed to it, and nurses must insist that leadership is committed to it, because it has been shown to reduce paperwork, it has been shown to help nurses avoid making errors, and it has been shown to increase the amount of time the nurse can spend with the patient. Those things are important, those things are critical, and nurses need to press upon leadership that part of its responsibility is to support them in that endeavor.
PIERCE: And to pick your leadership brain, what are some of the practical concerns that you've seen, that organizations should be aware of when it comes to adopting something like this?
LOMASTRO: Well, I think it's the newness. People are just used to doing things a certain way. An administrator or a leader may come in, and this thing is kind of foreign to them, and it involves relearning. And there's always resistance to change. How do I know this is really going to work? How do I know this is going to optimize anything? Maybe it's just a new fad. To some extent, most leaders are somewhat conservative. They don't adopt innovative ways of doing things; I mean, they may talk about innovation, but they're pretty conservative. And to some extent, their major purpose is to make sure the organization stays afloat. So, they may be concerned: if I invest a lot of money in AI and it doesn't work out, what does that mean for the bottom line? What does that mean for our ability to get other resources? If I don't invest in AI, maybe I can buy another PET scanner, or I can renovate the emergency room, or something else like that. Those are all real concerns. In terms of getting leadership on board, I think a lot of it has to do with people showing them the results. And as it gets more accepted, as more people are used to it, they'll insist on having these kinds of tools, just like other tools. I mean, if you'd been used to an electronic monitoring system and you come into a facility that's still manually checking vital signs, you're going to say, why are we still doing this? And you could be a change leader in that. The change doesn't have to come from the top; it can come from the bottom. And they can actually...
PIERCE: Yes, definitely being a change leader is very important. And looking at...
LOMASTRO: Go ahead. I'm sorry.
PIERCE: No, I was going to say, looking at being a change leader and bringing artificial intelligence into your facility, one of the concerns that jumped to my mind, and I don't know if it actually is a concern, is this: could it be that if we bring artificial intelligence in and it helps take some of that administrative work away, the facility is then going to say, well, I don't need this many nurses anymore on this floor? I can decrease the number of employees I need in this hospital because I've increased their amount of time at the bedside. Which, I think, really just puts the nurses back at square one, back to the same issues they were dealing with, right?
LOMASTRO: Well, you'd be less than honest to say that that's not a concern, and that some may well use it as an excuse for reducing hours or making up for short staffing. It's just not going to make up for short staffing. I don't think there's enough staff as it is in hospitals, nursing homes, or other healthcare facilities. So anything we can do to reduce the time the nurse has to spend on these tasks is going to go into patient care, and that's going to increase the hours they're able to spend with the patient. And I have a feeling that if we look at the hours spent per patient right now, given the staffing shortages and everything else, they're probably below what they should be, and below what we need for out-of-home care. But related to that, nurses just need to be vigilant, because certain leaders or certain administrators may see this as an occasion to cut. I can't say that it doesn't happen, and I can't say that I don't see it in the literature, especially in the so-called industry literature, where they're looking at AI as a way of reducing staffing rather than enhancing the patient experience. It is a definite risk.
PIERCE: Yes, that's saddening, but there are always risks that come with these advancements. And I do think that ties us into ethical considerations. What are the ethical considerations that we should be thinking about when we're using AI in practice?
LOMASTRO: Okay, well, there are a lot of ethical concerns with AI, and for all my enthusiasm for AI, I don't mean to minimize them. The nurse is still going to be responsible for the patient. She's still going to be responsible for maintaining patient privacy and confidentiality, as well as for the patient care. We're not going to be able to use AI as an excuse for not being aware of that. Also, depending upon the extent of AI involvement in a facility, the patient should be aware that the organization and the nurse do use AI as a tool. And we have to be aware that when AI systems fail or cause harm, we're still responsible; the responsibility lies with all of us who implemented the system, the institution as well as the nurse. There have to be clear guidelines to establish accountability and correct any mistakes. There are going to be errors; errors are missing the target, so to speak. We have to look at those errors and ask what we're going to learn from them, and not just assess blame. Hopefully we've gone beyond the blame game. Just as you could have a medication error where the person gets the wrong medication even under the old system, with AI we have to look at what systems failed that particular nurse in terms of providing that particular care or treatment or information. The nurse also has to have a role in how the AI is implemented. And as we mentioned before, with the biases that are built into algorithms, nurses can be very helpful in understanding where the biases may come up, because they're seeing the results. There are going to be a lot of issues related to equity and access. Hospitals and facilities located in low-income areas may not have the same access to AI as other facilities, because they don't have the same resources, and rural facilities are in the same position. So, we have to ensure that, as much as possible, we're rolling out those same systems, or at least as many as we can, to those facilities. There also has to be transparency. AI is not a black box. You can look at the process, the steps, the information, and everything else. So, if something goes wrong, or there's a near miss, we can go back and ask, what happened here? What steps do we have to take to correct it? What is our responsibility and accountability for correcting this particular system?
And again, you've got to be careful that there's no overreliance on the system. It's not going to do the nurse's job. It's not going to make up for a nurse's or practitioner's lack of competency, and it's not going to skill them if they don't have the skills to begin with. We have to make sure that nurses have the ability to override AI suggestions, because they're the ones responsible for doing that if they feel it's in the patient's best interest. We have to make sure we give as much training as we possibly can. And, as another ethical concern we talked about before, we have to make sure it isn't used as an excuse for cost cutting or reducing staffing.
PIERCE: Yes, there's a lot to think about when it comes to the ethical considerations of using AI: making sure that the way we're using it is effective but also safe for patients, and that we maintain that oversight so that we are the final say, not the AI or whatever technology it is that we're looking at. So, when it comes to using AI and trying to bring it into your facility, are there resources and support systems available to help us look at how we can embrace AI in practice?
LOMASTRO: There are, especially for nurses; nurses seem to be getting into this a lot. The American Nurses Association has sections on health information and management systems. University programs are starting to get more involved in it. I live relatively close to the University of Massachusetts, and I just noticed that the nursing department there has developed a relationship with an engineering department; someone I know, a former dean, is heading that up, and it's becoming an important part of their curriculum as well. There are professional informatics associations, and I think there's even starting to be a certification in nursing informatics. There is an Alliance for Nursing Informatics, which is a group that looks at the implications of AI for nursing. And there are a number of online platforms, both general ones and ones specific to nursing.
There are getting to be a lot more resources out there. And of course, they're all available online, which is really wonderful.
PIERCE: Yes, absolutely, so that we can easily find them. As we close out, go ahead.
LOMASTRO: Yes, and if somebody wanted to be certified, they could be certified online. That makes it a lot easier than having to travel somewhere.
PIERCE: Yes, absolutely. There's so much to be thinking about when it comes to artificial intelligence. And as we wrap up episode two, James, is there anything that you want to make sure that you share or emphasize to our listeners?
LOMASTRO: Just don't be afraid of it. It's an instrument. It's a tool. Embrace it with caution. Embrace it knowing that it's going to help you out. But don't be afraid of it. Don't see it as some sort of black box or anything else. It's a wonderful tool that we have, and it's probably going to help us do our job a lot better. I don't have to emphasize all the cautions here, because there's a lot out in the literature, but be aware that there are some ethical and other professional considerations about its use. And be involved at every stage of the process. Don't let a system get imposed on you without your input, because the system is not going to work if you're not involved.
PIERCE: Yes, that's some great insight that you've brought to us today. Thank you so much for being here with us. I really just want to recap a little of what you were saying in a couple of sentences. You know, just as healthcare is evolving and medicine is evolving, technology is also evolving. And I know it seems kind of scary, because we don't really know what it's going to look like. But I don't think the future with technology means we're going to have to choose between human compassion and technological advances. I think what we're looking at is a sweet spot where they complement each other. And I really liked a saying I heard: maybe we can look at AI as handling the clicks while we focus on the care. What do you think about that?
LOMASTRO: I like that. That's a good expression. I'm going to write it down.
PIERCE: Yes, let AI handle the clicks while we handle the care. To our listeners, I hope this discussion has given you some fresh insights into how AI can be harnessed to support you, not replace you, as the human element in healthcare. These innovations should be developed with the goal of ensuring that you as nurses, myself, and other healthcare professionals can be fully present and focus on what matters most: our patients. It's to help us reclaim our primary roles as caregivers, educators, and patient advocates. And James, thank you so much for taking the time to educate us and to help us understand and highlight the effects that AI is going to bring to healthcare.
LOMASTRO: Thank you.
PIERCE: Yes. To our listeners, I encourage you to explore the many courses we have available on www.elitelearning.com to help you continue to grow in your career and earn CEs.