Aussie Med Ed- Australian Medical Education

AI Medical Transcription in Practice: Balancing Innovation, Accuracy, and Ethics

Dr Gavin Nimon Season 5 Episode 75


Can artificial intelligence really write your medical notes for you?
Imagine finishing clinic on time, spending more moments looking your patient in the eye instead of staring at a screen, and cutting hours off your documentation workload — all without compromising accuracy, privacy, or patient safety.

In this thought-provoking episode of Aussie Med Ed, Dr Gavin Nimon (orthopaedic surgeon) explores the world of AI-powered medical transcription — a technology rapidly reshaping how clinicians document care.

Joined by a panel of leading experts:

  • Dr Emily Powell, rural GP registrar and AI enthusiast from i-Scribe, shares how AI scribes are already transforming rural and emergency medicine.
  • Dr Ben Condon, Clinical Director of Heidi Health, reveals how AI models are trained for clinical accuracy and how they adapt to your style of documentation.
  • Georgie Haysom, General Manager of Advocacy, Education & Research at Avant, breaks down the medicolegal, consent, and privacy risks — and explains what AHPRA expects when doctors use AI tools in practice.

🎧 You’ll discover:
✅ How AI transcription works — from voice recognition to medical language processing
✅ How much time clinicians are really saving (and how it’s improving work–life balance)
✅ Why patient consent and data security are non-negotiable
✅ The truth about AI “hallucinations” — and how to spot and prevent them
✅ Legal and ethical responsibilities under AHPRA and the Privacy Act
✅ What the future holds: ambient clinical intelligence, adaptive note-taking, and the integration of AI into medical education

Whether you’re a medical student curious about the tech shaping your future, a GP looking to streamline documentation, or a specialist navigating the intersection between innovation and regulation — this episode offers practical insights you can apply today.

Aussie Med Ed is sponsored by HealthShare, a digital health company that provides solutions for patients, General Practitioners and Specialists across Australia.


AI medical transcription is the automated process of converting spoken medical content, such as doctor-patient consultations, medical dictations, surgical notes, or clinical recordings, into written text using artificial intelligence and speech recognition technologies. The technology excels at reducing documentation burden and improving work-life balance. What actually is it? How does it work, and what's the downside? Today we are diving deep into something that's rapidly transforming medical documentation: AI transcription. Whether you're a medical student wondering what technology you'll encounter in practice, a GP considering streamlining your documentation, or a specialist curious about the latest innovations, this episode will give you the practical insights you need to know. We'll explore how these systems actually work, what the real benefits and risks are, and, crucially, how to use them safely and effectively. This isn't about promoting particular products; it's about understanding the technology that's reshaping how we document patient care. Good day and welcome to Aussie Med Ed, the Aussie-style medical podcast: a pragmatic and relaxed medical podcast designed for medical students and general practitioners, where we explore relevant and practical medical topics with expert specialists. Hosted by myself, Gavin Nimon, an orthopaedic surgeon, this podcast provides insightful discussions to enhance your clinical knowledge without unnecessary jargon. I'd like to start the podcast by acknowledging the Kaurna people as the traditional custodians of the land on which this podcast is produced. I'd like to pay my respects to elders past, present, and emerging, and recognize their ongoing connection to land, waters, and culture. Before we begin, a quick disclaimer. This episode is intended for general educational purposes only. It's not an endorsement of any particular product, service, or company.
We will not be making any recommendations, and listeners should not take this as medical or professional advice. As always, healthcare professionals should exercise their own independent clinical judgment and consider current guidelines, regulations, and professional standards before making decisions about patient care. Patient confidentiality and compliance with AHPRA and other relevant regulations must remain paramount when considering any new technology. With that in mind, let's dive into the conversation about AI transcription in medicine. Well, joining us today to talk about this very interesting topic are representatives from two of the many companies involved in this area, as well as a lawyer from Avant Medical Legal Defense, a company that also has an AI model available. I'd like to introduce Dr Emily Powell, a rural GP registrar and an AI enthusiast from i-scribe. With her background in rural emergency medicine, Emily brings a frontline perspective on how AI transcription is actually working in rural clinical settings. Dr Ben Condon also joins us from Heidi Health, where he is Clinical Director for Australia and New Zealand. Ben transitioned from surgical and critical care into digital health, giving him unique insights into both clinical practice and the technology behind AI transcription. And finally, we've got Georgie Haysom. She's the General Manager of Advocacy, Education and Research at Avant, a medical indemnity insurer, with 30 years in health law and leadership in advanced AI risk work. Georgie will help us navigate the medicolegal landscape of AI transcription. Welcome to the three of you. I'd like to start by directing my first question to Emily. Emily, let's start with the fundamentals: what actually is AI, and how is it used in practice? That's a very broad question. So AI is used in medical practice in quite a few different ways, and has been for some time. I suppose over the last couple of years,
what we've seen is the use of large language models and generative AI in medical practice to build tools such as AI scribes. AI itself is used in lots of other ways in medicine, for example in machine learning, where it's used to interpret and understand images in certain specialties. But in this particular case, for i-scribe for example, and for Heidi, it's the large language models we're talking about. So we're talking about scribing medical consultations and dictation and turning them into medical documentation, and thus removing that burden of having to type or write by hand as part of our medical practice. Right. Well, I understand there are actually several layers to this whole process. Can you walk us through exactly how it works, and what makes up the separate layers of converting talking into medical notes? Yeah, sure. So a medical scribe can be used in a few different ways, and I suppose the one that everybody is most familiar with is something called ambient voice technology. A clinician will usually meet with a patient, or with several patients and their carers at once, and then conduct the consult as usual, with the scribe listening in the background. The scribe itself will do a voice-to-text transcription: it will transcribe everything that is said in that consult. Then the technology uses large language models, which are really good at recognizing and understanding human language in context, to turn the voice-to-text transcript into organized medical documents. That could be things like medical notes, letters, your progress notes for the day on the ward, your discharge summaries: really any kind of documentation that we do regularly.
So the way they work is that the user will have a microphone, and the program itself will transcribe voice to text; it then has inbuilt large language models that it uses to tidy up that voice-to-text transcript and create the documentation. The really important thing is that it creates a draft document. This isn't a technology that is replacing us as clinicians; it's something that's there to augment our practice. So at the end of that process, we have a document that we can review, to make sure that it sounds like us and that it reflects what we said and what we want from the consult, before we finalize it as our medical note. And there are other ways this technology can be made to sound like us. For example, you can tailor these models and change elements of them so as to sound more like an individual clinician's voice, more like their notes, more like the tone they use. That is because large language models have that really great understanding of the context of medical language, and of human language as a whole. In the past, when we had voice-to-text transcription and, before large language models, just the basics of natural language processing, we weren't really able to fine-tune documents in that way. We could get a good voice-to-text transcript, and in some cases we could pull out keywords and tidy things up a bit. But now, with large language models, we can create whole documents based off that transcript. So it really improves the quality of the documentation, and therefore the safety of the documents we are producing in our medical practice. It also allows us to create a lot more kinds of documents, and opens things up so we can really make use of it in a way that cuts down our documentation burden. Yeah, from my reading, I understand there are obviously four different components.
There's speech recognition; then there's the natural language processing, which converts it into what sounds more natural; then there are medical language models, which convert it into proper medical terminology; and then you've got quality assurance, which makes sure that the words put together make sense. Is that correct? Yes, although those things might not necessarily happen one after the other; they might happen at the same time, in different ways. With the voice-to-text transcription, there will be different ways that is performed in different programs, and the models the different programs use are also different. Then, most importantly, when you're talking about the medical language, the way those models are fine-tuned, and the kind of input we give them to make them reflect the clinician, the clinic, and the medical language, is really important. And then finally, the quality control. Again, having programs that really recognize and interpret medical language correctly is important in medical practice specifically. And I should reiterate that another important part of quality control is always going to be the human in the loop. We are always going to have that clinician there reviewing the documentation, and that is a very important step. This is never going to replace that step. Ben, how does Heidi's approach compare? Do you build your training data sets from scratch or leverage existing medical literature? Yeah, I don't really have much to add; it's a very similar process, I think, for all AI scribes. I guess the difference is that we have a medical knowledge team at Heidi who are doctors and also prompt engineers, and who are the custodians of the model. They train the model to make sure that it's accurate, reliable, and able to handle medical terminology appropriately.
We don't use any consultation data or any of the conversations that our users transcribe to train the model; that's done separately. How we train the model is kind of our secret sauce, so we don't talk about the specifics, but we don't use consultation data to do it. We've got a team that is constantly updating and reviewing models as they update, because they get a pretty significant update every couple of months, to make sure our performance is consistent, reliable, and improving where possible. Excellent. Well, I understand you can actually get up to 85 to 95 per cent accuracy. Does it vary between different specialties? Is an orthopaedic surgeon using specific terminology very different to a general practitioner, and do the tools get less accurate in those specific scenarios? No, there's no difference between specialties, and I think as a GP it would be very sad if we got less accuracy than some other specialties. But the importance is not in the terminology used; it doesn't matter if your specialty uses a different sort of language to another specialty. So input in is output out; that's the first thing. Accuracy is affected by a few different elements. Audio quality, of course, is always going to be important. There is also an element of human behavior: we find that we change our behavior when using these scribes, because we have to make sure that everything is verbalized and therefore captured. That might mean voicing your examination findings, because if that information is not captured by the scribe, it won't end up in the final output. So that's something we can do to make sure our documents are as accurate as possible, along with reviewing them, of course. In terms of accuracy as a whole, we at Akuru have co-developed a tool called Game Lit, which can be used to evaluate the quality of different scribes, and accuracy is one of those domains.
That kind of tool can be used by individuals to look at different scribes, compare the voice-to-text transcripts and the final documents created, and evaluate the quality that way. I think 88 to 95% is fairly accurate; certainly, in our own research using that particular tool, that is what we are experiencing. And that will continue to improve as these technologies are honed. The more people we have using them, performing quality control and research, and fine-tuning them, the more the accuracy will keep improving. To have that level of accuracy at the start of the AI scribing journey, I think, is quite fantastic, but I expect that in the next five to ten years we're going to see a big leap in accuracy in these scribes as well. The important thing is that it should be equal for everybody, and definitely not lower for certain specialties or disciplines. And Ben, did you find the same scenario as well? Yeah, very similar. We don't discriminate between specialties or healthcare settings; accuracy is globally measured, and I agree, there's no real difference that we're seeing from our end. I think the main thing when talking about accuracy of transcripts is actually taking a step back and understanding the difference between a verbatim dictation, which a lot of people are familiar with, and transcription. The way I explain it is: if a medical colleague standing next to you could produce the same note that a scribe has produced, that can be accurate, even though they may use different language to what you've used. So a lot of the accuracy assessments we do are actually looking at whether it's a stylistic element that's simply not to your personal style, which we have mechanisms to improve for users over time, or whether it's factually inaccurate or incorrect.
So yeah, we're doing a body of work to publish how we do that more specifically, so I won't preempt that, but I think what we're seeing is pretty similar to the numbers you're quoting there, which has been fantastic. I think, Ben, you preempted my next question, which is really about the adaptation technique. How long does it take for the user to learn the system? Does the clinician have to adapt the way they speak, and is there an adaptation technique where the natural language processing learns your way of speaking and predicts what you're about to say? What's the sort of timeframe? I think it depends on how particular each user is. You can download Heidi, click transcribe, and, you know, get a very good note that's reasonably accurate. If you have stylistic preferences in terms of the structure of your note or the way you want your letters to sound, you can create a template relatively easily with our AI chatbot feature and get something that's almost identical to what you produce, you know, in minutes. And then it's a matter of the nuances of your language: maybe the abbreviations you like to use, or the specific safety netting you might include. You can use a snippet, which is our version of macros, or we've got a personalization feature that learns from the stylistic edits you make, so you don't have to make those edits all the time; that happens from about five consults onwards. So we find there's a relatively short onboarding process, but it really depends. What I've observed is that it really depends on how particular people are about their notes. Some people are more chill, so to speak, and happy for their notes to be set out in a default way with one of our generic templates, and others are very specific about the way their notes are structured, and that takes a bit more time.
But what we're seeing in hospital trials is that within two to three weeks people are using it and are very comfortable with it, so it's pretty quick. And I should point out that there are a lot of these AI transcription tools all around the world, and there will be differences between them: those in the United States might specialize more in United States terminology and accents, as opposed to the Australian ones; we've got our own unique style. If we move on to the actual infrastructure: how important is it to have really good infrastructure? We see this today in the recording over the internet we're using for our interview; it can vary depending on the quality of the audio and video you get. What about the infrastructure for doing AI transcription? Do you need a specific acoustic room with better-quality audio and less feedback? Do you need a better internet connection, better bandwidth? And what about the computer you use? Is there anything particular you need there too? It is really versatile. i-scribe can be used on any platform as long as you have an internet connection, and really you just need minimal bandwidth; it just needs to be able to transcribe. All platforms will need a microphone. We provide microphones, but you can use your laptop mic or your phone mic. As I mentioned earlier, audio quality is very important: quality in is quality out. A better-quality microphone is always going to give you a better transcription, but it doesn't have to be a very expensive mic; we find that phone, laptop, and computer mics are generally just fine. And it can be used virtually anywhere. I use i-scribe in ED, and ED is a really noisy environment. The great thing about large language models is that they are very good at picking out words, and context, in that kind of noisy environment. So it doesn't need to be a particularly quiet room.
It can really be used anywhere, on the go, and that's what makes it so fantastic and so easy to use. Yeah, we've built Heidi to keep up with you wherever you're working in a clinical setting, and we know firsthand that those settings are generally loud, with lots of things going on. We're able to filter out idle chit-chat and focus on what's clinically relevant to each consultation, even to the point where our iPhone app can record while you're offline and then transcribe once you're back online, for those in rural areas as well. So yeah, all these scribes are built knowing that you're never going to get perfect environments in clinical settings, and we're even able to support, I think it's 119 languages now, which helps with accents and all sorts of things as well. So, to reassure everyone, it will work everywhere you work. Brilliant. I'll introduce Georgie now. Georgie, tell us a little bit about the Avant platform you've got as well. Yeah, so it's very similar to most of the other scribes available. We have the voice-to-text transcription piece, where you can essentially use it as a dictation tool, but also the scribing tool, much in the same way as the other ones we're talking about today, Gavin. Excellent. What do you think the time savings are? How would this help me in my clinical practice, and will it make me sound better as well? Yeah, so I think we're seeing a 65 to 85% reduction in documentation time, in clinic or in hospital settings, depending on how much documentation you do. Those savings get exponentially greater if you're a psychologist or psychiatrist spending an hour with a patient having a long conversation, compared to, say, a surgeon doing a short procedure where there's a relatively short transcript.
What that looks like for you is that at the end of a consultation or a procedure, you'll have a note there ready to review, and you can review it while the consultation is absolutely fresh in your mind and move on to the next, and also generate any letters or care plans or anything else you need based off that consultation. So in practice it means you're not having to stay back, maybe hours, at the end of a session or a long day to catch up on your notes, and you're not having to spend a considerable amount of time between patients doing all that work; it happens more contemporaneously. So it's definitely improving workflows and allowing clinicians to spend more time patient-facing, caregiving, which we think is awesome, but it's also making sure you can get a break in the middle of a long day or get home on time more consistently, which we're really excited about. And have you noticed the same sort of benefit as well, Emily? Absolutely. In our own research, people are saving between 30 minutes and three hours a day, which is quite incredible. We've all been excited about getting more of our lives back and having more of that time. But it's also about having quality time with patients. The time savings are an important factor, but for me, being able to look someone in the eye while I conduct my consult, walk around the room with them, and really engage with them has been a game changer, because sometimes you feel like you're not making the most of that human connection that we really enjoy in medicine. Having this kind of technology really allows that as well. Excellent. Well, perhaps I'll ask Georgie now. Obviously this all sounds exciting, but are there any downsides? What are the medicolegal risks of using an AI scribe? Thanks, Gavin. As the lawyer in the room, I guess it's my job to talk about the negative stuff.
But before I do, I did want to mention that we've had mixed information from our members around the time savings. Some people have felt that it takes quite a lot of time to review the notes and get them to the way they like them, though maybe that's just a transition period when they first start. Others have said it's an absolute game changer, and that they go home at the end of the day feeling less tired because they haven't had to put in that cognitive effort of writing up the notes. So some find benefits, and some, I guess, are still yet to see them. In terms of the medicolegal risks, there are three main risks we've identified with regard to scribing tools. The first relates to issues around consent, which I can talk about in a bit more detail; the second is around privacy; and the third is around accuracy, which we've talked a little bit about. The first thing in relation to consent is that you do need to get consent every time you use an AI scribe with your patients, and there are three reasons for that. The first is that you need consent from a privacy perspective. You will be collecting information via this tool, and under the privacy legislation you need consent for the collection of information, and then consent to use and disclose that information as well. So you need it from a privacy perspective, number one. Number two, and this is the nerdy legal stuff, you need consent to avoid a potential breach of surveillance devices legislation. Around the country there's legislation that was intended to apply to situations where people are surreptitiously recording private conversations and attempting to use them against people.
That legislation was set up for that purpose, but when you look at the definitions under the legislation in some of the jurisdictions, it would apply to these types of tools. So, taking a conservative approach and to avoid potential breaches, we recommend that you get consent to record, essentially, the private conversation between you and the patient. That's the second reason. And the third reason to obtain consent is really an ethical, professional conduct, and transparency piece. Trust in AI tools is still relatively low in this country, and some people are very nervous about tools being used to record their information. So if we want patients and the community to trust the use of AI tools, it's really important that we explain to patients when they're being used. That's the third reason, Gavin. It's also a requirement of the AHPRA professional standards; the statement they put out said that you do need to obtain consent from patients for that purpose as well. Now, does that still apply when someone uses the AI tool purely as a dictation tool, where the patient has left the room and you're dictating a note about what happened in the consultation, as opposed to when they're in the room and it's actually listening to the conversation? Yeah, I think it would depend on where the data goes. I mentioned the privacy aspects first of all. The good thing about the three tools we're talking about today is that they're within Australia. If you're using a tool where the data is going off to other jurisdictions, you have obligations under the privacy legislation to get consent for that specific purpose, and you're also responsible, essentially, for any breaches that happen overseas. So it does depend on where that data's going. Right. I've also heard you can get AI hallucinations: instances where the system generates information that hasn't been said.
Perhaps you can tell me a bit about what you've been noticing from Avant's perspective. Is this an issue we need to be worried about? Yeah. So we did a webinar earlier this year and another one last year, and we asked our members about the sorts of hallucinations they'd been seeing, the sorts of errors that some of these scribing tools have made. There were some really striking but sad ones: wrong side picked up, you know, left instead of right; misnaming some of the medications. One of our members asked some questions around neurological findings and the like, and the tool made up a whole neurological examination. And we've had examples of the oral contraceptive pill being noted for a male patient. It's hard to know whether these are just training issues. We talked earlier a little bit about the training that goes on to make sure the tool adopts the way the health practitioner speaks and reflects that, and whether that's part of it. But this is why it's so super important to check, absolutely, every time. Unfortunately, we've heard about situations where doctors have become a little complacent: they've looked at the output from the tools the first couple of times and it looks great, it's recorded everything they really wanted to record, and then they get a bit complacent. Our really strong advice is: don't be complacent. You must check every time, because if there's an error in those records, it could be repeated over time and into a whole bunch of other records, which could then cause a great deal of patient harm. So it's definitely important to check every single time. And Ben, have you noticed that AI hallucinations are an issue with your program, and is that being improved as time goes on? I think hallucinations are a feature of using large language models and AI generally, so we're not immune to that. It's one of the many reasons we encourage all of our users to check their notes.
We've got mechanisms in place that we use to reduce hallucinations, and we have good data internally that shows we're doing a good job of that. To put this in context, this shouldn't be a surprise to people. Hallucinations are something we're upfront about, and we warn people about the potential of them occurring, despite them occurring relatively rarely from what we've seen. And there's an interesting study done by Darren Fu and published by the RACGP showing that humans actually hallucinate more than the AI scribes. So I think there is an element of fear around hallucinations that might be out of proportion to the risk. Really, the thing we want to reassure everyone about is that it's one of many reasons to review your notes, the other being that you're ultimately responsible for them, so you should be reviewing and approving them. It's a fact we need to be aware of, but I don't think there needs to be a level of fear or hesitation about it, if that makes sense. I presume your response is similar to Ben's, Emily? Yeah, I absolutely agree with Ben. I think of it this way: if we had a really good intern who took notes for us, we would always review them before we signed off on them, because people can hallucinate too. And it's something we want to strike a really good balance on at i-scribe, where we are both growing trust, encouraging safe and responsible AI use, and educating on AI use, at the same time as having our own safeguards in place to reduce or prevent hallucinations. That way, over time, we build people's trust in the system. Because yes, we know that errors may be introduced; that is the nature of AI. But the important thing, again, is having that human there reviewing and checking. It's not meant to replace us. Right. Georgie, who bears responsibility when something goes wrong with an AI scribe?
Are doctors basically finding themselves liable, and are they covered by their medical defense insurance, or do we need extra insurance in place? I'll answer the second question first. In relation to cover, I can only speak for Avant, but AI tools are part of the tools used in healthcare, and practitioners are covered for healthcare, so there's coverage under our policy for that. And I have to say, as the lawyer: subject to the terms, conditions, and exclusions under the policy. People who are not insured with Avant will need to check with their own indemnity insurers to ensure they're covered. Who bears the responsibility is an interesting question. Doctors are absolutely responsible for the content of their notes, as they are in any circumstance, whether they use an AI tool or not. What's a little unclear from a legal perspective, because we haven't had any cases on this, is who's responsible if something goes wrong with the tool itself, from a product liability perspective. What we've seen in some situations is contracts and terms and conditions in AI products that shift the responsibility for the operation and function of the tool entirely onto the user; I've certainly seen that in some diagnostic AI products. So the point there is that you need to really check the terms and conditions you're signing up to, and if you have any questions about them, you should ask the developer or provider of the AI tool. The problem with that shift of liability is that a medical practitioner or user might be taking on liability they're not covered for under their policies. Usually, taking on contractual liability beyond your liability under the general law is not covered under your policy, so it's really important to check those terms and conditions and ask if you have any questions about them.
Ben, where does the data go, and how do we ensure that this data's kept safe? It's a really important question, and I can obviously only speak for Heidi, but the data transcribed from the consultation is de-identified and encrypted and then sent to our private servers onshore here in Australia, where it's kept de-identified and encrypted until the user deletes it, in which case it's gone. Another privacy principle we have is that we only ever access that data to generate a note, document or task that the user has prompted us for. We don't train the model with that data. We don't sell any of that data or use it for any purpose other than making a document or note at our user's request. And we've been really intentional in setting those really clear boundaries. The other thing we've done is get independent verification of our security and privacy capabilities, with the international standard ISO 27001 certification. We're also compliant with the Australian Privacy Principles and have SOC 2 compliance as well. And we have the equivalent certifications in the other jurisdictions that we're in, like GDPR in Europe, NHS certification in the UK, and HIPAA compliance in the US. All of that is to say, you don't have to take our word for it. We get independently audited at a three-to-six-monthly cadence, depending on the certificate, to validate all of that. And really, privacy and security have been baked into the foundation of the product. This has not been an afterthought; it's something that's existed as long as the product has. So Emily, how do you protect privacy in i-scribe? How's that done? So i-scribe is very strict on where we send and process our data. First of all, we have a de-identification process: we don't just apply pseudonyms, we do a full de-identification.
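[Editor's aside for technically minded listeners: the distinction Emily draws here, between pseudonymization with codes and full de-identification, can be sketched in a few lines of Python. This is a minimal illustration only; the patterns, names and functions are hypothetical, and real scribe pipelines use trained models rather than simple regexes.]

```python
import re

# Illustrative patterns only -- real de-identification pipelines use
# trained NER models, not hand-written regexes like these.
PATTERNS = {
    "NAME": re.compile(r"\b(?:Mr|Mrs|Ms|Dr)\.?\s+[A-Z][a-z]+"),
    "PHONE": re.compile(r"\b(?:\+61|0)4\d{2}\s?\d{3}\s?\d{3}\b"),
}

def pseudonymize(text):
    """Replace identifiers with coded markers and keep a re-identification
    map. Because the codes can be linked back to the patient, the output
    is arguably still personal information."""
    mapping = {}
    for label, pattern in PATTERNS.items():
        for i, match in enumerate(pattern.findall(text)):
            code = f"[{label}_{i}]"
            mapping[code] = match
            text = text.replace(match, code, 1)
    return text, mapping

def deidentify(text):
    """Remove identifiers outright, keeping no mapping -- the stronger
    approach described above."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REMOVED]", text)
    return text

note = "Dr Smith reviewed the patient; contact on 0412 345 678."
coded, key = pseudonymize(note)   # reversible: key maps codes back
clean = deidentify(note)          # irreversible: identifiers are gone
```

The point the panel is making is that only the second approach discards the key entirely; the first retains a mapping that could re-identify the patient, which is why it is treated more cautiously under privacy law.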
So that means we remove things that fall both under, say, the HIPAA group of identifiers and under identifiers specific to Australia. Some scribes use a pseudonym process, where they replace names and other identifiers with codes and markers. We don't do that, because under the Privacy Act that is still recognized as personally identifiable information. We never send any of our data offshore to be processed, even if it has been de-identified; it's all processed and stored within Australia. And, as Ben mentioned, we are compliant with the APPs and with other frameworks within Australia. That also goes for our New Zealand clients, the people who use our scribe in New Zealand. We don't currently have any users on our platforms internationally; if we do, we'll obviously be compliant with the international frameworks there as well, but our focus is Australia. Excellent. I can see you're both very passionate and take this very seriously. Georgie, you'd find the same with Avant too? Yes, absolutely. Ours is absolutely a closed system. It's all within Australia, and of course it was developed with our medicolegal expertise, so we've taken all of those medicolegal requirements into account in developing our voice box transcription tool as well. I should just point out to the listener, too, that there are other scribes around, not just the three we're talking about today, which the listener may need to take into account. But obviously I really appreciate the time that Ben, Emily and Georgie put into coming on Aussie Med Ed to talk about this important technology. If we go on to cost-effectiveness, obviously there's a bit of an outlay to sign up for transcription software like i-scribe or Heidi or Avant. How effective are they? I mean, where's the return on that outlay, Ben? Do you feel that's money well spent? So at Heidi we've got a freemium model.
So we have a free tier that's free in perpetuity to all users, where you're able to get unlimited scribing sessions and scribing minutes; you just lose the functionality of custom templates. So we think we give great value for money there, and as this technology becomes more and more commonplace, it will be an expectation that scribing is a given and you'll be paying for other features and the like. So we offer great value for money in that regard, and we think we're really competitive for all of our users compared to other scribes in the market. We also gift the pro version free to all medical students, nursing students and clinicians in training, so we're not after your money until you're a consultant and relatively comfortable. So we think we've done all we can to provide real value, and I guess it's ultimately up to the user to decide for themselves. Yeah, sounds reasonable. Emily, do you think the money's well spent? You've been using this yourself in your own clinical practice; would you buy it if you weren't attached to i-scribe? Definitely. I think it's made an incredible difference to my own practice, but I think our pricing is also very competitive. We don't have a freemium model, but where we do stand out is that we have a really dedicated support team. We have all-Australian support, and it's very comprehensive, including in-person onboarding, training and support. So our pricing model is slightly different to Heidi's, for example, and to other scribes', but we really emphasize certain aspects that we find important in our own practice, such as having that support there when you need it. And I think the important thing about value for money is that no clinical practice is the same as another. So it's important that people think about what is useful and important for them when they sit down to do their work that day. What are they going to need?
Whether that is accuracy, whether that is speed, whether that is support, and how the scribe they're choosing supplies it. And Georgie, say I'm setting up a practice for the first time. Day one, I turn to my medicolegal defense lawyer and say, look, I'm trying to work out whether my work-life balance is such that I should go to AI transcription, or whether I just keep standard notes. What would you recommend? Look, I think it's going to depend, and that's always what a lawyer says: it depends. Our recommendation would be that you should try them out. If you're interested, try out the different ones; you might find one works better than another. So try them out, see if they work for you, and give them some time. We have had some people in our organization, medical advisors, who use them; they've tried different sorts of scribing tools, including our own, and then come to a view about which one works best for them in their practice, based on how it works and also, of course, as we've just mentioned, the pricing models. So try it out and see if it works. But once you do decide which one you're going to use, it's really important to continue to monitor its use and make sure that it continues to work for you and your practice, and that it's fit for your purpose. Yeah. But looking at some of the future directions with AI transcription, and reading up about what's possible, there's a bit of excitement. I've heard about what we call ambient clinical intelligence, where not only am I producing a transcription of what's happened, but the tool might actually pop up and say, you forgot to ask about this, or, can you ask this question? Or, have you thought about this diagnosis? Is that something that Heidi and i-scribe and Avant are looking at for the future? It is something that we're looking at very closely.
I think the thing we have to remember is that it would very much constitute a medical device and require additional certification, and obviously quite robust testing to achieve that. Something that we've been advocating for in discussions with the TGA and the Department of Health is clarity on how we actually go through that process and how we get that independent certification for that type of technology. But we're really excited about incorporating that into the product, and we've been open about it as a future horizon. I think the main other callout, other than the regulatory one, is thinking about how we build trust with those types of products. If we were to hypothetically build something that was a black box and just presented an answer, that's much less trustworthy than something that surfaces from an approved database, maybe a hospital's guideline or UpToDate, for instance, and gives you a step-by-step guide or shows its work in a more transparent process, which will ultimately, I think, engender a lot more trust from users. So there are lots of things to work through. I'd be surprised if it wasn't on everyone's roadmap, but there are some significant hurdles to overcome to bring a product like that to market. Yeah, the TGA regulation's really important. As we see these kinds of tools changing and developing over time, we're definitely going to see more regulation of them, and that is standard and important for protecting patient privacy and safety. And so I think the future is incredibly exciting with AI. Regulation is going to be needed, but that doesn't mean we're going to be blocked. We're going to be able to have a future in medicine where AI assists with diagnostic decision making and clinical inference and suggestions. There are all sorts of ways it can be used.
So, you know, a scribe may now be creating documents for you, but in the future it may also be taking your plan, booking your next appointment, making reminders for patients, and sending them patient information directly. All of those things are probably not that far around the corner. And as all the technology companies in Australia continue to build those, the regulation will continue to catch up, but I think there are a lot of really exciting things ahead on everyone's roadmap. That's such an interesting point. What Emily is talking about is agents, where AI can actually control your computer and book things or do certain things. I'd imagine there would actually be a lot of privacy issues associated with that as well. Once you incorporate agents and suddenly let an AI tool control your calendar and organize your bookings and other things, is that an issue from a medicolegal perspective? It is much the same as any tool that collects information; the same rules apply to those tools as to scribing tools. It's just a different sort of tech. And those sorts of chatbots, those agentic AI tools, are being used more and more in healthcare, and certainly I think that is the way of the future, but the same rules apply. You need to disclose that to patients, and you need to obtain consent if they're accessing private information, much like any tool. You mentioned the tools moving into clinical decision support, and yes, that would then start to get into the TGA regulatory space. Currently, AI scribe tools are not subject to any regulation, so you are very dependent on doing your own due diligence as a practitioner to make sure a tool is compliant, and on asking those questions of the developers. But once we get into clinical decision support and software as a medical device, it does start to get into the TGA space.
One of the things we see a lot from a medicolegal perspective is diagnostic error, and I guess one of the risks associated with tools that start to move into that space is that, if you're not using them carefully, like any clinical decision support tool, you could end up with the cognitive biases that we see in diagnostic error cases. So another word of caution from the lawyer: when you use these things, just make sure that you still exercise your own clinical judgment with regard to the recommendations that are made. Right then. This is obviously a new area, but a lot of doctors, and probably in the future all doctors, will be using this. Do you think it should be introduced as part of the medical curriculum, and what advice would you give a medical school setting up a curriculum in this scenario? Yeah, I think absolutely. The goal of any medical school is to produce interns and junior doctors who are prepared for the workforce they're entering and able to use the tools that exist appropriately and safely. We've already had some really positive conversations with a lot of the med schools around Australia and New Zealand looking to incorporate the safe use of AI scribes into their practice, and even to use Heidi as a teaching tool, with specific education templates that prompt students on questions they might have missed for a chest pain history and things like that, to try to improve the feedback they're getting whilst they're practicing for their OSCEs. So we're really supportive and encouraging of this being included in the curriculum. I think literacy on what constitutes a safe tool, and literacy on the legislation around these tools, are key.
You know, the more informed our medical students and junior doctors are when entering the workforce, the safer our patients are, and the more accountable the tools playing in this space have to be, because you have a better-informed workforce. So I only have positive things to say about it. And Emily, you're a clinical registrar; you have juniors coming to you asking for advice. What are your thoughts when they speak to you about this? So I'm a huge advocate. I think my advice to anybody, junior or otherwise, starting out with a scribe would be to try different scribes and find one that really suits your use case. They're not one-size-fits-all, and we don't want them to be. Find one that is within your budget and, of course, find one that is Australian, where you are very happy that it only stores and processes your data within Australia. Find one that is compliant with all the important regulation; individual hospitals have their own regulations around that as well, so always speak with your hospital about their governance too. The important thing then is to work with it. Again, it is a tool to augment your practice, so make sure that it sounds like you, make sure that you can fine-tune it for yourself, and make sure that your templates all sound and look the way you want them to. Then make sure you've got your own processes for reviewing as you go, and use it safely. Excellent. Georgie, finally, from the medicolegal perspective, are there any big benefits of using an AI transcription tool that we haven't covered? Is it a plus compared with doing standard consultations, sitting at the computer and writing the notes in front of the patient? There are lots of benefits to using AI scribes. For those who are not great typists, it means you can have more complete documentation.
Obviously, you've got to balance that against having too much documentation, and make sure that the information you put in your note is clinically relevant; that's super important. The other benefit is communication: having eye contact with your patient rather than sitting behind the computer typing. That helps with rapport. And then the other one is around verbalizing the examinations. We believe this will help improve patient understanding of the purpose of examinations and also improve their health literacy. From a medicolegal perspective, we see a lot of cases where patients don't understand why examinations are taking place, particularly intimate examinations. Being able to verbalize and explain to a patient while you're doing it has the added benefit of helping the patient understand, as well as putting it on the record in the clinical note. So, lots of benefits. And look, if we come down to it: what's the most important thing you want our listeners to understand about AI transcription? I'll ask each and every one of you what you think that is. Georgie? Check every time; don't become complacent. You have obligations to make sure your notes are relevant, up to date and clinically accurate, so make sure you check them every time. Ben? This is a really exciting tool and an exciting space, but like anything, it needs to be handled responsibly, and we need to manage our expectations. You as the clinician are always going to be in the loop and ultimately responsible for the output, so treat it with, hopefully, excitement, but equal parts responsibility and care. Emily? Tech shouldn't be retrofitted into our practice, so find something that fits you, that isn't one-size-fits-all, and that is versatile.
When your practice needs to change quickly, something you can take on the road with you, something you can go to nursing homes with, and something you can then bring back to your own private practice or clinic and sit with. There are many ways of doing this, but the world's your oyster with AI, so try different things out and find your true fit. Well, thank you very much. I think that wraps up our deep dive into AI medical transcription. I'd like to thank Emily, Ben and Georgie for sharing their expertise and giving us the real story behind this technology. The key takeaway is that AI transcription offers genuine benefits: time savings, reduced documentation burden, and potentially better work-life balance. But it's not magic; it requires proper setup, ongoing oversight, and an understanding of its capabilities and limitations. So if you're considering AI transcription, remember Georgie's advice about due diligence: evaluate the services carefully and understand the privacy implications. Thank you very much, all three of you; much appreciated. A reminder, once again: this conversation is intended for educational purposes only. It's not an endorsement of any product or company, nor should it be taken as medical or professional advice. If you're a healthcare professional thinking about incorporating AI transcription into your practice, make sure you evaluate the services carefully, consider the relevant regulations, and use your own clinical judgment in line with AHPRA standards and guidelines. Once again, thank you for listening or watching Aussie Med Ed. Until next time, stay safe.
