
RCSLT - Royal College of Speech and Language Therapists
This is the official podcast of the Royal College of Speech and Language Therapists - RCSLT. We were established on 6 January 1945 to promote the art and science of speech and language therapy – the care for individuals with communication, swallowing, eating and drinking difficulties. We are the professional body for speech and language therapists in the UK, providing leadership and setting professional standards. We facilitate and promote research into the field of speech and language therapy, promote better education and training of speech and language therapists, and provide information for our members and the public about speech and language therapy.
How is AI being used to support people with aphasia - a language disorder which affects a person's ability to speak and understand?
In this, the third of our conversations around AI in speech and language therapy, we look at AI as a tool for people who have aphasia - a language disorder that affects a person's ability to speak and understand. Caitlin Longman, Accessibility Lead at the Stroke Association, tells us about her work with people with aphasia, and we hear from some of them about how they are using AI and what their concerns are. Dr Abi Roper, an academic speech and language therapist and self-confessed 'techy', tells us about her research into AI to support aphasia.
Interviewees:
Caitlin Longman, Accessibility Lead at the Stroke Association, guest lecturer at Strathclyde University.
Abi Roper, Speech and Language Therapy research fellow at City, University of London, and joint chair of the Computers in Therapy CEN.
Resources:
For RCSLT Members only:
Artificial Intelligence resources: www.rcslt.org/members/delivering…ligence-resources/
Integrating Generative AI in speech and language therapy: a practical guide www.rcsltcpd.org.uk/lessons/ai-in-slt/
Open access:
Computers in Therapy CEN (‘CITCEN’) citcen.org/
Copilot copilot.microsoft.com/
ChatGPT chatgpt.com/
Image generation: Free AI Image Generator - Image Creator in Bing
Factsheets from AbilityNet about AI abilitynet.org.uk/factsheets/what-…-how-do-i-use-it
Free webinar from AbilityNet about how AI can help disabled people abilitynet.org.uk/webinars/how-can…-disabled-people
Webinar series from the Health Foundation about AI and the NHS (exploring priorities and anticipating the future) www.health.org.uk/about-the-health…re-of-healthcare
A guide from the NHS Confederation on AI: "AI in healthcare: navigating the noise". www.nhsconfed.org/publications/ai-healthcare
Blog from NHS England ‘How artificial intelligence is helping to speed up the diagnosis and treatment of stroke patients’. www.england.nhs.uk/blog/how-artifi…troke-patients/
Research:
AI technologies in therapy - experiences and perceptions of SLPs
Suh, H., Dangol, A., Meadan, H., Miller, C. A., & Kientz, J. A. (2024, June). Opportunities and challenges for AI-based support for speech-language pathologists. In Proceedings of the 3rd Annual Meeting of the Symposium on Human-Computer Interaction for Work.
Please be aware that the views expressed are those of the guests and not the RCSLT.
Please do take a few moments to respond to our podcast survey: uk.surveymonkey.com/r/LG5HC3R
How Artificial Intelligence is being used to support people living with aphasia
Transcript Date:
28 December 2024
Speaker Key:
HOST: JACQUES STRAUSS
CAITLIN: CAITLIN LONGMAN
ABI: ABI ROPER
MUSIC PLAYS: 0:00:00-0:00:06
HOST: 0:00:06 Welcome to another RCSLT podcast. My name is Jacques Strauss. This episode is another in our AI series, in which we look at the role of Artificial Intelligence in speech and language therapy.
I spoke to Abi and Caitlin, two speech and language therapists who have been taking a closer look at how AI can be used for stroke and aphasia. I started by asking them to introduce themselves.
ABI: 0:00:32 Hello. I’m Abi Roper and I’m a research fellow and speech and language therapist based at City University of London, and I’m very interested in technology and speech and language therapy.
CAITLIN: 0:00:43 Hi, I’m Caitlin Longman. I am very fortunate to be the Accessibility Lead at the Stroke Association and every now and then, I guess, lecture at Strathclyde University. I’ve been passionate about technology and aphasia for a number of years, and so it’s always great to have an opportunity to chat about it.
HOST: 0:01:02 For those listeners who are not SLTs, it might be helpful to remind them about what aphasia is. And Caitlin, you have a clip from one of your colleagues, Janet Rockliffe, at the Stroke Association, who has aphasia and can give us some insight into the condition.
MEDIA CLIP: 0:01:19 [Speaker 1] Aphasia is where the language area of your brain is affected. It can cause problems with speaking, understanding what the person says, finding the right words, reading, writing, and using numbers. But it’s important to know aphasia can change over time. When I came round from my coma, I didn’t realise I had speech problems at first. It was my family who really struggled to understand what I was trying to say. When I became less confused, I did realise I had problems.
My speech therapist told me that I had receptive and expressive aphasia. I’ve had aphasia for many years now. I know verbally I’m back to normal, but my written skills are still badly affected. This includes my maths too.
Strategies and apps have really helped me. I use my phone reader all the time, reading the information back to me. More recently, I’ve started to use AI in many ways. For example, it’s really good to help with explaining jargon.
[Speaker 2] Aphasia is a sudden change in language that happens to people who’ve been used to experiencing language in one way, and then very suddenly there’s a change in the brain – most commonly because of a stroke, but sometimes because of a brain tumour or a head injury, or it can present at the beginning of dementia, a degenerative condition. It means that the communication that somebody’s used to using suddenly gets disrupted.
And one analogy that some people use to help give a sense of that feeling of what it can be like is for us to imagine that we find ourselves in a foreign country. So, we’re still very intelligent and we’re ourselves in the middle, but we can’t use our language as competently to negotiate our environment as we’re used to being able to, and that includes our reading and writing, our understanding of other people’s talking, and our ability to express ourselves.
HOST: 0:03:33 Let’s start with a very broad question: what do you think the potential is for something like AI to help people with aphasia?
CAITLIN: 0:03:42 I think there’s huge potential for using AI to support people with aphasia. One of the things that’s always been challenging is that, after a stroke, how people engage with and use technology can really change. And some people maybe haven’t even had the opportunity to use technology in the same way. But using something like ChatGPT or Copilot gives people an opportunity to engage with information, and also to ask the AI to adapt it to a level that’s appropriate for them.
Also, some of these platforms can be used with screen readers, so that information can be read aloud and people aren’t having to process it in huge chunks. So, there are lots of ways that people can start to use it and play around. It could be for supporting work, or it could be for fun and creative activities like poetry, which we’ll be able to share an example of a bit later.
ABI: 0:04:38 Well, one piece of research that I’ve explored which leads on from what Caitlin’s been saying is the use of automatic speech recognition. I’ve been working in technology for speech therapy for a long time and we’re suddenly at this critical point where we’re starting to have speech recognition models that are fit for purpose to be able to support people to practice their communication and engage with technologies.
So, there is a potential there, and it’s beginning to be explored in the literature, for supporting people to have a high level of interactive spoken practice beyond what can be provided in a clinical speech therapy setting – so that somebody can practice using a device, interacting conversationally, and getting feedback on their speech at a level that supports the rehabilitation of their language. That’s a really exciting new development that we’re just beginning to see, and one that’s been lacking in speech and language therapy for a lot of years.
HOST: 0:05:40 So, ChatGPT, let’s say I had some sort of speech and language difficulty, like aphasia, and I had received therapy from a speech and language therapist. Would I then be able to practice my conversation with you at home? And do you have an understanding of the difficulties that someone with aphasia has, so that you could help improve my language?
CHATGPT: 0:06:09 Yes, you can absolutely practice your conversation skills with me. While I’m not a substitute for professional therapy, I’m here to help with language practice in a supportive way. I can adapt to various communication styles and difficulties, including those associated with aphasia. We can work on different aspects of language, such as finding the right words, forming sentences, and improving comprehension, all at your own pace. Let me know how you’d like to start.
CAITLIN: 0:06:39 Exactly that. So, lots of research evidence suggests that hours and hours of practice are required – in the same way as for physiotherapy, where you might need to do lots of sets of muscle repetition exercises every day, we would expect that same level of dose is required for language rehabilitation. And we know that a speech and language therapist might get an hour’s appointment a week with somebody for a fixed period, but for somebody to make big change they need more than an hour a week of input and supported therapy activity. So, they can work with their therapist in that time to identify goals, and then have a really structured practice opportunity that they can fit around their life, doing that bit of practice at home when the time is right for them – whether it’s in short stints across the day or in one long batch. It really gives people flexibility to have ownership of their rehabilitation process.
ABI: 0:07:40 I think this technology is moving so quickly and so as we look forward to the future, I think the sky’s the limit. At this stage, what I think is really important is, one, people get the opportunity to use this technology and find out if there are ways that it can really benefit them, make their lives easier, or actually just be something that they enjoy using. But what we often find, especially with people with aphasia, is they don’t always get the opportunity to, one, learn about it or, two, get the opportunity to practice and develop those skills so that they feel confident engaging with that technology.
HOST: 0:08:13 Caitlin, can you then tell us a little bit more about what the Stroke Association’s position is on these issues?
CAITLIN: 0:08:20 I wouldn’t be able to comment on the Stroke Association’s behalf. Technology is always a bit of a tricky subject for us because we’re not an organisation set up to support people using technology. We always work in partnership with fantastic organisations like AbilityNet, who can really drill down into the nitty-gritty of people’s individual devices.
But we do recognise that technology is a huge part of everyday life. So, in the past, we’ve developed a number of resources to help support people to engage better. One of those is the Getting Online guide for people with aphasia – a resource that Abi actually helped to develop during COVID, which talks step by step through some of the challenges people face getting online. There are also a number of other resources on our website.
But we can’t separate out how connected our world currently is with technology, whether it’s booking your hospital visits online, making orders, or putting in a complaint. We really need to recognise that people communicate in so many different ways, and this might be a potential barrier or facilitator for this group.
HOST: 0:09:29 That’s a really good point. I guess on the one hand, technology can be a barrier, but on the other hand it can help facilitate communication. So, Abi, I wonder if you could just give us a bit of sense of the, I guess, the sort of two sides of the equation here?
ABI: 0:09:44 Yeah, definitely. Well, I guess I would say the magic of speech and language therapy is in bridging those barriers and supporting people to extend their access. So, Caitlin’s quite right – we’ve got lots of evidence that digital exclusion happens when people have a language or speech difficulty. It means that interacting with the myriad reading requirements that technology asks of us suddenly becomes prohibitive.
And navigating through complicated structures and complicated menus and having things hidden away in places that aren’t immediately obvious when you engage with the technology means that even very everyday activities that people might have been able to do prior to their acquiring aphasia become a big challenge.
And so, we’ve done some work about understanding what the routes to accessing different types of technology can be. And I think one of the things that I’ve learned from that is that there’s a really important point about making technology developers aware of the different access requirements that people with different language abilities have.
There is a really important part about putting technologies in front of people and working with them, and that’s where speech therapists come in to support people to gain access.
As Caitlin mentioned, many of us have the immediate ability to just pick up a piece of technology and run with it. Very often, that becomes more difficult for somebody when they acquire aphasia. And so the support of agencies, the people around them, and speech and language therapists, to help people retrain and relearn those access skills, is really vital.
So, we’ve got those barriers. But once we know that we can design effectively for people with aphasia, we can start to use well developed technologies that have been informed by design from the outset with people with aphasia to give people these routes to practice, but also access to everyday life and participation in a way like Caitlin’s touched on – the opportunity to be connected with the world and all of the technological interfaces that we need to navigate now to interact with the world.
HOST: 0:12:05 Okay, so we know that technology can be a barrier for people with various kinds of disabilities, and so we’ve had accessibility standards for a long time. We’ve also had technologies like screen readers around for a long time. But I want to come specifically to the question then of AI, and now that generative AI is more widely available, how do we think this can help overcome some of the barriers?
CAITLIN: 0:12:34 I think it depends on the individual and what’s really important to them. So, when we’ve done sessions with people with aphasia, some of the ways that people want to engage with that technology are things like: can it help me answer my emails? I really struggle with writing, I know my grammar might not be right or I might make spelling errors; can I put in enough keywords and say, I want to generate an email, so that it comes back with something I can then edit and use, and make my life a lot easier?
I think it’s those types of skills that people can then take elements from and build on it to help develop themselves.
HOST: 0:13:10 So, writing a prompt is easier, less taxing, than writing an entire email?
ABI: 0:13:15 Absolutely and it’s not just things like emails. I think one of my examples that I love the most is we know technology can be really hard to use. And if you use an iPhone or you use an Android phone, the instructions can be really different. You could go into one of the AI platforms and say, I have a Samsung phone, and give the exact make of it, can you tell me how to make the text larger on my phone? And it will then give you really easy step-by-step instructions. And if the first instructions come out and they’re actually too complicated, you can then prompt it again and say something like, can you simplify them; can you make that a bit easier?
It can really be used for helping people to adapt everyday items that they previously maybe would have struggled to adapt themselves.
HOST: 0:14:01 Caitlin, I wonder if you could tell us a little bit more about Aphasia and Accessible Voices.
CAITLIN: 0:14:08 Absolutely. Aphasia and Accessible Voices is the Stroke Association’s specialist involvement group, which is run by Janet Rockliffe, our accessibility assistant, who you heard from a bit earlier, as well as myself. The group has been running for the last two years, and we have different stroke survivors from across the UK who have different access needs, including aphasia.
Now, the group itself is so vital for us at the Stroke Association because they help us to shape our internal work. So, they’ll review it, they’ll comment, and they could be involved from beginning to end of a piece of work. So, a really great example of that is we developed our new communication support packs, and this group worked on telling us what should be in them, how they should look, what the images should look like, even down to how we should be sharing this information more broadly.
But they also work on other things like testing out the accessibility tool on our website, or looking at accessible versions of our election manifesto. They really do a little bit of everything. And they are experts in their own right. They have this wealth of knowledge that they bring, and because they’ve got that different lived experience, they also come at it from different perspectives, and we can talk and discuss about it as a group.
We also all recognise that, regardless of who we are – whether we have aphasia or another condition, or even myself and Janet – we’re constantly learning and trying to develop. And one of the topics that was raised was AI, because it’s a hot topic at the moment and it comes up in the media all the time.
I think it’s also quite interesting to remember that our group members are also in different stages of their stroke recovery, but they also engage with technology in different ways, so we have some members who are really confident with using it, and others who maybe [inaudible 0:15:59] or maybe feel less confident with using certain technology.
So, when we went into the session, we started with just talking about the basics: what is AI; what it might look like; how they might have seen it in their daily lives so far. And we had a number of our members share about where they’ve seen it popping up. So, I have some examples that people have shared from the group.
MEDIA CLIP: 0:16:29 I’ve got a friend who’s got memory problems and she has a lot of pills, and so she… Alexa tells her when to have the pills.
CAITLIN: 0:16:34 And it really varies, so we also had examples from Mags about how she was using Alexa in her home.
MEDIA CLIP: 0:16:40 So, I can say, Alexa put the heating on, and it puts it on. So, I can’t do the numbers on the… so, she does it for me.
CAITLIN: 0:16:54 But what was really nice was that a lot of our group members weren’t just using it for practical functions, they were actually also using it for a lot of fun. And so, here’s Mags talking about the other ways that she’s using Alexa.
MEDIA CLIP: 0:17:05 Well, I only use for Alexa sing… Take That, or I’ll just need it for when I’m ironing, Alexa, sing songs from the 60s. And [inaudible 0:17:27] does it and the children do it. But if we talk about Alexa and then she’ll come on, we have great craic with her!
CAITLIN: 0:17:38 I think those are some really lovely examples of people already starting to incorporate technology. But very few of our group members at this point when we were running the session were using any of the AI tools available to them.
As we were going through the session, we demonstrated both ChatGPT and Copilot, because we wanted to give people options – if they wanted to try something, maybe they’d prefer one over the other. And what was really interesting was some of the questions that came up when we were talking about the different technology. So, here are some of the questions we were asked.
MEDIA CLIP: 0:18:17 Ask all the time. Will it do both? Can you make an easier word for…
CAITLIN: 0:18:24 And that was something which came up a number of times within the session. We would get an answer back or we would ask a question and it would still feel really complicated. And being able to prompt it or ask it to use simpler language really made it a lot more accessible for people who are engaging with that information or who wanted to find out a bit more.
But we also had questions like this.
MEDIA CLIP: 0:18:48 So, my question, it’s a very… it’s good. But can you… what you typed with the ChatGPT and the Copilot, can you say it… can you type without writing it in, can you just say the question?
HOST: 0:19:05 At the time these interviews were done, ChatGPT’s voice mode was not available. Now, the only problem with ChatGPT’s voice mode is that it requires a subscription, which costs about £20 a month. But in all likelihood, more large language models (LLMs) with voice mode will become available, and the price of ChatGPT Plus is likely to decrease.
CAITLIN: 0:19:30 We did also have a few criticisms come up from the group, and I think in general, with all the media coverage that’s been going on, people are quite fearful of this technology. They’re not quite sure what it can and can’t do. One of the things which came up – potentially because some of our members are teachers by background – was during an activity where we fed in some information to make a poem, and the group were astounded by how quickly it formulated a really fantastic poem. And [G-e-e-t-a 0:20:02] commented on how it made her feel.
MEDIA CLIP: 0:20:05 That poem is beautiful. But it’s all super-quick. And it… now I understand what you and [Janet 0:20:14] were saying about schools, colleges, and universities. It’s not using their brain. So, it’s quick and easy. So, I can see the loads of positive things from this, but it makes me think your brain’s not working.
CAITLIN: 0:20:36 And this really started a fantastic discussion about, well, when do you think it’s the right time to use it? What would you want to use it for? And it was really interesting that different members had different priorities of how they would engage with this technology, and that’s completely okay. And some people wanted to try out different things and see how they felt about it.
So, it’s not something that members would pick up and use overnight, but we’re hoping that from these types of sessions people will go away and feel a lot more confident about giving it a go.
HOST: 0:21:05 So Abi, my question for you is, we have all these new technologies that are coming out, how much does it help if a person was familiar with the technology before the onset of aphasia? And so, for older people, how much does it matter if they weren’t familiar with the technology?
ABI: 0:21:19 I think it’s a really nice question to consider because it gives us a chance to address one of the misconceptions: that somebody who’s older or who hasn’t used technology before might not be able to engage with technology, or might not be interested, or that somebody who’s got very severe aphasia might not be able to.
The reality is, if there is a tool which brings something positive to you and you can find a benefit from it, it’s amazing the amount of bad user interfaces people will put up with to achieve a goal! And, potentially, somebody who’s been tech-savvy before will more acutely notice the difference – something they were very comfortable and competent in before has changed because of the consequences of the aphasia, the changes in reading and writing and in navigating through these things, and they might have suddenly lost confidence in their technology prowess. And so, it can be really important to support someone to experience success in those contexts, whichever context they’re coming from.
HOST: 0:22:28 So, service users found Alexa really useful. And I’m sure that most listeners will agree that Alexa is not fantastic at voice recognition, it’s not particularly intuitive, and you can’t have a regular conversation with it. Whereas the voice modes of LLMs are leaps and bounds ahead of the Alexa that we are used to. And you can tell the LLM if you have a speech and language difficulty, so I’m sure that the release of these technologies can bring tremendous benefits.
Having said that, Abi has developed a therapy skill for Alexa, and I think this is really interesting in informing the way in which we think about deploying these technologies for therapeutic purposes.
ABI: The aim of this therapy skill was just to see if we could hijack Alexa to practice therapy using speech. We took an existing aphasia therapy that we knew was effective if delivered in a face-to-face context with lots of practice, and we said, well, let’s replicate that so that you can do it as an interaction between you and your device at home, without a therapist.
And so, we took a therapy called semantic feature analysis, and we created a skill that works on an Amazon Fire Stick which plugs into your TV, which means you get visuals with it as well, so you can have written text and images on the screen, which is supported by the voice that Alexa has. And then you can speak to Alexa without needing to say a wake word by pressing a button on a remote control. So, it overcomes some of the challenges that voice recognition technology has for some people with aphasia, which might be that it takes a bit longer than usual to speak, and so Alexa sometimes goes to sleep, and that you might struggle to say a wake word before you get on to your main sentence.
So, we created a skill that somebody could practice their therapy with on the screen and tried it out with two people. One gentleman practiced with it at home. He had a Yorkshire accent, and the recognition was a bit unreliable with his accent, but he persisted and he made therapeutic gains. At the end of five weeks of practice, he named significantly more words in his naming test than he did at the beginning, so that’s a real success story.
But a second gentleman worked with it for a week and then said, Abi, I can’t stand it, it’s making me so stressed, I’ve got to stop. So, the interaction is not there yet for everybody. We’ve got some work to do to think about how Alexa responds if she doesn’t understand what you’re saying, and how we can make those interactions [friendly 0:25:05].
HOST: 0:25:05 So, this is an interesting question. Do you think we should also be training LLMs on datasets of people who have speech and language difficulties to make them better at understanding people with speech and language difficulties?
ABI: 0:25:27 Well, do you know what, there’s an interesting tension here. Because if we keep the regular speech recognition, which is trained to recognise a typical voice, and we support people with aphasia to have success with it – they’re getting feedback from Alexa saying, no, I don’t understand, and they keep going until Alexa understands them and gives them a response – then, actually, that voice is probably becoming clearer and more understandable to a wider audience beyond that.
So, for the purposes of this, there’s actually value in neutral feedback rather than feedback from me – because I could understand what the person is saying, but I’d go, yep, that’s close enough.
HOST: 0:26:01 But of course, there is the case that some people’s speech and language will not improve, or may indeed get worse – such as with primary progressive aphasia, a terminal neurodegenerative disease that at first affects communication – and we would like the LLM to remain functional for as long as possible. So, there is something to be said for datasets of people who do have speech and language difficulties. And we also touch on this subject with Professor Annalu Waller in another of our AI episodes.
Which brings us to the end of this podcast. Firstly, I want to say a very big thank you to Abi and Caitlin for their time today. Recently, Abi, Caitlin, and Mags – a service user you heard from previously – created a presentation for the Aphasia Alliance, a grouping of aphasia organisations across the UK. Mags read a poem she’d created about aphasia using generative AI to express her feelings about the condition. And I think this is a lovely way to end.
MEDIA CLIP: 0:27:00 Silent Struggles. Words once flowed like rivers, now they trickle slow and shy. Thoughts are trapped in tangled webs as I search for ways to try. Aphasia steals my voice away, but not the thoughts within. I [find to find/define 0:27:24] the words to say to let my story begin. Patience is my closest friend in this silent white fight. Each word a victory, a blend of courage, hope and light. So, listen close and you will hear the whispers of my mind. For though my speech may not be clear, my heart is not confined.
MUSIC PLAYS: 0:28:05
END OF TRANSCRIPT: 0:28:21