Measurement and simulation of steered acoustic fields generated by a multielement array for therapeutic ultrasound
JASA Express Letters 1, 012001 (2021); https://doi.org/10.1121/10.0003210
Authors: Eleanor Martin, Morgan Roberts, and Bradley Treeby
In this episode, we interview Eleanor Martin of the Wellcome/EPSRC Centre for Interventional & Surgical Sciences (WEISS) at University College London about a method for modeling therapeutic ultrasound to help with treatment planning.
Read more from JASA Express Letters.
Learn more about Acoustical Society of America Publications.
Music Credit: Min 2019 by minwbu from Pixabay. https://pixabay.com/?utm_source=link-attribution&utm_medium=referral&utm_campaign=music&utm_content=1022
Welcome to Across Acoustics, the official podcast of the Acoustical Society of America’s Publications office. On this podcast, we will highlight research from our four publications, The Journal of the Acoustical Society of America, also known as JASA, JASA Express Letters, Proceedings of Meetings on Acoustics, also known as POMA, and Acoustics Today. I'm your host, Kat Setzer, editorial associate for the ASA.
Joining me today is Eleanor Martin, of the Wellcome/EPSRC Centre for Interventional & Surgical Sciences (WEISS) at University College London. Dr. Martin is the lead author on the article “Measurement and Simulation of Steered Acoustic Fields Generated by a Multielement Array for Therapeutic Ultrasound,” which appeared in the January issue of JASA Express Letters. Thank you for taking the time to speak with us today, Elly. How are you doing?
Hi, Kat, I'm really good, thank you. Excited to be talking about this paper.
Yeah, thanks for speaking with us. First, can you tell us a little bit about your background?
Sure. So, I'm a physicist. And I work on therapeutic applications of ultrasound. So particularly, I do measurement of ultrasound fields, and validation of models of ultrasound propagation. And as you said before, yeah, I'm at University College London. And I hold a UKRI Future Leaders Fellowship.
Sounds awesome. Thank you. First, can you explain what ultrasonic transducers are? What are some of the things we can do with them?
Sure. So a transducer is literally something that converts one thing to another. So an ultrasound transducer converts electricity to vibration, and the other way around. So with ultrasound, we often use piezoelectric materials. And these are materials that when you apply a changing voltage to them, then the material expands and contracts. And it's this kind of expansion and contraction that makes vibration that generates the sound. So we can make transducers of different shapes and sizes and constructions. And that means that we can play around with the frequency of the sound that they emit. And we can make different kind of shaped fields that are suitable for different applications. We can also put together a bunch of small transducers arranged in some way to make what we call an array transducer. So these are the kinds of things that are used for imaging, which lots of people might have seen, where you have this sort of long transducer, which is actually made up of a line of individual little transducers. And that allows you to move the beam around and focus it into the body and to locate objects.
Very interesting. So can you give us a bit of a background on how ultrasonic transducers are used in therapeutic settings?
So there are a growing number of applications of ultrasound for different therapies. So for example, one of the best known ones is probably high intensity focused ultrasound. So this is where you have some kind of source, which is sometimes curved, and usually made up of a bunch of elements in one of these arrays. And it tightly focuses the sound field and concentrates energy into the body, a bit like when you focus sun through a magnifying glass and you can start a fire. So the aim here is to focus the sound to burn cancerous tumors and other things. So then there's also applications in drug delivery as well. So delivering drugs to specific sites in the body. So you can use the sound to release drugs from within little bubbles that have been carried in, or to help the drugs to diffuse out of the circulation into the tissue more effectively. And another big area of therapeutic ultrasound, which is really growing at the moment, is ultrasonic neuromodulation. So in this application, ultrasound is focused through the skull and into the brain, where it then affects the function of the neurons. So the aim is not to damage them, but to modulate their function in some way. So for example, it might be used to reduce abnormal activity which is associated with tremors or something like that, or epilepsy, or to change the workings of brain circuits, which might help in the treatment of depression. So the work that we were doing in this paper was sort of in this context, where you need the ability to focus into the brain to target specific regions, which is made really difficult by there being a skull there. And you obviously can't measure the pressure inside the skull. So you need to be able to simulate it to estimate what it is.
That is so interesting and very cool. So how is modeling typically used in ultrasonic therapies?
So there are lots of different types of models of varying levels of complexity that people are using. So one example would be that to correct the distortions that are caused by the skull for one clinical device, they use a relatively simple model, which basically traces the path of the sound waves through the skull. And then you can correct for this distortion in order to get a better focus inside the patient's brain. And then there are much more complex models that can also capture the absorption of the sound in the tissues of the body, and waves that are reflecting and bouncing in all directions. And these ones might be used for things like assessing the outcomes of treatments, or the suitability of a particular patient for treatment, or even just for transducer design, and sort of evaluation of different transducers and things like that for therapies. But for neuromodulation, because of the skull, we use these array transducers to allow us to steer and focus the ultrasound. And we need to work out how to drive each of the individual transducers to correct for this distortion. So we use modeling to do that kind of correction as well, to work out how to delay the signals to the elements so they will arrive at the right time, like I said before, and to try and work out what the pressure is in the brain where we can't measure it.
Okay, that all makes sense. So you mentioned that modeling fields generated by therapeutic ultrasound arrays can be prone to errors. Can you explain these errors and the challenges around them a bit more?
Yeah, so as I said before, these array transducers are made up of a group of transducers, or elements we call them sometimes. And each of them is supplied with an electric signal, which is what generates the sound. So if we send the same signal to each of the elements, then all the signals have the same size, or amplitude, and the same timing, or phase, we call that. Then if it was a curved array, you'd get a natural focus at the geometric focus point. So kind of at the center of the circle that this array is sitting on. But if we want to move the focus away from that point, which is what we call steering the beam, then we can introduce delays between the signals to the elements, which mean that the signals arrive in time with each other at some other point that's not the natural focus. But when we do that, then all the elements are working together. And in most cases, there'll be some interaction between them, either electrically, so signals are picked up on one wire that were generated by another, or mechanically, as sometimes the elements are physically joined to one another, which means that they might behave differently depending on exactly what combination of input signals you give them. So if you try to steer the beam to one place, the elements might behave differently than if you tried to steer it to somewhere else, because of this sort of interaction that they have with each other. So usually, we do some sort of measurement to characterize the array of transducers. For example, to look at the pressure on the surface of each of the sources. But if that changes every time you change the driving signals, for example, to steer to a different place, or to compensate for a different patient's skull, then it starts to become difficult, because we can't make those measurements in every single condition.
So if you do assume that the elements behave the same all the time, and then use that information to try and simulate different fields, then you get errors. And there are other things with array transducers as well: it's also important to know where the elements are located relative to one another, and what shape and size they are and things like that, which might be different to how exactly they were designed once they've actually been manufactured. And yeah, one other aspect is how you're representing each of the elements in your model. It might be simple to assume that they behave as what we call ideal radiators, where the whole surface is moving backwards and forwards like a piston, we call that a piston source. But actually, a lot of the time for real transducers, there's usually some variation in the way the sources vibrate across their surfaces. So we can get that information from measurements. But that's obviously much easier when you've only got one transducer. Whereas if you've got hundreds, then it might be difficult to get all that information and then to use it. It would be simpler to model them as an ideal source.
Once again makes a lot of sense and is very interesting. So for this study, how did you characterize the transducer?
So in this study, we used a technique called acoustic holography. So this allows us to reconstruct a 3d ultrasound field from a 2d measurement, basically. So we make a measurement of the pressure in the ultrasound field across a 2d plane, which is perpendicular to the beam axis, the direction that the field is propagating in. And then in this case, we used the angular spectrum method to propagate this 2d measurement backwards through a 3d volume, which sort of covered the source. And then we could look at what the field was on the surface of each of the individual elements in the array. And we could also get the position of each of the elements in space, as well as the pressure on the surface of each one. So that tells us everything we need to know about the source.
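The angular spectrum idea Elly describes can be sketched in a few lines: decompose the measured plane into plane waves with a 2D FFT, phase-shift each component by its own z-wavenumber to step the field toward the source, and transform back. Here is a minimal illustrative sketch in Python (the function name, parameters, and sign convention are my own assumptions, not the paper's code), with evanescent components simply suppressed:

```python
import numpy as np

def angular_spectrum_backproject(p_plane, dx, f, c, dz):
    """Back-propagate a 2D complex pressure plane by dz toward the source
    using the angular spectrum method. Illustrative sketch only."""
    ny, nx = p_plane.shape
    k = 2 * np.pi * f / c                       # acoustic wavenumber
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)   # spatial frequencies in x
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)   # spatial frequencies in y
    KX, KY = np.meshgrid(kx, ky)
    # z-wavenumber of each plane-wave component; evanescent parts zeroed out
    kz = np.sqrt(np.maximum(k**2 - KX**2 - KY**2, 0).astype(complex))
    P = np.fft.fft2(p_plane)                    # decompose into plane waves
    P_back = P * np.exp(-1j * kz * dz)          # phase-shift each component
    return np.fft.ifft2(P_back)                 # recompose the shifted field
```

Stepping the field by `dz` and then by `-dz` recovers the original plane, which is a quick sanity check on the propagator.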
Okay, so how did your study aim to reduce uncertainty in modelling these transducers under the variety of driving conditions that may be used?
So the aim was to try and find the best approach for simulating fields that were generated by the transducer. So we did that by looking at the different errors that were introduced by some of the things that I mentioned before, to see which of those were most important and how we could reduce them, basically. So one of the things we did was we found the actual positions of the elements in the array relative to each other, using the holography methods that I just mentioned. And we looked at what would happen if you used those measured positions to simulate the field generated by the array, compared to if you used the positions that we kind of assumed the elements were sitting in when we designed the housing of the transducer. And then we also looked at what the difference was if we used the size of the element that the manufacturer told us, compared to one that we'd obtained from a measurement. We also looked at the difference between assuming all the elements behaved in the same way as each other, meaning that there's no difference in the pressure they generate when they're given the same signal, whereas actually, because they're all slightly different to each other electrically, they generate slightly different signals. And then we looked at what happens if you allow them to vary from each other, using the information that we got from the measurements, or if we added in this prediction of the electrical crosstalk between each of the elements. And the idea was that we just kind of compared all of the simulated fields, with all of these different ways of doing it, against measurements that we made, to see which ones got us the closest to the measured fields. And that would give us an idea of what is most important to know about the transducer and how it behaves, and what we should do to make sure that we are modeling them as accurately as possible.
Okay, great. So tell us a bit about the transducer you used for this study. Can you explain the importance of why it was designed the way it was?
Yeah, it was made of 32 individual, small, three-millimeter-wide, flat circular elements that we had placed in a 3d-printed housing in a pseudorandom distribution. So they're kind of almost randomly placed, but with some constraints, so they didn't get too close to each other, and so on. And that was on a curved surface, so there was a natural focus; it was kind of a bowl-shaped transducer. And having a number of elements, 32 in this case, gave us the ability to steer the field around. And then there's a natural focus because of that curve. The frequency that the elements resonated at was 550 kilohertz, and that's suitable for getting through a human skull without too much absorption, but high enough that the focal regions don't start to become too big, which is what happens as your frequency goes lower. So yeah, I think that's mostly it. We used these individual elements in a 3d-printed housing so that we could move them around if we wanted to make a different transducer, which always makes things a bit easier. So yeah, that was kind of the rationale behind that.
Well, there you go. Yeah. So you mentioned that the transducer elements were modeled as a set of identical circular piston radiators? Why did you choose to model the elements in this way? What effect did it have?
So actually, for these small elements, as I said, they're three millimeters across, and the frequency that they vibrate at is 550 kilohertz. So that means that they're actually only about one wavelength across, which means that they do actually have a pretty uniform pressure distribution. So if you look at the pressure on the surface of the source, you get a sort of Gaussian distribution of pressure, and that's equivalent to what you get if you had a uniform source velocity, so if the surface of the source was all moving at the same time. And that's how we would model an ideal radiator in the simulation code that we use. So actually, it's a pretty good fit, in this case, to model them as ideal radiators. So we looked at a number of the elements to see their behavior, and decided that we could actually represent them all pretty well by using a single ideal source with, you know, just one size that represents everything. And that makes things a lot simpler, just having one source that you place around the different positions in your simulation. And it was fine, because there wasn't much difference in this case.
Okay, that makes sense. What is electrical crosstalk? How do you predict it with your model?
So electrical crosstalk is basically when a signal transmitted in one circuit causes interference, or a signal to be generated, on another nearby circuit. So in this case, we're talking about the wires that connect the signal generator to the elements, and the elements themselves. So the voltage on one wire that goes to one element causes a voltage to be induced on another neighboring wire and element. So we tried to predict this by using some measurements to work out what the relationship was between the electrical input signals that we set and the acoustic outputs. So a signal was sent to each of the elements, and then the acoustic output on each element was measured, meaning the pressure on the surface of each element. So we know that the crosstalk signal caused by one element on another depends on the distance between the wires. So if the wires are close together, you get a bigger crosstalk voltage generated than on ones which are further away. So we could open a piece of the cable to have a look at where all the wires were and how far away they were from each other. And then we looked at how different the elements were to each other by measuring the pressure they generate when you drive them individually. So that gives us some information about that as well. And then we made some measurements under several different conditions. So we had different input signals, and then we measured the different acoustic outputs on the elements. And then we sort of put all this together to find a matrix that we could multiply the electrical signals by in order to get the acoustic output signals. And we kind of arrived at that by doing some optimization and making everything match up mathematically, and that seemed to work in the end.
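A matrix that maps electrical inputs to acoustic outputs, like the one described here, can be estimated by least squares from a set of drive/output measurement pairs. The sketch below uses synthetic data standing in for the real measurements; every name, number, and the least-squares fitting choice are hypothetical illustrations, not taken from the paper:

```python
import numpy as np

# Hypothetical measured data: each column of E is the set of drive signals
# applied to the N elements in one test condition; the matching column of A
# is the acoustic output (surface pressure) recovered for each element.
rng = np.random.default_rng(0)
n_elements, n_conditions = 8, 20
# A "true" mixing matrix: identity plus small off-diagonal crosstalk terms.
M_true = np.eye(n_elements) + 0.05 * rng.standard_normal((n_elements, n_elements))
E = rng.standard_normal((n_elements, n_conditions))   # electrical inputs
A = M_true @ E                                        # acoustic outputs

# Solve A = M @ E for M in the least-squares sense. Transposing turns this
# into the standard form (E.T) @ (M.T) = (A.T) that lstsq expects.
M_fit_T, *_ = np.linalg.lstsq(E.T, A.T, rcond=None)
M_fit = M_fit_T.T

# Predict the acoustic output for a new, unmeasured set of drive signals.
e_new = rng.standard_normal(n_elements)
a_pred = M_fit @ e_new
```

With more measurement conditions than elements, the fit is overdetermined and recovers the mixing matrix well when the model holds; real data would of course be noisy and complex-valued.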
Okay. Tell us about how you simulated steered fields.
Yeah, so the steered fields, these are fields where the focal point is moved to some position that's away from the natural focus of the array. So we do that by introducing time delays between the signals sent to each element, so they all arrive at the intended point in phase, or kind of at the same time. So to work out what those delays are, we can just look at the positions of the elements and the position of the intended focus. So you just work out the relative distances between each element and the focal point, and then add some time delays to the signal to each element to compensate for that. So in this case, we calculated those phases, and we applied them in the experiment to steer the fields generated by the transducer. Then for the simulation, we took these input signals and multiplied them by the matrix that we found, in order to map them to the acoustic output pressures on each element. And basically, that told us what acoustic source signals we needed to put into the model in order to get the field with the focus steered to the right place. And then it was those acoustic output pressures that were put into the simulation, and the simulation was run using a function called the acoustic field propagator in the k-Wave toolbox.
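The delay calculation Elly describes, equalizing the travel time from every element to the intended focus, is simple geometry: the farthest element fires first and the nearest waits longest. A small sketch (the function name and water sound speed are my own assumptions, not from the paper):

```python
import numpy as np

def steering_delays(element_positions, focus, c=1482.0):
    """Time delays (s) so signals from all elements arrive at `focus` in phase.
    element_positions: (N, 3) array of coordinates in metres; c is an assumed
    sound speed in water (m/s). Illustrative sketch only."""
    d = np.linalg.norm(element_positions - focus, axis=1)  # path length per element
    t = d / c                                              # travel time per element
    return t.max() - t                                     # farthest element: zero delay
```

Adding each element's delay to its travel time then gives the same arrival time for every element, which is exactly the in-phase condition at the focus.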
Okay, so how did you validate the accuracy of these simulations of steered fields? And on a similar note, how did you evaluate errors?
We looked at the accuracy by comparing the simulated fields to the measurements that we made. So we made measurements under a bunch of different conditions, 10 different focal positions. So basically, we applied these phases to steer the field to a particular place. And then we used a hydrophone, which is like an underwater microphone, and aligned that with the focus, so found the position where the pressure is highest. And then we made some measurements along the lines that pass through that focus, and that's what we compared the simulations to. So we looked at the amplitude of the focus, the focal pressure, the position of that focus, and then the mean squared error across those scan lines between the measurement and the simulation. And we did that for all those different sources of errors: the manufacturer's element size, the positions of the elements, the crosstalk. And all of those different conditions worked out to be 12 different combinations, so we averaged the errors from the 10 different fields for each of those 12 sets of simulations. So we got basically a couple of numbers for each different case. And we found that the biggest factor was the crosstalk. If you didn't include that in the simulation, then the difference in the amplitude of the focal pressure was actually about 20%. And when you did include it, it reduced to a 3 to 4% difference between the measurement and the simulation. And the important thing for the position of the focus was using the measured positions of the elements. So if you didn't use those, the difference between the measurement and the simulation was about three and a half millimeters in the position, and when you did include those, it reduced to about one and a half millimeters.
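The comparison metrics described here (focal pressure amplitude error, focal position error, and a mean squared error along a scan line) can be sketched like this. This is a hypothetical illustration of such metrics on a 1D scan line, not the paper's actual analysis code; all names are my own:

```python
import numpy as np

def focal_errors(p_meas, p_sim, dx):
    """Compare measured and simulated pressure along a scan line through the
    focus, sampled every dx metres. Returns the relative focal-pressure
    error (%), focal-position error (m), and a normalised mean squared
    error across the line. Illustrative sketch only."""
    i_m, i_s = np.argmax(p_meas), np.argmax(p_sim)          # focus = peak sample
    amp_err = 100 * abs(p_sim[i_s] - p_meas[i_m]) / p_meas[i_m]
    pos_err = abs(i_s - i_m) * dx                           # peak offset in metres
    mse = np.mean((p_sim - p_meas) ** 2) / np.mean(p_meas ** 2)
    return amp_err, pos_err, mse
```

For example, a simulated line whose peak is 10% low and shifted by two samples would report a 10% amplitude error and a position error of two grid steps.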
Oh okay, so what were the big takeaways that you got from the study?
So I think the main thing was that, for this particular transducer, electrical crosstalk had quite a significant effect, but that we could account for it. Well, in this case we didn't really correct for it, we just predicted what it would be. So we showed that for this transducer, when you take the effects of electrical crosstalk into account, you can actually get a much more accurate simulation or prediction of what the field will be: the field distribution, the amplitude, and the position of the focus. So that's really important if you're doing patient-specific steering, or correction of distortions to focus the sound, as you can't characterize the array under each driving condition, but you need an accurate simulation of the field. So hopefully, it means that for these kinds of transducers where the crosstalk is mainly electrical, you can actually predict that and still get an accurate simulation. We still have some work to do looking at how that works when the crosstalk is mechanical rather than electrical. But yeah, we're working on that.
Well, that's really great. Thank you so much for taking the time to talk with me today. You've given us a lot of interesting insights about how simulations are used in therapeutic ultrasound. We really appreciate it. Have a great day.
Thank you. Thanks for having me on the podcast.
Thank you for tuning into Across Acoustics. If you'd like to hear more interviews from our authors about their research, please subscribe and find us on your preferred podcast platform.