The Language Neuroscience Podcast

‘Wired for words: the neural architecture of language’ with Greg Hickok

Stephen M. Wilson · Season 5, Episode 35


In this episode, I talk with Greg Hickok, Distinguished Professor of Cognitive Sciences & Language Science at the University of California, Irvine, about his new book ‘Wired for words: the neural architecture of language’.

Hickok G. Wired for words: The neural architecture of language. MIT Press; 2025.

Key Hickok papers:

Hickok G, Poeppel D. The cortical organization of speech processing. Nat Rev Neurosci 2007; 8: 393-402. [doi]

Hickok G. Computational neuroanatomy of speech production. Nat Rev Neurosci 2012; 13: 135-45. [doi]

Hickok G, Houde J, Rong F. Sensorimotor integration in speech processing: Computational basis and neural organization. Neuron 2011; 69: 407-22. [doi]

Hickok G, Buchsbaum B, Humphries C, Muftuler T. Auditory-motor interaction revealed by fMRI: speech, music, and working memory in area Spt. J Cogn Neurosci 2003; 15: 673-82. [doi]

Matchin W, Hickok G. The cortical organization of syntax. Cereb Cortex 2020; 30: 1481-98. [doi]

Hickok G, Venezia J, Teghipco A. Beyond Broca: Neural architecture and evolution of a dual motor speech coordination system. Brain 2023; 146: 1775-90. [doi]

Rogalsky C, Basilakos A, Rorden C, Pillay S, LaCroix AN, Keator L, Mickelsen S, Anderson SW, Love T, Fridriksson J, Binder J, Hickok G. The neuroanatomy of speech processing: a large-scale lesion study. J Cogn Neurosci 2022; 34: 1355-75. [doi]

Rogalsky C, Pitz E, Hillis AE, Hickok G. Auditory word comprehension impairment in acute stroke: relative contribution of phonemic versus semantic factors. Brain Lang 2008; 107: 167-9. [doi]

Hickok G, Okada K, Barr W, Pa J, Rogalsky C, Donnelly K, Barde L, Grant A. Bilateral capacity for speech sound processing in auditory comprehension: evidence from Wada procedures. Brain Lang 2008; 107: 179-84. [doi]

Other papers mentioned:

Wilson SM, Entrup JL, Schneck SM, Onuscheck CF, Levy DF, Rahman M, Willey E, Casilio M, Yen M, Brito AC, Kam W, Davis LT, de Riesthal M, Kirshner HS. Recovery from aphasia in the first year after stroke. Brain 2023; 146: 1021-39. [doi]

Risse GL, Gates JR, Fangman MC. A reconsideration of bilateral language representation based on the intracarotid amobarbital procedure. Brain Cogn 1997; 33: 118-32. [doi]

[Music] Welcome to episode 35 of the Language Neuroscience Podcast. I'm Stephen Wilson and I'm a neuroscientist at the University of Queensland in Brisbane, Australia. Just a quick heads up, I have a research position available in my lab right now at the University of Queensland to work on imaging aspects of our NIH-funded project, Neural correlates of recovery from aphasia after stroke. This is a level A or B research-focused postdoctoral position, which is open to international applicants, and UQ will sponsor a visa for the successful applicant. If you or anyone you know is interested, please go to langneurosci.org/join to learn more. Okay, well my guest today will need no introduction for most of our listeners. I'm really pleased to be joined by Greg Hickok, Distinguished Professor of Cognitive Sciences and Language Science at the University of California, Irvine. Greg is one of the most brilliant and influential scientists in our field, and he has a brand-new book, "Wired for words: The neural architecture of language", coming out from MIT Press on November 25th, the same day that I'll release this podcast episode. Today we're going to chat about the book, which lays out Greg's up-to-date model of language processing in the brain, building on his previous work and of course 150 years of findings in our field. We'll talk about the dual streams, the sensory theory of speech production, dorsal and ventral precentral speech areas, and last but not least, our diverging views on the laterality of the ventral stream. Okay, let's get to it. How's it going? It's going well. As well as can be, trying to do science in the United States these days, but how are things with you? Pretty good. As you know, I moved to Australia a few years ago. And how about you? Like, how's your life going apart from, you know, science challenges? Life's pretty good. Life's pretty good. I can't complain. We have a grandkid now. Oh wow. Yeah. He is already three years old, living in Nashville, oddly enough. Oh, okay. Older daughter lives in Nashville. What's she doing there? She's, they were in Arizona and didn't want to be in Arizona and didn't want to come to California, and settled on Nashville as a place. Okay. Is it affordable and interesting in some ways? It is. Well, it's not affordable anymore. But you know, your daughter has really replicated my life of moving from California to Arizona to Nashville. (Laughter) Yeah, exactly. Yeah, clearly. Maybe she'll hit Australia next. It'll be good for her. Maybe not so good for you. Yeah. So, are you still getting to the beach a lot and taking advantage of Southern California? Oh, yeah. Yep. I head down to the beach a couple times a week, not surfing much anymore, but paddling. I have like a racing-style canoe, like the surf skis they call them, that are just kind of like an outrigger without the outrigger. So, I go paddle that around for a few hours a couple times a week. Okay. So, that's more your speed than surfing these days? Well, yeah, it's just easier to get out there, and you know, it's gotten crowded in the water and sharky, you know, you see sharks out there now. Oh, wow! More like Australia. So, yeah, great whites. Oh, goodness. Yeah, I'm good with paddling and canoeing around here, kayaking around. Okay. Yeah, there's been quite a few shark attacks in Australia this year. Pretty, pretty shocking ones sometimes. But it doesn't stop my family from playing in the water. Right. Yeah.
Yeah, exactly. Okay. So, you've got a book that's coming out. We're going to chat about that. When is the, do you know when the book is coming out? Like, when's it going to be officially released? Yes, it's coming out I think the 24th or something. Okay. A couple weeks from now. Okay. So, just for context, we're talking on the 13th for me, the 12th for you. And it usually takes me at least a week to edit anyway. So, we'll release our conversation on the same day that your book gets released. That's awesome. And yeah, it was really fortuitous, because I emailed you to see if you wanted to chat, not knowing that you had the book coming out, and then learned that you did, and that was perfect, because I've got a chance to, you know, read your current opinions about all these things that you've been working on for 25, 30 years. And then it gives us kind of, you know, a big, big picture thing to talk about. So, yeah, thanks for making the time to talk with me. Yeah, of course. I'm excited to do it. I think you're the first person I've talked to about the book besides a couple of students. Oh, that's cool. Yeah. Media-wise, anyway. The first and only podcast about the scientific study of language in the brain. So, what is your book? Is it a textbook, a memoir, or something in between? I think it's definitely not a textbook, although, so I wrote it with my students in mind. I've been teaching this course, Language in the Brain at UCI, since 1997. And there's no textbook; there wasn't one, so I thought of writing this book, and I decided I would write a textbook myself and pitched it to MIT Press back in probably 1999, and it got approved and I got a contract around the year 2000 to do it. But then that was around the time when David Poeppel and I were developing the dual stream model, and I'm like, oh my god, I have to figure out how this stuff is working before I can write a textbook on brain and language. And then time got away, and I kept working on other things and wrote a book in between, but then decided I owed MIT Press this book. So, I got back to it in 2016 or so. And by then David Kemmerer had his textbook out, which is quite good, and there was no point in writing another textbook. So, I decided to make it a kind of, that's pretty high praise for David Kemmerer, by the way. Like, he'd already written one, so there's no point writing another one, but yeah. Yeah, I thought it was quite good. David and I joke that his description of the dual stream model was way better than ours. (Laughter) We thought it was pretty, you know, it was pretty good. So yeah, I like his textbook too. Yeah, I asked MIT Press, my editor there, if I could just write a monograph, kind of my view of the world of brain and language, and I wrote it basically following the content of my course, and tried to make it accessible to undergraduates, at least with some guidance. So that was kind of my target for this book. Yeah, so it's kind of in between. Yeah, and so having taken a look at it, I can share with our listeners that, okay, first of all, you should definitely read it. I loved it, for many reasons. One of which is just that it's written in such an accessible, conversational way, right? So, like, I know you and I've been chatting with you for decades, believe it or not. But when I read it, I just hear your voice, you know, it's just like, this is Greg talking, and it doesn't, it doesn't read stuffy at all.
It just kind of reads like, well, here's what I thought, and then I thought I might be wrong. So, then we tried this and it turned out I wasn't wrong. I was definitely right all along. (Laughter) So, you know what I mean? Yeah, yeah. Did you do that deliberately or is that just the only way you can write? It's kind of my style. But yeah, I did want to make it accessible and interesting. I don't know, stuffy textbooks are just kind of boring to read, and scientific papers are kind of boring to read. And I just wanted something where I could go through this stuff without pulling punches on the details, but trying to make it accessible to people who were really interested in understanding bits and pieces about how this system worked. And it's really not just for undergrads by any means, right? I mean, I think that all of our colleagues in the field could take a lot from this book. Yeah, that's right. I wrote it to be accessible to undergraduates, but I was writing these arguments and referencing everything so that it would be a perfectly legitimate argument for the practicing scientist, for sure. Yeah, cool. So, I don't know if you've ever listened to my podcast, but I always kind of like to talk to people about how they got into the field, how they developed their interests. And you sort of tell us a bit at the start of the book, and I'll read a quote here. It says, "This is a book about the biology of language, my primary area of research. Oddly enough, I never much liked the topic during my college years." And then you go on to explain how you got interested in the neuroscience of language. So can you just share that back story with us? Yeah, sure. So, I was interested in mind and brain as an undergraduate, and anything mind and brain fascinated me except the language stuff. I was in psychology, and you take classes and learn about language. And a lot of it is just like, "Oh, there's these nouns and verbs, and then there's phrase structure, and there's this and that." And it's just like, yeah, it never excited me. It was nothing I was interested in. So, yeah, it bored me, and like I said in the book, I remember picking up a copy of a neuropsychology book and realizing that if I wanted to do neuropsychology someday, I'd probably have to learn about these aphasias and stuff. And I was like, all right, fine, I'll learn about it, whatever. But then I happened to get into grad school at Brandeis University, where the advisor that I would, uh, work with doing neuropsychology was Edgar Zurif. He was a famous aphasiologist, and I was like, okay, I guess I'm learning language. But how did you get paired up with Zurif if you weren't interested in language? That was an accident, actually. So, I was applying to grad school. I applied to MIT, Johns Hopkins; I wanted to work with the neuropsychologists in those kinds of places. My undergraduate advisor was someone named Mary-Louise Kean, who was an aphasiologist, and I was doing face perception research with her because that was interesting to me. And I was applying to programs, and she said, hey, you should apply to Brandeis. That's a good program, and Edgar Zurif is there. And I was like, well, I don't really want to do language, but okay. And that's the program I got into. I didn't get in at any other place. And so that's where I went. And so, I started learning language. That's just an interesting little side note there, isn't it?
For those listeners who are struggling to get into grad school, and wondering what grad school they're going to get into, and sending all these applications: you only got into one grad school, and then went on to become you. (Laughter) Yeah, it was my backup plan, and I got in and said, okay, I'm going to do this, I guess. That's great. That's a great story. So, I mean, so did Edgar get you interested in it? You know who got me interested in it? I tried to read some of his papers before arriving in Massachusetts, in Boston. And it just still wasn't that interesting to me. I was reading his papers and, all right, whatever. But I took a graduate course on syntax from Jane Grimshaw. And she's a well-known linguist doing syntax. And I just jumped in, and she taught the course, and I remember the very first homework assignment I got, where we had to analyze some sentence structure. And I remember going to one of my fellow grad students and saying, I don't even have any idea what format an answer is supposed to look like. It was a phrase structure kind of set of rules that she was looking for. And that was my beginning. But what I learned from Jane was really important: that language is this kind of internal system that generates these things. It's part of how the mind works and how language gets learned and processed. And I realized that it was really interesting. That it wasn't what I thought it was. And so, I got interested. And then kind of went into hardcore, not syntax syntax, but I did straight up psycholinguistics. And when I finally did start diving into aphasiology, it was grounded in serious, Chomskyan kind of theoretical syntax, if you read my early papers. Yeah, I have done. Sorry, my dog's just decided to start barking at nothing, probably. Molly! I can edit that out. I can't even hear it. Yeah, Zoom cancels these things. Okay, so yeah, I've read those papers of yours from the late 90s. It doesn't read like later Greg, but it's like formative, right? Yeah, yeah, there's another interesting story. So, as a third year graduate student at Brandeis, I took a course by Steve Pinker and Alan Prince, who were then all about the past tense and, you know, connectionist critiques and all that sort of thing. And as my paper project for that course, I did a paper proposing an experiment in aphasia studying past tense production. And Steve liked it and eventually invited me to become a postdoc at MIT with him, at the new McDonnell-Pew Center for Cognitive Neuroscience, which I went and did. But I wasn't terribly interested in doing aphasiology, and ended up working with Ted Gibson on some straight up psycholinguistic stuff. And at the end of my first year, they were going to kick me out of MIT because I wasn't doing neuroscience, I was just doing straight up kind of cognitive stuff. Why didn't they kick Ted out then? (Laughter) Well, he, yeah, no, he was a graduate student, no, he was a postdoc. I don't know, they probably should have. (Laughter) He might have been under a different program or something, I don't remember. But anyway, I decided that I would start doing syntax and brain, and they told me that I could stay another year if I just started doing that. So that's how it started. Yeah, I didn't know that. I didn't know that back story. That's interesting. So that's how you kind of got into it.
Now, in the book, in the preface, you say you have, and I'll quote, "a tendency to frame the issues in historical context." And I really like that approach. That's the way that I think too. But as you note, and I quote, "some readers don't necessarily want this kind of framing, preferring instead to just get the facts of the modern understanding without mulling over the old ideas and results." So why do you think it's good to think this way? And why do you think this way? I naturally like history. So, I'll go back and see, you know, what happened back in the day. That's just fun for me. But I found that it's useful because it shows you things about where these ideas came from and what kinds of ideas got rejected, and why, and sometimes for the wrong reasons. So sometimes what happens in a field is, someone comes along and says, oh, no, that's a bad idea, and here's why, and everyone buys it. But the argument is bogus. But then the idea is gone, and everyone's forgotten it. So, going back and seeing what the ideas were, where they came from originally, why they were rejected or accepted, and then, you know, moving forward, it gives you a broader perspective on the range of ideas and arguments. And I found plenty of flaws in old arguments, as I try to detail in the book, that helped me understand why we have the biases that we do today and how we can move forward most productively. So, I'm happy to tell those stories. Yeah, I appreciate that framing of everything in your book and in your work in general. And so again, at the start of the book, you talk about how the human brain is specialized for language, which is something that we would obviously agree on, and most of our listeners would probably agree on. And you raise this interesting question about whether it evolved from scratch, or did it evolve by tinkering with what was lying around. So, can you share, like, what's your perspective on that, and what ramifications does it have for understanding the neurobiology of language? Yeah, so it's always been difficult to be a student of the neurobiology of language. Whereas if you're studying vision, you can go and study cats or macaques; we have all this information from homologous visual systems where we can kind of jumpstart the process of understanding how things are working in humans. I grew up with the idea, the belief, that there was no homologue. There was no animal model for language. You know, obviously, because we're the only speaking animal, at least at the level of sophistication that we have in the world. So how do we make progress? And if that's true, for the only speaking animal, or linguistic animal, how did we get it? How did it start? That's been, of course, a fundamental question that people have wondered about for a long time: how language evolved. And the problem is that evolution doesn't really work like that. It doesn't work by inventing brand new things. It tinkers with existing systems, modifies them, and it can modify them through descent quite substantially so that they become quite unique. But they have to come from somewhere. So where did language come from? And people have proposed various ideas. But yeah, that's kind of the basic problem with that issue, and why I think the approach that I've taken kind of helps solve that in some ways. Yeah.
So, basically, your perspective, which I share, is that it evolved by tinkering. And then as you explore all the different aspects of the language network, you always kind of relate them to other principles of neuroscience, or principles of brain organization in different modalities and systems. Yeah. Yeah, for sure. I think this was a natural thing for me to do. I didn't consciously start doing this. But back in the 90s, I was reading Milner and Goodale's book on the two visual streams, The Visual Brain in Action, I think it's called. And I saw a lot of parallels to language. And so that kind of drew me to start thinking about language in those terms. And it helped me organize how I thought about a very wide range of data in aphasiology and in functional imaging. And ultimately it led to the dual stream model that David and I proposed in the 2000s. And so, I realized that that kind of approach, thinking in terms of evolutionary homologies, was helping me think of things that I wouldn't have thought of otherwise and to develop hypotheses to test. And they turned out to be reasonably accurate, in my view. So yeah, I think it's a really helpful approach. Yeah, cool. So, let's start talking about the model that you develop in this book. And it's got several different components to it. But you just mentioned the dual stream model, and obviously that's what you're very well known for. And it's also a prominent part of this book. You know, your 2007 paper has almost 7000 citations, and the earlier papers have many thousands as well. And this kind of pervades, I'd say, the first half of the book. So can you talk about the dual stream concept in the visual domain, just briefly, for those that aren't familiar with it, and then talk about how you developed those concepts in the language domain? Yeah, sure. So, in the visual domain, conceptually the argument is pretty simple. As you're looking at Stephen's image in the video here, you see a microphone sitting there in front of him. And there's two things you can do with that information. Conceptually, one is that you can look at it and understand what it is, and map it to some semantic representation, and understand that it is a recording and amplification device and it's doing something. It's meaningful to you. On the other hand, if you were in the room with Stephen, you could reach out and grab it. Now, those two tasks are fundamentally different. It's going to be a microphone whether it is in its place now, whether it was above Stephen's head, lying on the table, upside down; it's still a microphone, no matter its position in space or orientation. And maybe there's a little mini microphone; that's, you know, the same semantic category, but a different size and shape. So, it doesn't change the semantic content, but what it does change is the way you might interact with it if you're going to reach and pick it up. So, all those features that don't matter for categorizing semantically suddenly matter for everything when you're trying to reach and grasp it. So, the idea there is that the brain perceives information and processes it along two different streams, one for conceptual understanding, that's the ventral stream, and one for motorically interacting with that object. So those are 'the what' and 'the how' streams, respectively. So that's the visual domain, and the arguments were laid out nicely by Milner and Goodale. I encourage folks to read it.
And it occurred to me that something very similar is happening in speech and language. So, as you're listening to my words, there's two different things you can do with them. You can hear them and understand them, or you can take those perceptual signals, those phonemic sequences, and map them onto your own motor system so that you can say the same words. And obviously, we do that in development as part of learning, because you hear words and speech sounds in the environment and you figure out not only how to make sense of them in terms of their meaning, but you figure out how to reproduce those words and those phonemic patterns yourself. So those are two separate kinds of mappings, which we hypothesized were a ventral and a dorsal stream system. And that was kind of the foundation for thinking about speech processing in terms of these two streams. Yeah, cool. And so, you know, you talk about that historical perspective that you bring to everything, and you discuss, in your papers from the first decade of the century as well as in this book, the relationship between the dual stream concept and the Wernicke-Lichtheim model. So can you talk a bit about, you know, to what extent were those old guys barking up the right tree, and where did they kind of not quite have it right? Yeah, so Wernicke sometimes doesn't get respect in the modern world. We all know that it's not right in all details, and there's a tendency, when models aren't working very well for this or that, to toss them out. They don't do anything for syntax, or, you know, any of the more complicated linguistic things that we think about these days. But they were onto something. Wernicke's model was the first dual stream model; it predates the visual ones. We always think of the Hickok-Poeppel dual stream model as derivative of the Milner-Goodale two visual streams, and, going back further, the Ungerleider and Mishkin model with the 'what/where' system. But Wernicke proposed essentially a dual stream model where you had information coming into the auditory cortex, and it was sent in two different directions. One was to the motor system, one was to the conceptual system. That is essentially the dual stream model. And in fact, our dual stream model was merely an elaboration and reframing and updating of Wernicke's original model. So, I don't take credit for that at all. I think it was already there in the literature. And he even talked about important concepts like what we today call feedback control in motor planning. He talked about it in terms of a corrective function of the auditory system on motor cortex. So, it was quite advanced at the time. Absolutely. Yeah. I like how you always draw those connections throughout the book to the history of all the ideas that you talk about. So, I'd like to talk more about the ventral stream at some point, and that's the question that I raised that I wanted to talk to you about, especially laterality. But maybe we'll save that for later in our discussion. Moving forward with the dorsal stream, which perhaps is sometimes less of a focus for other people reading your work, but I think might be very much a focus of your own interest, especially in this book. I think you really develop your dorsal stream ideas more. So, can we talk about what goes on in the dorsal stream? You describe it as the sensory theory of speech production. So, what's that all about? Yeah.
So, one important thing, just as a prerequisite before we start talking about this: the dual stream model, as David and I developed it, and as Rauschecker and Scott later developed it in their version, only makes sense from the perspective of auditory perception. So the dual stream model is a very perceptually grounded model. And I think people misunderstood it and took it as a model of language processing in general. Like, some people, I remember, would say, oh, the dorsal stream is involved in phonological processing. So, the way that we comprehend words is we take phonological information, process it in the dorsal stream, and then you come back into the ventral stream for comprehension. But that's not at all what we were saying. So, it's important to think about the dual stream model as making sense from the perspective of the auditory system. From the auditory system, speech information comes in and it can go in two different ways. If we're just talking about producing speech, naming pictures, natural volitional production, the dual stream model doesn't make much sense, because you're going from concepts, wherever they're represented, and we have ideas that I laid out in the book, to kind of morphosyntactic or lemma level things, to phonological level things, and then on out to the motor system for speech motor planning and such. And that's just one stream, from concepts to lemmas to phonological output. There's no dual stream involved at all. It involves in fact both of the systems that we talk about in terms of the dual stream, the ventral and the dorsal part. So that's the first thing. So, what's the point of the dorsal stream? The idea there is that it was mostly developed to explain the architecture of the phonological system for speech production. And the basic idea is derived from another field, motor control theory, which is all about hitting sensory targets with your actions. So again, let's think about reaching and grasping. If you're going to reach for Stephen's microphone, you are taking sensory information about its location and its orientation and its size, and you're using that to guide some action towards it. If you didn't have that sensory information, you could never plan the movements or the motor actions to do the grasping. So, movement planning is grounded in sensory systems in that sense. And the idea is that linguistic processing is grounded in a similar way. So when we're producing a word for a microphone, say, even though it's not physically out in the environment, there is a sensory or auditory-related code for that sound pattern that's stored in your brain, in auditory-related cortex, because you've heard that word over and over again. And so that is the sensory-related target that your phonological planning system is aiming to hit, so to speak. And so it relies on this posterior auditory-related target that is then translated into motor phonological plans and then executed. So that's what the function of the dorsal stream is: to integrate posterior targets that are sensory-related in an abstract way with motor plans for hitting those targets. Yeah. Okay, that's so fundamental. And you know, you give this really nice example in the book that I think drives home why this is the right way to think about things. And it involves putting a pencil in your mouth.
And so, our listeners are not going to see this because it's an audio podcast, but I have a pencil and I'm going to put it in my mouth. And your point is that if you put a pencil in your mouth, you can still talk. So I'm going to try that right now. I'm going to put the pencil in my mouth. Okay, now I'm holding a pencil between my teeth, and I can still quite effectively produce a sentence that should be intelligible. And I had to make some very dramatic motor accommodations to achieve that, which was completely subconscious, obviously. But I think it's a powerful demonstration of this concept that in speech production we are trying to reach sensory goals, and that we've got this system that's very flexibly able to do that. Yeah. Yeah, that's right. That's a good example. It's not just that we are controlling the position and trajectory of articulators. We are aiming for an auditory target, which is a point that Frank Guenther has made repeatedly and done a great job demonstrating. And you see that in lots of other paradigms too, like altered auditory feedback (AAF), where we will quickly accommodate to perturbations in what we hear ourselves saying. So yeah, lots of evidence for that. But the pencil test is the easiest demonstration. Yeah, it makes the point very strongly. So how did you come to work on this particular question of how these auditory motor transformations work in speech production? Yeah, so we had proposed the dual stream model, and the dorsal stream involved three main components. There was a phonological target system, which we basically think of as the posterior superior temporal sulcus, which stores the sound-related phonological representations of words. Then there are motor phonological plans, that is, the motor plans, in whatever format they are, aimed at hitting those targets, and a translation system in between. And the reason why I ended up focusing on the dorsal stream so much is because that was the one part of the model that we didn't have direct evidence for. We proposed an in-between area that was doing this transformation, and that idea came directly out of monkey research on visual motor grasping, where they had identified regions in the intraparietal sulcus (IPS) that were involved in transforming visual inputs into motor plans for controlling eye movements or grasping actions. So, we thought, well, if speech works the same way, there's got to be a translation system in between, an integration area that's doing this kind of coordinate transformation, as they call it in the visual world. And so we went looking for it in fMRI. So, we spent a good chunk of the 2000s identifying this circuit. We called that translation system area Spt, for Sylvian parietal-temporal. Although informally in the lab we just called it the 'spot', and that's how we reverse engineered the acronym Spt. (Laughter) So, we had identified this and done what I think is a pretty decent job in arguing that it was doing some sensory motor, auditory motor function. But then people would always say, well, what do you mean by transformation? What do you mean by integration? And I would always say, I don't know yet. I'm just working on establishing the area first, and then I'll figure out what it's doing computationally. So, by the end of the 2000s, I decided to turn my attention to what it's doing computationally. And I had two places to look.
One was psycholinguistics, where you had detailed models from Pim Levelt and Gary Dell and other people about how we produce speech. And then you had the motor control people, like Frank Guenther and many others, who were doing experiments on altered auditory feedback (AAF) and these sorts of things, developing computational models. So those were the two places I could look for what might be going on computationally. And the odd thing that I discovered was that they seemed to be studying exactly the same thing, even at the same level if you look closely. But they were not talking to each other, assumed that they were studying different things, and used completely different vocabularies. So, I spent a couple of years just going over all the motor control literature, including visual motor stuff, and going over the psycholinguistic literature to see if I could somehow try to integrate these things. And I found that both had truths and that some form of integration was a worthwhile enterprise. And so that's how I started getting into motor control principles and integrating them with my ideas about how linguistic processes work. Yeah, so I think it's your 2012 paper where you first lay out that model, right? And then it gets developed more in this book, or maybe not in more detail, but maybe conceptually more. Yeah, it's kind of maybe organized a bit better. My first foray was actually with John Houde, who is a motor speech person at UCSF and was a fellow MIT student with me when I was there; David Poeppel and John Houde, we were there, and John was doing this interesting motor speech stuff that I had no idea what it was. But we teamed up because I looked at John's motor speech models and tried to see if we could integrate them. So, we wrote a paper kind of arguing for some form of an integration. And then the 2012 paper extended it a bit and put it in more context. Yeah, but it is laid out a bit more clearly, I hope, in the book. Yeah, I think it's more accessible. I mean, I remember reading the 2012 paper back in maybe 2012 or so. And I liked it. But I feel like I understood it better now. Although maybe that's just me growing up. But yeah, so it ends up being this kind of hybrid of a motor control approach and a psycholinguistic approach. Do you think you can explain how exactly you meld those things together? Like, yeah, those two different streams of research? Yeah, for sure. It's a fairly simple idea. So, motor control architectures basically have three parts: sensory targets, a motor execution or planning system, and then an in-between translation system. And then you have computational operations, like when you're motor planning, you can check to see whether the plans that you're developing will match the targets. And if they don't, you can do some error correction process. This is called feedback control. And there are internal and external forms of that as well. And in psycholinguistics, you have models that propose linguistic levels. Like, if you look at Dell's model, you have a semantic layer of processing, you have a word level or lemma level of processing, and then you have a phonological level of processing. So, three stages of processing. The idea that I had was that maybe each level of processing, focusing on the phonological first, maybe the phonological level of processing in psycholinguistic models is actually composed of three parts.
Like a motor control architecture, it has a phonological target system that's more related to sensory systems. It has a phonological motor system that's planning at the phonological level to hit those targets once executed. And then you have a translation system in between. And so that's basically the idea. That's what I proposed for how phonology looks. It's not just one box of units. It's three different boxes that are doing different things for phonological output planning. So, what else apart from phonology gets that sort of tripartite division in your model? Yeah, so I started with phonology, but of course I was always thinking, if you look back at the models that I drew, like in that 2012 paper, I have this three-part division of labor at multiple levels, in a hierarchy. So, there's a low level of kind of phonetic control, which involves low-level motor cortex, low-level somatic sensory cortex on the sensory side, and the cerebellum as the in-between translation system. And then you go up a level and you're in the traditional dorsal stream as we proposed it. So posterior STG, inferior frontal regions, and then Spt in between. And then I drew a single box for the lemma level, the word level, that fed into these systems. And as I was drawing this, I thought, I wonder if this can be divided, or should be divided, into three parts as well. Knowing full well that people doing research on the word level have identified both anterior and posterior regions involved in word selection and planning, but there wasn't enough evidence for me to separate it out yet. So that eventually led to a collaboration with my former student, William Matchin, to develop this idea of a sensory motor-like architecture for morphosyntax. And that was our 2020 paper that proposed that. That didn't have a middle part. It only had the posterior side and anterior side. But just recently, and I think it just snuck into the book as a note, we might have found an in-between area for morphosyntax too. Where do you reckon that is? Inferior parietal? Yes, of course you'd say that. So again, this is theorizing based on thinking about how the rest of the brain works, right? You have the temporal lobe coding sensory stuff, for vision, for audition. You have the frontal lobe coding motor stuff. And the parietal lobe is the seat for visual motor transformation. It's where Spt lives, for what we think is auditory motor translation. So where would the morphosyntax translation system be? Probably in the parietal lobe. And, you know, I went back and looked at some old ideas from David Gow, who was talking about things that made no sense to me at the time. But looking back, I was like, wait, I think he was on to something here. And then really great work by Kathy Price and her group subdividing the supramarginal gyrus (SMG) into different components, one of which seems to be very word-like. And then turning up facts like, people with conduction aphasia, who have damage in that general zone, sometimes have paraphasic speech. They always have paraphasic speech. Not all of them. Some of them also have comprehension problems. And yeah, it gets a little complicated. But we shouldn't start talking about how clean or messy aphasia is, because that will get out of hand. Yeah. And when I say always, I mean in general. But yes. Yeah, no, that makes sense.
So, just to restate, the overarching scheme is that all of these different layers of the speech and language system involve frontal, parietal and temporal components that are respectively motoric, translational sensory motor, I guess, because I said parietal in the middle, didn't I? And then sensory, or connecting to the conceptual system, I guess. Yeah. So, I get in trouble for saying stuff like this, because the linguists want to say, oh, you're just reducing language, the structure of language, to sensory motor systems. But what I'm not saying, and I try to hammer this in the book, I'm not saying that this is just a sensory motor system. I'm saying that this has all the richness of, you know, whatever linguistics decides or determines is in the system. It's just distributed over this kind of abstract sensory motor-like architecture. Yeah. Which is actually completely orthogonal to the linguistic divisions, right? And it's interesting, right? Because then you end up basically saying, well, yeah, every aspect of language is going to be frontal, temporal and parietal. And there's lots of evidence for that, right? From both imaging and aphasia. And you start to see how someone like our mutual friend Ev can come to the perspective that the language network is a rather unitary kind of structure. And I think that both you and I would tend to not agree with her on that final conclusion, but you can kind of see the conditions, the evidence, that leads her to think that way, right? Absolutely. Yeah. Because all of the facets do kind of end up, you know, having their anatomical substrates throughout the network. That's exactly right. Yeah. Language involves that whole system at each level. So yeah, depending on what paradigms you use and whatever data you're collecting, you're going to see multiple systems involved. Exactly. Yeah. It's a very interesting and new idea, and the argument, I think, is nice and clear in this book. And so I think everybody should read it just for that reason alone. You also have a chapter, and I don't want to get into this in too much detail, but I can't help being really interested in your chapter on the parallel hierarchy of speech production areas in the frontal lobe, the dorsal/ventral division, and what you call the dorsal precentral speech area, which obviously, you know, I'm interested in. So, can you tell us, what are your views on the dorsal precentral speech area? Where is it? What's it for? What's it doing? Yeah. So this is interesting. I got interested in this area because in functional imaging studies, in all my Spt auditory motor circuit mapping studies, we would always see this dorsal premotor area that is right at the back of the middle frontal gyrus. If you head back along the middle frontal gyrus and hop over into the precentral gyrus, it's right there. It overlaps area 55b, which is in the Glasser atlas based on the Human Connectome Project parcellation, interestingly, and it constantly shows up. It is one of the strongest language activations. It was the one that showed up in your study, Stephen, as a sensory and motor area in the frontal lobe; the ventral one, traditional Broca's area, didn't show up much. So, it showed up in Kathy Price's early data, her paper on hearing and saying, your work, my work; it keeps showing up.
And we always wondered what it was. I remember a conversation when you first joined my lab for a brief time, and we sat in my office and tried to figure out what this dorsal area was doing, unsuccessfully at the time. We were, you know, interested in what it was doing, and I didn't understand what it was doing until just a couple of years ago. So basically, the insights came from, well, overall, you have what appears to be two hierarchies of motor control or motor planning. One is the traditional ventral one, which involves Broca's area and lower ventral motor cortex, which also activates in my auditory motor mapping studies. But then you have this more dorsal one. One of the breakthrough things came from Eddie Chang's work at UCSF, showing that that region, in intracranial recordings, showed a correlation with pitch height during vocalization. So, it had something to do with pitch. And interestingly, there's been a lot of great work on mapping laryngeal motor cortex in humans. We have two of them. One of them is dorsal, sitting right near this area that Eddie mapped, and it shows up in all our studies of sensory motor processes in speech. So that suggested an anatomical, functional connection between this dorsal speech area and dorsal laryngeal motor cortex. And then work by my grad student, Jon Venezia, had identified this area as being particularly auditory in its response properties. It showed spectrotemporal receptive fields that coded pitch. And so this started me thinking about this area being important for pitch control via the larynx, and then thinking about what higher level functions might also be involved, because that same area, or just anterior to it, also seems to be implicated in syntax at some level or another. And so, the thought was, well, maybe it's prosody, because we know that prosody has something to do with syntax. And you have this dorsal middle frontal gyrus, dorsal premotor speech area, dorsal laryngeal motor cortex hierarchy that's involved in coordinating respiration and pitch for prosody control via the dorsal laryngeal motor cortex. So that's the separation. Prosody kind of gets parceled out of the speech motor control system. And the rest of it, the phonetic control, is the more ventral circuit. Okay, so yeah, to summarize: it's related to laryngeal motor cortex, well, it's adjacent to laryngeal motor cortex, or one of the parts of that. It seems to be involved in pitch and prosody, in contrast to more ventral frontal speech motor areas that are more about articulation. Yeah, this is really interesting. This is actually how I met Eddie, right? So, I met Eddie in 2007 and he'd seen our fMRI paper on this area. And he was seeing it in his ECoG recordings and being quite perplexed by it. Because another thing, which I don't think you talk about in the book, is that it has a very fast auditory response, right? So, this is a little spot in premotor cortex that responds to auditory stimuli within 100 milliseconds. You only see that with ECoG, right? We don't see it with the tools that we use. But Eddie was very struck by that, and was basically trying to figure out what it was, and that's how we became friends and started gradually developing some collaborations. And you mentioned that you guys found evidence that it has sort of auditory representations rather than motor, yeah? And I know that you've seen Eddie's stuff on that too, right?
Where they showed in 2016 that it basically patterns like an auditory area rather than a motor area. And so, you know, isn't it odd? Like, why is there a patch of auditory cortex sitting up in the precentral gyrus? I mean, have you ever thought about it from that perspective? Does that need to be there because of the centrality of these coordinate transforms that your model is all about? Yeah, so that's where thinking as an evolutionary biologist helps a bit. So, if you look in macaques, their dorsal stream projects up to that general region. And if you think about it, it projects up to near the frontal eye fields, which is kind of a misnomer. Frontal eye fields are not just about eye control. They're about orienting, and some people think of it as a general orienting response. So in macaques, and in us pre-language, the function of that area was apparently auditory orienting. So, hearing sounds in space and orienting towards them, controlling head movements, eye movements, attention towards those sounds. So that was essentially the frontal control area that was using lower level auditory information. And of course, if you think about auditory localization, you immediately think about interaural level and time differences to orient in the horizontal plane. But the cues that are useful for orienting in the vertical plane are spectral. So much more rich, much more the kinds of things that you'd find coded in primary auditory cortex. So all of that acoustic information would be useful in a premotor area that was important for orienting. And the idea that I have, which is hard to test, is that we evolved this dorsal area to control voice pitch, where you also need pitch feedback, because that's where the relevant sensory information was coming in. It was the part of the brain that was getting the relevant information. And so that's the logical place, if you're evolving a system, to evolve control of vocal pitch. And apparently that's what happened in birds as well, who have vocal learning for bird song. So yeah, I have thought about it. It has to do with orienting, I believe, and attention. Yeah, I have always thought it was attention related. I mean, I think that the first really clear evidence for the existence of this area was actually from MEG studies, like mismatch negativity studies, where they had this 100 millisecond MMN response. And that was in the late 90s. And that was always kind of ascribed to an attentional thing, as it comes in these oddball paradigms. So yeah, this is really interesting. And this is a great example of how your thinking is always very grounded in this sort of evolutionary perspective. Yeah, yeah, I do try; it helps. Again, it's just another source of constraint. It's really hard to do this work, right? In language neuroscience. And we need all the constraints from all the fields we can get. So, you know, I draw from motor control to psycholinguistics to general neuroscience to evolutionary biology, anything I can get my hands on to help narrow down the search space, essentially. Yeah. Well, I really enjoyed the chapter on the, what do you call it? The dorsal precentral speech area. Yeah, beyond Broca. Yeah. Yeah. Okay. So last topic, very interesting: let's get back to that ventral stream. As I emailed you originally, I wanted to talk about laterality.
But before we get to laterality, to set the stage, can you talk about the series of processing steps that happen in the ventral stream, which Henschen outlined a hundred years ago, and which you and others have further refined? So, what's that sort of processing hierarchy, in your view? Yeah. So, roughly speaking, and it's obviously more complicated than this, the rough sketch is that early on in auditory cortex, say in Heschl's gyrus, there's coding for spectrotemporal information. That is, information that varies spectrally, in frequency, over time. And from that you can derive phonological level representations. And I don't think that there are phoneme representations in auditory cortex. I think there are demisyllables or little chunks of syllables, but that's a separate issue. But something phonological is happening in the lateral superior temporal gyrus and the dorsal bank of the superior temporal sulcus. From there, that phonological level information gets passed on, and you can derive information about the word level. And by word, I mean abstract word; in psycholinguistic terms I'm referring to lemmas, and in linguistic terms I'm talking about morphosyntax. And that's the middle temporal gyrus, the kind of region that is coding word level stuff. And then from there, you're mapping it out into the conceptual semantic system. So: acoustic features, then phonological, then word level, the three main stages that I would say we have a pretty decent handle on. Okay. And there's a lot of evidence for that, right? And you point out that Jeff Binder's 2000 fMRI paper kind of provides this really nice picture of what Henschen had come to 80 years before. So, I think we mostly, I mean, well, actually we don't all agree, because there is dispute over whether it's anteriorly directed or posteriorly directed. But let's just say that, between us, we think it's mostly posteriorly directed. And that's the basic layout of the ventral stream. Now, then there's this other interesting aspect that you proposed in your papers in the first decade of the century, whereby the ventral stream is, to some extent, and you can state the extent for us, bilateral. So, what are your current views on the bilaterality of this series of steps? My current view is that up to the level of phonological processing, most of us are perfectly symmetric in terms of our lateralization, which is a controversial idea, and something that I didn't even believe until recently. So that's just to state the facts. Beyond that, once you get to the level of recognizing words and getting to higher levels, it becomes a little bit more left-dominant. But still, at the level of the ability to recognize words and understand their meaning, that is mostly bilateral in most people. So, symmetric in many. So that's a summary of what I believe now. Okay. So, I don't agree with that. But I know that you like a good argument. So, what's the evidence that leads you to that view? Like, what are the main things that really struck you that led to that position? Yeah. Well, it's very hard to find people with unilateral damage to the superior temporal gyrus who have significant single-word comprehension problems.
And by that, I mean not comprehension problems where there's lots of semantic foils or other sorts of things, but this basic kind of, can you tell the difference between 'bear' and 'pear' when you're pointing to pictures? We did a large-scale study on this and found that only about, well, less than 10% of people in chronic stroke have significant deficits, where significant is below like 85% correct, something like that. So that's the fact. So, you know, if you look at the distribution, most people are at ceiling on tasks like that, even with complete destruction of the superior temporal gyrus on the left. So that's one bit. If it was more lateralized, if it was significantly lateralized, we would expect to see more deficits in more people. So that's one argument. The other argument is, if you damage the systems bilaterally, you end up with word deafness; that's where that severe syndrome comes in. We've also done this in Wada testing and in acute stroke, and we still can't identify severe single-word receptive deficits on these tasks. I'm just interrupting from the future here briefly, because we've forgotten to define Wada, which makes this a bit hard to follow otherwise. So, Wada means the Wada test, and that's named after a Japanese neurosurgeon named Juhn Atsushi Wada, who invented this procedure in the late 40s. This is a procedure where you use a barbiturate like sodium amobarbital to anesthetize one hemisphere of the brain at a time, and this is done prior to surgery to determine lateralization of language or other functions. So, the idea is you transiently take one hemisphere out of the action and you can see what the other hemisphere can do. So, Greg and I are going to talk about some studies that have been done using this procedure to look at language lateralization. Okay, let's get back to it. And then I went the next step for this book, to look back at the Wada data that I had collected. Because overall, if people have left hemisphere anesthesia, you put the left hemisphere to sleep, you ask if they can comprehend words, and then you do it with the right hemisphere, they are worse at comprehending words when the left hemisphere is asleep, on average, compared to the right. So, there's some asymmetry there. I've never denied that. But then if you look at the distribution of that asymmetry, it turns out that more than half of the people are perfectly symmetric, and it's only a smaller fraction of people who are much more left dominant. So, my view is that the left dominance that we see in the population when we do functional imaging studies or group level stroke studies is being driven by very small biases: most people are pretty symmetric, a few people are slightly left dominant, and then a small fraction of people are strongly left dominant. You average all those together and you get a kind of left dominance. Okay. That's what I think is going on at the population and individual level. Okay. Thanks. Yeah, that's a great summary. So, I want to start with things we agree on. I think both of us think about the question the same way. Like, when we're asking what it means to be bilateral, we're asking, what can the right hemisphere do if put to the test, right? Is that the way you see it?
Like, when you say that we're bilateral, or that most people are bilateral, what you mean is that if you take the left hemisphere out of the picture, the right hemisphere can still comprehend words. Yep. Okay. I think another thing that we would agree on is that comprehension is much more bilateral than production, right? You've never made claims about the bilaterality of speech production. You think that the production system is pretty lateralized? Well, that's traditionally been my view, but now I'm questioning it, because we're taking averages and we're collecting biased samples. So, I think it is lateralized, I believe that, and more so than comprehension, but I think we're overestimating the degree of lateralization. Okay. But we think that there's a difference. Yes. And I would agree with that too, because, as you said initially, the fact that word comprehension deficits at the single-word level are quite rare in aphasia, that is just a fact that we have to grapple with. What it means is debatable, but I would say comprehension is much more bilateral than production. I think another thing that we would agree on is that different aspects of comprehension differ in the extent to which they're bilateral. So, for instance, that spectrotemporal stage, we would both agree, is fully bilateral. And then when you get to the phonological word-form stage, you think it starts to be maybe a little bit lateralized in some people, whereas I think it's more than that. And then when you get to the stage of mapping onto meanings, you think, oh yeah, it's a little bit lateralized in some people, maybe more so in others, and I think, well, that's pretty strongly lateralized, in my view. But we both agree that as you go through the hierarchy, it changes, right? It becomes more lateralized. Agreed. Yep. And another thing that I think is interesting, you didn't quite say this in the book, but I'm pretty sure you would think the same as me: as you're going through the hierarchy, like we just said, it gets more lateralized, but when you get central, you actually become bilateral again, right? Because meaning is very bilateral. Yeah, central meaning, like once you get up to concepts, yeah. I honestly never thought too much about it, but obviously, if you're going to get to the phonological form and then get to the word meaning, then you've got to have some bilateral capacity. Although, well, I'm not saying you have to have bilateral capacity. I'm just saying, if you can get through that bottleneck of sound to meaning, you'd then have a semantic representation, which is very bilateral, as we see in stuff like Alex Huth's work, where everything looks really symmetrical. So those are, I think, some of the things that we agree on. And I think what we disagree on is the extent of laterality of that middle part of the system, not the acoustic analysis and not the central representation of meaning, but just that bottleneck. And I guess I want to ask your opinion about some of the things that are, I think, a little bit difficult for your viewpoint. And there are things that are difficult for my viewpoint, right?
So, I think what's difficult for my viewpoint, for instance, is the fact that a few people with aphasia have profound word comprehension deficits. Difficult for your viewpoint is the fact that any people with aphasia have any word comprehension deficits, right? I'm sure you've seen our paper in Brain from a couple of years ago, and a lot of our people with large MCA lesions or large temporoparietal lesions do have pretty profound word comprehension deficits, certainly acutely, and in many cases into one and three months, resolving over time, which is super interesting. But what do you make of those initial single-word comprehension deficits that we do see in, well, I would never say most, I'd say more than half of our people with really substantial left temporoparietal damage? Yeah. Yes, your paper is probably one of my favorites in terms of testing these ideas, which is why I want to get some of that data. I'm working on it, I'm working on it. Yeah. So, if you have giant frontal lesions, I don't know exactly. I mean, diaschisis is an issue, right? Maybe the right hemisphere is suppressed. Maybe there are some frontal things going on, you know, people are just having trouble with selection processes, something that is not about the recognition of the word. I'm not sure. In some ways, I think the giant lesions are a little bit hard to evaluate in the acute stage, because I imagine that a lot of things are going on with these different lesions until things resolve a bit. But you would know that better than me. So, yeah, the acute stuff is a concern, which is why we did the acute study, where we did it more focally. We had a bunch of people with acute stroke, measured their blood flow and their lesions acutely, and then looked at people with substantial damage to the areas we think matter, like the STG, basically it was the superior and middle temporal gyri. If you had a lot of damage there, independent of how big the lesion was and where everything else was, do you find a lot of people with significant deficits? And the answer in that study was that more than half of people had no problem whatsoever, despite damage to the ROI that's supposed to be lateralized. And then you start getting a tail-off of ability in the distribution, where not everyone is above threshold, but again only about 20% have clinically significant deficits at the single-word level. So, yes, it does happen acutely, but it's a smaller fraction, and the distribution, I think, is what's interesting. So, the way I've started thinking about it is, rather than debating about whether that counts as lateralization, we just measure the distribution of these things and say, well, this is the degree of lateralization. The modal tendency is symmetric, or whatever, and then you have 15% of people who are going to be below this level, and that is just an empirical measure of lateralization. Yes. Okay. So, that's actually another thing that we agree on, which I forgot to put in my list of things we agree on: there is very substantial individual variability in the degree of bilaterality. And I think we disagree on what the distribution is. I think we disagree quite significantly, because I would never think that bilaterality would be the norm, but I understand that you do.
So, from my point of view, when I think about those acute data, and yes, they're definitely not inconsistent with what we see, I think that when you talk about damage to a superior temporal ROI, or even the middle temporal, in most people there's going to be a lot of residual left temporal function that's still available to interpret the acoustic information coming from the right hemisphere. Even if left Heschl's is really out of the picture, you need to do an awful lot of damage to take away potential left hemisphere substrates for interpreting that spectrotemporal information that can come in through the right, which we all agree on. And in general, I'd say with functional imaging of people with aphasia, which is basically my thing, there are a few things that strike me again and again. One of them is that in almost everybody, you will be shocked by how much residual functional activity there is, even in people who have huge lesions. So, we'll take somebody who has a huge MCA lesion, and you just think, well, that's just wiped out the language network. You look at their acute DWI and you're like, that person's never going to talk again. Then we bring them in and scan them at one month, three months, and twelve months, that's our goal, and even at one month, or maybe at three, you'll see activation right up to the edge of the lesion. The lesion tries so hard to destroy the language network, but there are always bits of it left, and it's extremely rare to get somebody who literally has no left hemisphere language network, because those people, honestly, don't survive; that would involve an entire hemisphere being destroyed. So, I'm very struck by that. I'm not saying that the right hemisphere isn't playing a role in comprehension or doesn't have any comprehension abilities, but I think that you might be overestimating the lack of left hemisphere substrates in some of these people, even if they've got damage to those left hemisphere regions. Yeah. What do you make of the Wada data then? Yes. Okay. So, the Wada data. Well, I think that in your Wada study, you were testing them for quite a while, because you were doing semantic and phonological distractors, and I don't think that the anesthetized hemisphere was fully out of the picture for much of the period that you were testing them. And from what I've seen in our clinical Wada data, which we've not published, by the way, so this is just anecdote and shouldn't be given full weight until it's, you know, backed up by publication, what we see is pretty complete single-word comprehension loss in most people under anesthesia of the left hemisphere. So, like I said, we haven't published that, but that's what we've seen in our data. But who has published it is Risse et al. from the late 90s, and they basically report that most people have zero single-word ability when the left hemisphere is anesthetized. And I know that's quite different to what you found. But I think that there's a timing difference, and I know that you have reasons not to think that, because you didn't see a correlation between grip strength and comprehension. Yeah. Right. Yeah. I mean, every method has its weaknesses.
And I suppose there could be some, you know, groggy left hemisphere recovery that is underlying that ability. But I mean, I would expect more deficits in more people. But yeah… I think the task is important too. If you look at WAB comprehension, my colleagues in South Carolina, when I tell them that speech perception is bilateral, they laugh at me, because they see patients all the time who have dense single-word comprehension deficits on the WAB. But the WAB is multiple pictures, there are semantic categories, and that kind of biases it toward the left hemisphere more. So, you know, I'd have to look at these other cases in your new study, which sounds really interesting, and see what the task is. Oh, yeah, whether it'll ever be published, I mean, it's clinical data. It's just what we've seen when we looked into it. Okay. Yeah. I wish it was a new study. But yeah, the Risse one is the one that I rely on, but it's an old clinical paper, right, from 30 years ago. They're not really language neuroscience people, and they don't describe their comprehension test in as much detail as we would like. So yeah, conceptually I agree with you. Obviously, if it were true that you could anesthetize a hemisphere and have single-word comprehension proceed normally, that would be unequivocal evidence that the right hemisphere can do it. I'm just not sure that that's quite been empirically shown yet. Okay. Fair enough. And coincidentally, I'm also continuing to study this, because yeah, we haven't convinced you and others yet. So, I'm continuing. But the good thing is it's an empirical question, and honestly, I don't care whether it's 80% lateralized or 20% or zero. I honestly don't care. Yeah. But the fact is that, you know, predicting behavior, aphasic deficits, from lesion location is notoriously impossible. I mean, I'm exaggerating a bit, but it's very hard to do. No, Greg, I don't agree. I don't agree. Well, I'll tell you what, you know what the hardest deficit to predict is: word comprehension. Comprehension, yeah. No, word comprehension. Sentence comprehension is easy to predict. Word comprehension is very hard to predict. So I would definitely grant you that, but I do think that we can do better on a lot of other aspects of language. Yeah. So, what have you guys got going on to look further at this question? We're trying to get funding, who knows if we'll get it, to recruit an unbiased sample of people with strokes, left and right, give them a battery of tests, and see where their lesions are. And then do the reverse. What we do in lesion-symptom mapping, of course, is identify a behavioral deficit and look for where in the brain we can predict or correlate damage with that deficit. I want to do the reverse: given an area that we think is doing something for a function, look to see how many people with damage to that area, like complete damage to that area, actually have the deficit we'd expect. We tried that with apraxia of speech in our data set, first mapping the lesion location of apraxia of speech, which is like, you know, sensorimotor cortex.
And then looking to see how many people who have complete or near-complete damage to that area actually have it. And most do, 70%, but 30% don't. So my thinking is that maybe some of that variance is due to people who have more bilateral organization. Oh, I completely agree. And we see basically the same thing with apraxia of speech. I mean, it depends on what area you're thinking of, like precentral. Yeah. So, I'd say in our data set, of the people who have damage there, probably about half or a bit more than half have apraxia of speech. And then a very significant minority simply don't, including some people with massive lesions where it's really not plausible that it could be an adjacent region or that you didn't hit the central thing. And so I've kind of come to the view that actually there are people in whom speech motor control can be subserved by the right hemisphere. And like you said, 30%, yeah, it could be about that. And I like the idea that you raised before: the answers to these questions are not going to be like, oh, speech perception is bilateral or not. It's going to be: what's the distribution of the capacity of the non-dominant hemisphere across the population? And it's not going to be binary, and it's not going to be the same for every individual. That's what our answers are going to look like. They're going to look like distributions in unbiased samples. So yeah, we're also starting to recruit right hemisphere people now, so hopefully we'll be able to add that to our left hemi people and have some answers that satisfy us all. But yeah, I don't think the right hemisphere people have too many deficits. No, no, I know. We found subtle ones in our sample, as did Kathy and her group. Yeah, but pretty subtle, right? So, I don't think we're going to be shocked by the right hemisphere data, but I agree with you, we need to do the due diligence and recruit those people as well. Yeah. And the people with no aphasia. Oh, absolutely. That's how our lab works: your inclusion is based on your lesion, not on whether you have aphasia or not. You have a left hemisphere lesion, we're coming to see you. Good. And yes, Lily, who is the lead SLP in my lab, is incredibly good at getting people to agree to be in our study. She takes it as a personal affront if anybody ever declines consent. It'll literally happen like once per year, and she'll, you know, lose sleep over it: why didn't they consent? Yeah. So, fascinating questions. And I mean, I think you put forward a really strong hypothesis about the bilaterality of speech perception and comprehension, and I think there is a lot of truth to it. And I think that the fact of the overwhelmingly good word comprehension in most people with aphasia is not one that should be ignored. But yeah, we have to figure out the exact distribution of these phenomena. Yeah, I mean, we need to ask a different question. "Is language left dominant or not?" is the wrong question. Like you were saying, it's: what is the distribution? And that's going to vary by function and vary a lot across individuals. And that's just an empirical question that the field hasn't yet answered definitively. You made a start in your paper, and I think that was great.
And I've tried, but there are gaps in all of this. Yeah, there are absolutely gaps. I'm looking forward to the next 10 years. I think we'll get to the bottom of this in 10 years. Yeah, we will, and we'll be in complete agreement about whatever the answer is. Well, I'll share the results with you, just so I can get the other perspective. Yeah, or Julius or Argye or, you know, just about anyone else. Sometimes I'm like, oh, maybe I'm being too feisty with Greg, and then I'm like, no, you're hearing the same thing from your co-authors. So I'm not too ashamed of it. And like I said, you like a good argument. I could share with our listeners that I was a postdoc in your lab for about a year. And the way that it happened, the way that you invited me to be a postdoc, was that you reviewed one of my papers and largely disagreed with it, and then you just emailed me afterwards like, I reviewed your paper, I said accept it, I don't really agree with you, but would you like to do a postdoc? And I think that's just the right attitude to finding out the truth, right? You find somebody you don't agree with and you work with them. Yeah, totally. You need someone with a different opinion. I mean, I have no stake in the outcome, except that I want to know how it works. If the motor system's involved in speech perception, so be it. Oh yeah. And it should be said that on the question of that paper, you were right. Yeah. Okay. Well, I've taken a lot of your time, and I think we've gone through a lot of the big concepts of the book. I hope that all the listeners will want to read it. I really enjoyed it, and congratulations on writing this. I think it's going to be a real seminal piece of work. Oh, thank you. I really appreciate that, and your opinion means a lot. I always enjoy talking to you and debating you, and it's a good time. Thank you so much. All right. Well, take care. Great to talk, and I look forward to seeing the book in print. Coming soon. All right. Bye. Take care. Bye. Okay. Well, that's it for episode 35. Thank you, Greg, for the very fun conversation. I hope everyone enjoyed it as much as I did. I've linked Greg's book on the podcast website at langneurosci.org/podcast and in the show notes, as well as a few of the other key papers we discussed. Like I said, I really highly recommend this book; anyone who listens to this podcast is going to enjoy it. Thanks also to Marcia Petyt for editing the transcript of this episode, and thank you all for listening. Bye for now. See you next time.