Gresham College Lectures
Brain Computer Interfaces
Our brains are computers. What if we could enhance their processing power? Medical technology now allows for brain signals to be read and translated to reverse paralysis. Deep brain stimulation is also used to treat diseases such as Parkinson’s. Neural interfaces are already improving lives. How do they work? What’s next for our physical connection to digital technology? And what are the implications of having new hardware in our heads?
A lecture by Victoria Baines recorded on 24 October 2023 at Barnard's Inn Hall, London
The transcript and downloadable versions of the lecture are available from the Gresham College website: https://www.gresham.ac.uk/watch-now/brain-computer
Gresham College has offered free public lectures for over 400 years, thanks to the generosity of our supporters. There are currently over 2,500 lectures free to access. We believe that everyone should have the opportunity to learn from some of the greatest minds. To support Gresham's mission, please consider making a donation: https://gresham.ac.uk/support/
Website: https://gresham.ac.uk
Twitter: https://twitter.com/greshamcollege
Facebook: https://facebook.com/greshamcollege
Instagram: https://instagram.com/greshamcollege
If the brain is a computer and computers are brain-like, what happens when you connect the two? Our exploration of this question inevitably entails examining the structure of the brain and the latest developments in neurotechnology. So I think it's only fair to warn you that in this lecture there will be images of brains and footage of a clinical trial involving a non-human animal. But rest assured, I will let you know before we show the latter. Is the brain a computer? Twenty-six years ago, the incomparable Susan Greenfield was the Gresham Professor of Physic, and she stood right here and asked that very question. The focus of her lecture was consciousness, and in the decades since this has become increasingly topical in light of recent developments in machine learning and generative artificial intelligence. Tools like ChatGPT now mimic cognitive ability; they appear to offer at least a semblance of it, however flawed. Now, I am Professor of Information Technology, not Professor of Physic, so for those of you joining us live, I am delighted to say that we have a neurosurgical specialist here who will be able to answer any clinical questions you may have. So please don't be reticent. But even I know that there are obvious material differences. Our brains consist of soft, fatty tissue. Computers like this supercomputer are made of silicon, plastic and metal, encased in more plastic and more metal. But we can at least observe that brains and computers are processors. One might say that the brain is the central processing unit, the CPU, of the central nervous system. An adult human brain has around 86 billion neurons, and when I see neurons firing like this, I immediately think of fiber optic cables transmitting electrical signals and data. I can't help it. It is no coincidence that some of the most significant advances in computing in recent years have sought to emulate the activity of the brain and the nervous system. Artificial neural networks power machine learning algorithms by firing like neurons, transmitting data onwards when an output exceeds a specified threshold (a minimal sketch of such a unit follows below). The main functions of a computer, to receive and process inputs and produce outputs, mirror those of the central nervous system: to receive, process and respond to stimuli.
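To make the neural-network analogy above concrete, here is a minimal, illustrative sketch of a single artificial neuron of the kind alluded to: it sums weighted inputs and only "fires", passing a signal on, when that sum exceeds a threshold. The weights, inputs and threshold are invented for illustration and are not taken from any system described in the lecture.

```python
# A minimal artificial neuron: weighted inputs, a threshold, and an output
# that "fires" only when the weighted sum exceeds the threshold.
# All numbers here are illustrative, not from any real BCI or model.

def artificial_neuron(inputs, weights, threshold):
    """Return 1 (fire) if the weighted sum of inputs exceeds the threshold, else 0."""
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation > threshold else 0

# Example: three input signals with different connection strengths.
inputs = [0.9, 0.2, 0.7]
weights = [0.5, 0.1, 0.8]
print(artificial_neuron(inputs, weights, threshold=0.6))  # -> 1 (the neuron fires)
```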
The story of brain computer interfaces, also known as brain machine interfaces, is the story of what happens when we connect these two types of processor. It may sound like science fiction, but it builds on a body of research and experimentation that goes back at least as far as 1924, when the German psychiatrist Hans Berger first recorded brain activity using electroencephalography, EEG for short. It relies on a fundamental integration of medical science and IT, of hardware embedded in or overlaid on the human body with software running on machines. And while research and development has initially been focused on medical applications, there is considerable interest in the potential of brain computer interfaces not only to repair humans but also to augment them. Some brain computer interfaces, BCIs for short, record brain activity, while others stimulate the brain. Deep brain stimulation is used to treat a range of movement disorders, most notably to control the tremors experienced by those suffering from Parkinson's disease. Electrodes connected to a pulse generator interfere with the brain signals of the patient to disrupt the patterns that generate the tremor. Deep brain stimulation is also used to treat some people whose chronic nerve pain has not responded to other remedies. Responsive neurostimulation describes a closed-loop system in which electrodes can both record and stimulate without external intervention, and this kind of response is particularly applicable to conditions where a fast, real-time stimulus may be required, for instance to interrupt an epileptic seizure. There are also auditory implants, electrode arrays that can be placed in a region of the brainstem called the cochlear nucleus, and these are sometimes used for people who can't have cochlear implants in their inner ear. Implants that record brain activity include microelectrode arrays that penetrate the surface of the cortex, stent electrodes in the blood vessels of the cortex, individual depth electrodes, and grid electrodes placed under the dura, the brain's outer protective layer. But there are also non-invasive ways to record neural activity. Electrode caps can collect EEG data from the scalp, and functional magnetic resonance imaging, fMRI, is widely used to measure changes in blood flow associated with brain activity. An increasing amount of research is focused on possible applications of closed-loop BCIs to treat psychiatric illness, and on the potential for non-invasive interfaces using fMRI to be therapeutic tools in treating disorders such as schizophrenia. Research is ongoing into interfaces that combat the effects of dementia and Alzheimer's disease, for instance to improve neuronal plasticity or to assist patients with cognitive decline to communicate. And given that these are some of the leading causes of death in many countries, this is not only incredibly exciting but incredibly important. For other patients, such as stroke survivors, a restored ability to communicate has the potential to significantly improve quality of life by opening up opportunities, for example, to return to work. There have also been landmark innovations in neuroprosthetics, which is the science of replacing non-functioning parts of the nervous system with devices. This gentleman is Gert-Jan Oskam from the Netherlands, and he was paralyzed in a cycling accident 12 years ago. NeuroRestore is a consortium of Swiss universities that has developed a digital bridge which bypasses the injury to his spinal cord and restores motor control to his legs. This is, I think it's fair to say, an extraordinary mechanism, the benefits of which we can only imagine if we haven't ourselves experienced such an injury. In interviews earlier this year, Gert-Jan summed up the impact on his quality of life really quite effectively: he said he can now stand up and have a beer with his friends, and my impression is that he wasn't being flippant in identifying that as important. Given the complexity of this task, it's not surprising that this digital bridge requires a lot of kit. This visualization shows all the hardware and software components required to make it work. It's not easy to see the details, so let's zoom in and take a closer look. Implants in his sensorimotor cortex capture the brain signals, which are then relayed by the headset to a base station that is connected to a laptop on which the decoding algorithm runs. A machine learning model, in this case a recursive, exponentially weighted Markov-switching multilinear model, enables the decoding algorithm in the processing unit to recognize and accurately interpret the brain activity associated with leg control.
Stimulation control software shares the motor intention output with a portable unit that includes, and this is rather lovely, a Raspberry Pi single-board computer, and in turn sends stimulation signals to an implantable pulse generator in the abdomen that stimulates the electrode array over the spinal cord to activate his leg muscles. The interface makes use of a chain of wireless communication systems including Bluetooth, infrared, and radio frequency. BCIs that read brain signals typically comprise three basic processes, data acquisition, data processing, and device control, so they function rather like the central nervous system. They rely on a combination of translational algorithms, shown on the right of this graphic. Once analog brain signals have been acquired, amplified and converted to digital, they may be subject to pre-processing to clean them and to separate the relevant signals from noise and other artifacts. A process known as dimensionality reduction then reduces the data to a smaller subset of the most relevant channels. Feature extraction converts the raw signals into features of interest, and feature selection pinpoints the subset most relevant to the function concerned. Classification then sorts data into labeled classes or categories of information, and those patterns are then translated into commands and controls for the output application or device (a simplified sketch of such a pipeline appears at the end of this passage). Different frequencies of brainwave are indicative of different states. Delta waves, operating at the lowest frequency, correlate with dreamless sleep, for instance; theta waves with REM sleep; alpha waves with creativity and flow state; beta waves with concentration; and higher-frequency gamma waves with higher brain functions such as memory and attention. And different regions of the brain control different functions. The parietal lobe is vital for sensory perception and integration, but is also involved in intelligence, reasoning, telling left from right, language and reading. The occipital lobe is primarily responsible for visual processing. Then we have the cerebellum, primarily responsible for muscle control including balance and movement; the brainstem, controlling vital functions such as breathing, blood pressure, heart rate, and swallowing; the temporal lobe, playing a key role in auditory processing, production of speech, recognition of language, and the formation of visual memories; and the frontal lobe, with key roles in emotion, personality, judgment and self-control. We also know that the left and right hemispheres of the brain are active in different cognitive processes, with the left broadly associated with logical and linguistic processing, and the right with the more creative, intuitive and imaginative. Interference in or stimulation of a specific region therefore results in an impact on a specific function.
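To illustrate the acquisition-to-classification pipeline and the frequency bands described above, here is a minimal, hypothetical sketch in Python. It generates a synthetic one-channel "EEG" signal, extracts band-power features for the delta, theta, alpha, beta and gamma bands, and applies a toy rule to turn them into a command. The signal, the choice of band power as the feature, and the "focused" rule are all invented for illustration; this is not the algorithm of any system mentioned in the lecture.

```python
import numpy as np

# Frequency bands (Hz) referred to in the lecture (approximate, conventional ranges).
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 80)}

def band_powers(signal, fs):
    """Feature extraction: average spectral power in each named band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    return {name: power[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

def classify(features):
    """Toy classifier: call the state 'focused' if beta power dominates alpha power."""
    return "focused" if features["beta"] > features["alpha"] else "relaxed"

# Synthetic one-second, single-channel signal: a 20 Hz (beta) rhythm plus noise.
fs = 256                          # samples per second
t = np.arange(fs) / fs
eeg = np.sin(2 * np.pi * 20 * t) + 0.3 * np.random.randn(fs)

features = band_powers(eeg, fs)   # acquisition -> feature extraction
command = classify(features)      # classification -> command for an output device
print(command)                    # expected: "focused", since beta dominates
```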
A famous demonstration of this is the case of Phineas P. Gage, an American railroad construction worker. In 1848 he survived an accident in which an iron bar was driven completely through his left frontal lobe. Physically he recovered to a great extent, and in the remainder of his life he worked as a stagecoach driver and, perhaps inevitably for the time, was a living exhibit at Barnum's American Museum. The most marked changes were observed in his personality. According to the doctor who treated him, "the equilibrium or balance between his intellectual faculties and animal propensities seems to have been destroyed. He is fitful, irreverent, indulging at times in the grossest profanity, which was not previously his custom, manifesting but little deference for his fellows, impatient of restraint or advice when it conflicts with his desires, at times pertinaciously obstinate, yet capricious and vacillating." I recognize a little bit of that in myself on a good day. While some of the contemporary depictions of his inappropriate conduct have since been called into question, Gage's experience is nevertheless a striking example of just how localized the effects of a brain injury can be. And in one sense this is very good news for those of us who may be worried that brain computer interfaces will enable someone to take over our thoughts, central nervous system and physical movements all at once. In practical terms, there are much cheaper, tried and tested ways to influence our thoughts and behavior using technology. As we have seen in our earlier explorations of fake news and cybercrime, disinformation, propaganda, online scams and advertising are all flavors of social engineering, manipulation of our thought processes and beliefs with the aim of causing us to take a desired action or to influence others. For the time being at least, humans don't need to physically connect their brains to computers to engage in or experience mind control. But it is perhaps worth noting that in the emerging field of neuropolitics, researchers claim to be able to detect with around 80% accuracy whether a person has liberal or conservative views from their brain activity alone. Now, if you have heard nothing else about brain computer interfaces, you will most likely have heard of Neuralink. This is Elon Musk's neurotechnology company. It has been the focus of considerable media coverage in the last few years and is one of the companies at the forefront of developing invasive brain implants. The N1 implant that you see here is about an inch, just under two and a half centimeters, in diameter. It has a biocompatible enclosure which houses a battery that can be charged wirelessly, a system board of low-power chips that process signals and transmit them to a mobile app, and an array of 1,024 electrodes distributed across 64 threads. The threads are so fine that the company has had to build robots especially to insert the implants. We're going to see two N1 implants in action, and if you don't want to see a monkey in laboratory conditions, I recommend you look away for the next couple of minutes. In 2021, Neuralink released this video featuring Pager, a nine-year-old rhesus macaque. Six weeks earlier he had had a Neuralink implant placed in each side of his brain, and in exchange for a banana smoothie dispensed through a metal tube, he has learned to interact with a computer using a joystick. The implants are sharing data with an iPhone via a Bluetooth connection, and as he moves the cursor to targets presented on screen, more than 2,000 electrodes record the activity of Pager's neurons in the regions of his motor cortex that coordinate hand and arm movements. Neurons in this region modulate their activity with intended hand movements, so some become more active when he moves his hand down, and others when he moves it to the left, for example. The firing rates from these neurons are wirelessly streamed in real time to a laptop, and that activity is fed into a decoder algorithm, which is calibrated by recording Pager's brain activity while he's using the joystick. Once that's calibrated, it's able to predict his hand movements in real time, and the output from the algorithm can be used to move the cursor. So Pager is now controlling that cursor using his brain activity alone; you can see the joystick is disconnected, he's moving it merely out of habit.
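To give a feel for how a decoder of this kind can be calibrated and then used, here is a highly simplified, hypothetical sketch. It fits an ordinary linear least-squares mapping from simulated neuron firing rates to simulated hand velocities recorded during a "joystick" phase, then reuses that mapping to predict cursor movement from firing rates alone. Neuralink has not published its decoder in this form; the linear model, the data and all the numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Calibration phase (joystick connected) --------------------------------
# Simulate 500 time steps of 'ground truth' hand velocity (x, y) and the
# firing rates of 20 motor-cortex neurons noisily tuned to that velocity.
n_steps, n_neurons = 500, 20
hand_velocity = rng.normal(size=(n_steps, 2))             # what the hand did
tuning = rng.normal(size=(2, n_neurons))                   # each neuron's movement preference
firing_rates = hand_velocity @ tuning + 0.5 * rng.normal(size=(n_steps, n_neurons))

# Fit a linear decoder: firing rates -> velocity (least squares).
decoder, *_ = np.linalg.lstsq(firing_rates, hand_velocity, rcond=None)

# --- Use phase (joystick disconnected) --------------------------------------
# New brain activity arrives; the decoder alone drives the cursor.
new_rates = rng.normal(size=(1, 2)) @ tuning                # intended movement only
predicted_velocity = new_rates @ decoder
print(predicted_velocity)   # the cursor would be moved by this (x, y) velocity
```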
Neuralink has stated that the aim of this research and development is to enable a person with paralysis to use a computer or phone more easily. Rather than using a joystick to calibrate the decoder, they would imagine moving their hands to the targets. But as you can see, or as you will shortly be able to see, it could also enable people to play games, in this case the classic arcade game Pong, just by thinking. So Pager is controlling his paddle on the right of the screen by thinking about moving his hand up or down, and he's able to match increasing levels of difficulty. It's worth stating that this isn't the first time an experiment like this has been conducted on rhesus monkeys. Arguably the real breakthrough here has been the compact design of the implant and, in particular, the elimination of the need for an external battery. Academic and commercial researchers alike see potential for non-medical applications of brain computer interfaces. This is Neuralink's mission statement, which is to create a generalized brain interface to restore autonomy to those with unmet medical needs today and unlock human potential tomorrow. And as the trial you've just seen demonstrates, brain computer interfaces may well have a role in the future of gaming. The internet is already awash with tips for gamers to improve their reaction times, and there are headsets already on the market that combine electrode caps with gaming engines and mixed reality hardware so that our brain activity can contribute to making us feel more present in immersive environments. Removing the mechanical components of a gamer's response, as we saw with Pager's disconnected joystick, does raise a further question, though: would someone playing a game entirely with their mind really be quicker than someone whose physical response relies to some extent on reflex? As brain computer interfaces further develop and, importantly, if they become more affordable, we may well have a chance to find out, because the financial rewards of elite gaming are substantial and younger generations with a clear incentive may be rather less squeamish about the idea of having an invasive implant fitted. So we may see some demand. Military and defense institutes have also invested in neurotechnology, not just neuroprosthetics for injured veterans, but potential applications including control of aircraft simulations and drone clusters and the boosting of memory function and mood alteration. A report published by the US government in 2019 envisaged the cyborg soldier of 2050 as a human-machine fusion with ocular enhancements to imaging, sight and situational awareness, auditory enhancement for communication and protection, and direct neural enhancement for two-way data transfer and brain-to-brain communication. Non-invasive interfaces are already used in sports training, monitoring focus and alertness and reactions to various stimuli, and identifying room for improvement. As in gaming, visual reaction time is a key factor in an athlete's speed of response. So then, if athletes were to receive implants capable of augmenting their performance in real time, would this constitute an unfair advantage that would see them excluded from competing with unaugmented peers?
You may recall the debate surrounding Oscar Pistorius's prosthetic legs or Caster Semenya's intersex status, which were considered by some to give them an unfair advantage. In the future, regulatory bodies may have to consider whether athletes whose performance is enhanced by AI-powered implants fall into a new cyborg class of competition. Musical composition is a field in which BCIs promise to democratize technical ability. Developed by a team at the University of Washington, the Encephalophone translates signals captured from the scalp into synthetic piano music. Then there is the Brainy Beats project, developed by researchers at Tilburg University in the Netherlands, which generates musical output from the brain activity of two users wearing surface electrode caps. The sound produced varies according to the degree of synchrony between the two users' brain signals, so basically how much they think alike, how well they get on. Both of these interfaces could widen musical participation to people with no practical instrumental or programming skill. But in order for non-medical brain computer interfaces to benefit large numbers of people, they will need to be affordable and accessible. Otherwise they risk contributing to a widening of existing equity gaps. Seamless connectivity will of course also be key, as wireless communication is a crucial component of most brain computer interfaces. Neuralink uses Bluetooth to communicate with a phone or a tablet, and this prompts us, of course, to consider the stability and security of those connections. Bluetooth is a technology that works at very short range; two devices need to be in close physical proximity to each other to be able to communicate. This dispenses with the need, for instance, for a brain implant to be connected to the open internet. Bluetooth is designed for devices to pair easily with each other, and this can mean that unless a user has restricted its discoverability, a device can be open to pairing and to sharing data with devices other than the one intended. Hardware manufacturers therefore need to be able to ensure that the device pairing is secure. They can also encrypt the data so that even if it is intercepted, it cannot be unscrambled (a small sketch of that idea follows below). However, this doesn't prevent access to the data if the other paired device, say a patient's mobile phone, is compromised.
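As a small illustration of the encryption point above, the sketch below uses symmetric encryption (the Fernet scheme from the widely used Python cryptography library) to scramble a reading before transmission, so that an eavesdropper who intercepts the bytes cannot read them without the key. This is a generic illustration of the principle, not the scheme any implant manufacturer actually uses, and the reading itself is made up.

```python
from cryptography.fernet import Fernet

# A shared secret key would be established securely during device pairing.
key = Fernet.generate_key()
cipher = Fernet(key)

# A made-up reading that a device might send to a paired phone.
reading = b'{"channel": 12, "spike_rate_hz": 41.7}'

token = cipher.encrypt(reading)        # what actually travels over the air
print(token)                           # unintelligible without the key

# Only a device holding the same key can recover the original data.
print(cipher.decrypt(token))           # b'{"channel": 12, "spike_rate_hz": 41.7}'
```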
We also need to bear in mind that there will likely be cases where brain activity data does need to cover larger distances in real time, over the internet, and we have seen this before. In 2008, Duke University and the Japan Science and Technology Agency collaborated on a project in which a monkey in the US made a humanoid robot in Kyoto walk using its brain signals alone. Personal cybersecurity then takes on a whole new level of importance when your physical wellbeing depends on it. Whenever data is stored or processed, we have the considerations of privacy and data protection. Brain signals are some of the most medically sensitive and personal data available. They're both critical to our functioning as individual humans and potentially indicative of our most private thoughts. We might assume that regulation on the processing of medical data applies only to healthcare providers, but increasingly technology companies are collecting and processing this data. Neuralink is a commercial entity; it's not a hospital, it's not a university. And those of us who wear fitness trackers or who use apps that monitor our mental or reproductive health already share medically sensitive personal data with technology providers who are, to some degree, commercially driven. There is a parallel in the pharmaceutical industry, where the service needs of patients and healthcare providers must be balanced against the commercial imperatives of drug developers and distributors. Regulation of medicines and medical devices is overseen by bodies such as the Food and Drug Administration in the US, and the Medicines and Healthcare products Regulatory Agency and the National Institute for Health and Care Excellence in the UK. So when Neuralink wanted to start human trials, it applied for approval from the Food and Drug Administration, the FDA. It received approval just last month and has since issued an open invitation for people to join its patient registry. As cumulative developments in IT have been incorporated into healthcare, regulators have also found themselves in the position of designating software, including artificial intelligence and machine learning, as a medical device. Non-invasive hardware like electrode caps and biosensing circuit boards is available to purchase on the open web, and I was alerted to this by Ted Reese, who kindly assisted me with some of the research for this lecture. So inevitably it prompted me to widen my search, and I found that, lo and behold, yes, you can buy portable EEG machines and surface electrode caps on eBay. Now it's tempting, isn't it, to react that neurotechnology should only be operated by authorized medical practitioners and scientific researchers. But we need to bear in mind that for common health conditions, we will need technological solutions that can be delivered at scale and that do not put additional pressure on healthcare providers. Headsets that collect EEG data to reduce stress and improve sleep, or that use transcranial direct current stimulation to reduce the symptoms of depression, are already commercially available, and they're endorsed by some national healthcare systems. In other fields of medicine, patients have hacked devices like continuous glucose monitors in order to improve them. So we have both precedent and early signals that some people will want to operate their own neurotechnology and perhaps even tinker with it. There are, you will be relieved to hear, existing legal regimes and ethical frameworks that separately cover the practice of medicine, medical devices, data protection, cybersecurity, and artificial intelligence. However, neurotechnology spans all of these. At the technical level, it is clearly unreasonable to expect a neurosurgeon also to be a software engineer, so teams of people with different skill sets and specialisms are required to make a brain computer interface work effectively, safely and securely. But it also requires us to consider bioethics, medical ethics, business ethics and data ethics simultaneously. Thankfully, we can already see some alignment between these. For example, the Hippocratic oath stipulates that medical practitioners abstain from harm, and this translates quite neatly to the principle of non-maleficence in AI ethics regimes. In closed-loop interfaces, where the processing of recorded brain signals may trigger brain stimulation without external human intervention, responsible development and use of artificial intelligence will be crucial.
One of the most widely recognized frameworks for this is a set of principles issued by the Organisation for Economic Co-operation and Development, the OECD. They require that the use of AI is inclusive, sustainable, and focused on human wellbeing; that it's fair, transparent, and explainable; that it's robust, safe and secure; and that those using it are accountable for its proper functioning. Now, I would argue that we want all information technology to be inclusive, human-centered, transparent, robust, and accountable. When technology operates inside our bodies, inside the powerhouse of our central nervous system and the home of our private thoughts, clearly it should be even more so. And consequently, in new European Union legislation, the AI Act, artificial intelligence deployed in medical devices is designated as high risk and is subject to mandatory safety requirements. And as we've seen with data protection and cybersecurity, when the EU makes laws on IT, it tends to be that the rest of the world follows. We will also need to ensure that brain computer interfaces are safely maintained for their entire lifecycles. In the IT world, we've become rather accustomed to the idea of startups with killer products going bust and quickly disappearing without a trace. Operating systems, web browsers, computers and mobile phone brands have come and gone, and, rightly or wrongly, there's a baked-in expectation that consumers will replace their devices every few years. But what if that obsolete technology is attached to you, or even inside you? Well, we have already seen cases of this. In 2017, the London-based company Cyborg Nest launched North Sense, a miniaturized circuit board designed to be permanently attached to the body like a piercing; by vibrating on the wearer's skin, it told them when they were facing magnetic north. The initial release sold out, and this prompted the company to describe it as the birth of a new cyborg community. Well, it has since been discontinued, and when I searched the web recently, I was unable to find any information on how to maintain or even remove it. Spare a thought also for the more than 350 visually impaired people who received Argus retinal implants that gave them a degree of artificial vision. In 2020, these patients discovered that the developer, Second Sight, had run out of funds and that their implants were no longer supported with updates or maintenance. Three years later, the company has been acquired, and there is hope that this will at least mean patients can source replacement parts. Where an implant is simply nice to have and relatively easy to remove, we may conclude that there is minimal harm done, but when people whose quality of life is significantly improved by an implant suddenly experience that functionality going dark, they may be at risk of serious physical or psychological injury. Even if we accept that many commercial entities are not only in it for the money, the precarity of tech startup culture could risk those offers of restored functioning being withdrawn with little or no warning. So ensuring that the digital components of implantable technologies as far as possible use open-source software, open hardware and open standards is one way of increasing the likelihood that devices can be maintained even if a developer ceases to operate or to support them. It will be a consistent theme of this year's lecture series that emerging technologies do not develop in a vacuum. Rather, they converge; they enable and accelerate each other.
And this in turn will make possible scenarios for which there are only the weakest of signals now, like buying something as soon as we've thought about it, or home technology that responds not only to our commands but to our moods. In my next lecture, on the massive internet of things, we're going to explore in more detail how technology will help the world around us be more responsive to us. And as artificial intelligence becomes ever more advanced, there is a possibility that our direct connection to it will take us closer to superintelligence, human intelligence that is artificially amplified. But in some respects that is simply a logical progression of our current enhancement by devices that contain the world's knowledge and tools like generative AI that synthesize that knowledge instantly, however imperfectly. We will, I think, all have our own individual comfort levels for invasive and even non-invasive brain recording and stimulation. I'm someone who shudders at the thought of wearing contact lenses, so I don't imagine I will be among the first to volunteer for an invasive neural implant, but should my health deteriorate, the prospect of reducing pain, restoring a function or improving my quality of life may well override that aversion. For many of us, the question of whether we would have a brain implant for medical reasons is no longer academic: implants are already part of our lives, and the field of neurotechnology moves so quickly. You may have seen in the news just last week that a new non-invasive technique of deep brain stimulation was announced that could be used to treat patients with early-stage Alzheimer's, improving their symptoms of memory impairment. The possibility that we could slow cognitive decline and reduce distress for sufferers and their loved ones is both wondrous and truly tantalizing. As with so much in IT, we're at an inflection point where we need to make really important decisions about how to maximize the benefit of neurotechnology, to ensure people will flourish while minimizing the risks to health and wellbeing. There is still a long way to go. Interpreting the data of 86 billion neurons is no small feat, but with some developers predicting that invasive brain computer interfaces may be available to otherwise healthy people in just 10 to 15 years, we shouldn't be too surprised to find ourselves already on a journey to merge human and machine, cyberspace and meatspace. Thank you very much.

And it's at this point that I'd like to welcome Dr. Aswin Chari from the Department of Neurosurgery at Great Ormond Street Hospital. Aswin, you have very kindly assisted me with some of the material for this lecture, and you're going to join us so that if you have any medical and clinical questions, you can get some answers to them. Thank you very much.

Thank you for having me.

Great. Wonderful lecture, and I get to use convener's privilege and ask the first question. When I was listening to you, I was thinking about co-processors. Do you remember when we used to add a co-processor, and how much I would love a co-processor, something that could do maths really, really quickly? Perhaps that would be really useful for me. And then I realized I've got one: it's called a calculator <laugh>, and I don't need a brain interface to use it. So have people thought about what things absolutely must be connected to the brain, and which things can already be connected using the visual system or the acoustic system?
And for me, I think the behavioral aspect of it from the IT perspective is precisely that: when you ride a bike, you are engaging in human augmentation; when you use a calculator, you are augmenting your intelligence and your processing. Absolutely. So a lot, I think, of what we might envisage as uses of brain computer interfaces are already here: advertising, propaganda, mind control. That, for me, is almost a little bit reductive, to think that we have to make that leap in order to make use of that technology. But I'd love to know, Aswin, what you think about that, whether people ask themselves the ethical question of whether we really need to connect this to the brain, or whether we can perfectly well do it some other way. My IT ethics question, which I always ask for everything in IT, is: just because we can do it, does it mean we should?

Correct. Okay. Right. My friends, I have quite a few questions here for you from Slido. Let's start with this one from Ali Osgore, who is interested in consciousness, which always seems to come up in these contexts. He asks: how could BCI or AI integration affect our scientific understanding of human consciousness? Can BCIs be sophisticated enough to simulate or duplicate consciousness? Now, I'm sorry to lay that one on you, because that's one of the more difficult questions on the list here, but perhaps you could start with that one.

Go for it. Yeah, a hard one to start with. I think any opportunity we can use to understand the brain signals aligned with consciousness, however you want to define that, and I'm not going to go there today, is going to be useful in terms of improving our scientific understanding. And that is the first step to then trying to improve the quality of life of people who may have disorders of consciousness, for example. So I think BCIs provide a great opportunity for us to improve our scientific understanding. I think there's scope to even do that today. So, for example, I'm interested in pediatric neurosurgery and in epilepsy surgery, and as part of the workup for epilepsy, we sometimes put electrodes into children's brains and adults' brains to try and figure out where their seizures are coming from. And in some of these people, the seizures will cause them to be unconscious at the beginning of the seizure, and in some instances they will retain consciousness while having their seizure. And therein is a great opportunity for us to try and understand the differences in those two sets of signals from the brain between these two different situations. So I think it is a great opportunity. There are definitely ethical and regulatory considerations that we need to take into account when we're taking that to the next level of how we can use this for therapeutic benefit, but certainly from the perspective of improving our understanding of how the brain works, and what consciousness is, it's a great opportunity.

Yeah, that's a great answer. I was thinking, you know, when someone mentions consciousness, it's a bit like tossing a hand grenade into the conversation, isn't it <laugh>? But you are actually measuring it in a very practical way, which is a rather delightful thought.
Now, we've got a couple of questions here about the sort of environment in which these developments are taking place. One person, who I think is remaining anonymous, has pointed out that some countries with rather lower ethical standards than the West might be world leaders in this. But perhaps I could pick up one of the questions from Kai here, which is this: since this lecture is taking place in Britain, Britain does have a good reputation in biotechnology, but the assertion in this question is that we are not world leaders in BCIs, brain computer interfaces; the US is making the running. So have you thought about what it is about the US, the UK and other countries that makes them likely to be leaders in this area? Is it ethical standards, or is it the technology platform?

Oh gosh, I'm going to revert to the money question, I'm afraid: cash. You know, every time I do a lecture, some of you who join my lectures regularly will probably be playing Musk bingo <laugh>, and I'm sorry for that. But you know that so much in the IT world is being funded by a small group of individuals who at the moment are based in the US and, to a certain extent, China. That's where we are. We have a fantastic startup culture in the UK. What we have to do now, and this is a wider IT question that's slightly separate from BCIs, is somehow keep that tech here and make sure that these companies aren't acquired by big multinational US- or Chinese-based companies, so that we don't lose that tech and that capability. So I think that's the gap for me from the IT side, but I'd be interested to know what you think, Aswin, about the biotech.

Yeah, I think part of it is having the infrastructure to make all those groups of people that you mentioned come together in the same environment, exchange ideas, and really facilitate them taking their research forwards, at least from the research perspective. And so we have recently started a group called the Impact Neurotechnology Network, which brings together neurosurgeons, neuroscientists, mathematicians and ethicists to try and facilitate that cross-specialist dialogue and hopefully take these projects forward towards patient benefit.

Yeah, good answer. Right. So let's talk a little bit about failure. There's a series of questions here on what happens when things go wrong, and some people have listed some of the things that might go wrong: you might make the wrong connections in the brain, or the interface might fail, or perhaps even the inference you are making from the signals might go wrong. And so some of our audience are interested in how things fail safe, which might be a question for you. How do you make your systems fail safe, and is there any work on knowing when things are going to do something dangerous, monitoring perhaps?

So I can take that question in probably two parts. In terms of making technology fail safe, I think that is impossible. Every operation that we do to try and improve someone's health is associated with risks, and a key part of taking the patient through that journey is the informed consent process.
And so as part of any neurotechnology, I presume there should be very robust regulation around what that informed consent process should be, and therefore people will be taking on an implant, for example, knowing full well that there are certain risks. The second bit of it is about liability, and I think again we require frameworks to adapt to the speed at which neurotechnology is developing. So we are doing this in the sphere of AI-assisted surgery, not in the context of neurotechnology, but there's been a lot of talk of late about, say, a robot-assisted surgery in which the robot malfunctions, or something goes wrong as a result of that robot-assisted surgery. Is it the surgeon that is to blame? Is it the robot technology? Is it the software that is interfacing between the surgeon and the robot? Is it the hospital? Is it the licensing board that has licensed that technology for clinical use? A lot of these questions are unresolved, and it really requires the regulatory framework to adapt quickly and to be agile enough to keep pace with the rate at which the technological field is changing.

And one of the things we've spoken about before is the security aspect of that: who owns the security aspects? The manufacturer absolutely should be responsible for firmware on the device. But if the value chain includes a patient and their mobile phone, as we see with a lot of medical IoT, the medical internet of things, which we may well talk about a bit more in the next lecture, then if you haven't got antivirus software on your phone, or you haven't got your updates on your phone (those of you who attended my cybersecurity lecture last year know that there are basic things you can do), and your phone or tablet gets infected, okay, it's not necessarily going to kill you. It's not about that kind of upstream effect; it's more that the proper reporting of how your device is doing is not necessarily getting back to your healthcare provider. So it's an interruption, a denial of the service in some form. So then we get to a point where we think to ourselves, well, how do we make patients aware that they need to keep their devices safe, without scaring the hell out of them to the point where they don't want those benefits to their life and the improvements in their quality of life? Awareness seems really, really important to me in that value chain, but it's equally important to map out, at every stage of that value chain, who is responsible for the security and safety aspects.

Hmm, interesting. Yeah. So, I mean, if I had a graduate student working on this, I would say, well, you're building a control system, go away and simulate it, write a simulation.

This would make a brilliant PhD, wouldn't it?

Well, this is the question from one of our audience members, which is: how far are we away from simulating the brain, then? <laugh>

I could take this one. Thank you. It's big and complicated. So there are millions and billions of pounds worldwide that have been invested in trying to simulate brain activity.
And I think the closest we've got is from large consortia who have been given the money and the expertise to run massive supercomputers that are trying to simulate brain signals. The Human Brain Project is one big such consortium, and they've got something called the Virtual Brain. This is where they model the brain as something like 200,000 different parts, each of which is governed by a mathematical equation describing how it's running, and each of those parts is connected to all the other parts by specific strengths of connection. You can put your own MRI scan into it and say, this is my personalized virtual brain.

Oh, nice.

And we are starting to use that now in epilepsy surgery, for example, to try and model where we think a person's epileptic seizures are coming from, and therefore where in the brain we might resect, or remove, tissue to try and stop those seizures. And there's an ongoing trial in France at the moment to see whether using this virtual brain technology is better than the current expert care, which uses all the tests that we have to estimate where we think the seizures are coming from.
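As a purely illustrative toy version of the "virtual brain" idea described above, the sketch below simulates a handful of brain regions, each governed by a simple equation and coupled to the others through a connection-strength matrix (in a personalized model, those strengths would be derived from an individual's own scan). The equations, the tiny number of regions and all the parameters are invented for illustration; this is not the Virtual Brain software itself.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy whole-brain model: each "region" has one state variable governed by a
# simple equation, and regions influence each other via a coupling matrix.
n_regions = 6                                          # real models use far more parts
coupling = 0.1 * rng.random((n_regions, n_regions))    # stand-in for scan-derived connectivity
np.fill_diagonal(coupling, 0.0)

state = rng.random(n_regions)                          # initial activity of each region
dt, steps = 0.01, 1000

for _ in range(steps):
    # Each region decays towards rest, is driven by its neighbours, and gets a little noise.
    drive = coupling @ state
    state = state + dt * (-state + np.tanh(drive)) + 0.05 * rng.normal(size=n_regions)

print(np.round(state, 3))   # final activity pattern of the six toy regions
```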
That's fascinating. So rather than having to go into the metaverse myself, I could use the brain of my digital twin to power my avatar and I wouldn't have to do anything. That would be brilliant.

Yeah, that's great. And it leads on to my next question: illegal drugs and smoking and all sorts of things are very problematic for humans, probably not only because of the effect on the brain, because they have physical effects as well. So what are the prospects for a sort of digital heroin? Is that something that we could look forward to <laugh>, or should we fear it?

That's Snow Crash, isn't it? So I'd sort of zone out in the evening...

Well, I'm thinking that what you are suggesting is that I wouldn't even zone out in the evening by connecting my digital heroin to me; I'd make my model zone out in the evening to check it was safe for me to do so.

I mean, I do think I want to go there <laugh>. You're a trained medical professional; you should know better than that. But I think there is a kind of correlate of that at the moment, in that for some people who have a deep brain stimulator, which Professor Baines showed you, for Parkinson's disease for example, if the signals are too far in one particular direction, it can change impulsivity. And so there are some patients with a Parkinson's disease deep brain stimulator who suddenly feel that they are taking a lot more risks than they otherwise would.

Really?

Yeah. They're gambling more; they've squandered their wealth. And so this is real. And so, again, the ethics, or rather the implications, of how to, not necessarily apportion blame, but work out who is responsible: if that person then goes and commits a crime, is that their responsibility? Their stimulator's responsibility? Their clinician's responsibility? Who knows?

There's a really interesting corollary in the online safety world, where of course there's a lot of talk and concern around the dopamine hits of using social media and getting likes, and all of that: there is a kind of neural response, and almost a hormonal response, to that kind of interactivity. So yes, I think there's certainly a lot of concern on the mental health side as to the effects of our connection to technology, even if we are able to put our screens down, that withdrawal. We talk about digital detoxes, don't we, when we don't use technology.

Yeah. In the future we'll have to unplug ourselves...

Exactly, from the web.

Okay. Well, I see time is pressing upon us. I would say, I rather regret taking us into some rather dark places <laugh> with the questions. So if you're watching this and you are affected by any of the issues that you've seen, there's not much I can do about it: turn off the TV now and go for a walk.

Yeah. And I have to say, in all of the research that I did for this talk, and in speaking to Aswin and speaking to Ted, overwhelmingly, if you ask someone who is suffering from Parkinson's, or who has chronic pain, these are incredible things. So I don't want to detract from that. But I do have a security background, so I am that person who's always going to say, ooh, careful now, have you thought about the safety implications? That's just my instinct, I'm afraid.

Such an interesting talk, really fascinating, and we could have gone on for another hour, I think, with the questions I've got here. A special thanks to you too, not only for answering questions, but for answering questions from someone else's lecture. Amazing. Thank you. Fantastic job. Thank you very much.

Thank you. Thank you.