The London Lecture Series
Apocalyptic Technology: Naturalism and Nihilism, Mazviita Chirimuuta
This lecture in the series Philosophy in Retrospect and Prospect is presented by Dr Mazviita Chirimuuta. Science assumes the universe is comprehensible to the human mind. AI tech casts doubt on this. So, should scientists give up on their goal?
Part of TRIP's Centenary Lectures 2025-6: Philosophy in Retrospect and Prospect.
Good evening, everybody, and welcome to this week's London lecture in this year's centenary series of London lectures brought to you by the Royal Institute of Philosophy. It's a great pleasure to introduce our speaker this evening, Mazviita Chirimuuta. Mazviita has a PhD from Cambridge, not in philosophy, I believe, but in vision science. And after several postdocs, she got a tenure track position at Pittsburgh before moving to Edinburgh, where she's now a senior lecturer in the philosophy department. She's published two books, one on colour, and more recently, in 2024, her book The Brain Abstracted, which not only won the Lakatos Award last year, but also the Royal Institute of Philosophy's very own Nayef Al-Rodhan Prize for Transdisciplinary Philosophy. So great pleasure to have you with us this evening, Mazviita. Your talk is called Apocalyptic Technology. That's right. Over to you.
Thank you so much. It's a real pleasure to be here. Yes, it's called Apocalyptic Technology, Naturalism and Nihilism. Yeah, so there's so much chatter about AI at the moment. I've been part of these conversations, but I'm also getting a little bit tired of them, and I want us to be talking about other things, not only AI. So in preparing this lecture, I was conscious of wanting to broaden out the conversation to some of the deeper currents that I think are behind our current response to the technology, focusing in particular on how it relates to philosophy of science. And as I was saying to Edward just now, this paper was actually inspired by a conversation that we had the last time we met at the Royal Institute of Philosophy back in November 2024. So yeah, I'm very happy to be here and I'll begin. So this isn't about technology bringing the world to an end. Apocalyptic is meant in the original sense of bringing about a revelation, making apparent something that lay concealed. The revelation in question concerns the scope of scientists' understanding of the natural world. It's been common to assume that there is no limit to scientific understanding, that in principle, if not in practice, everything can be figured out. I aim to show how a particular technology, deep learning AI, used as a tool for scientific modelling, reveals certain limits to the intelligibility of the natural world. What this also reveals is the extent to which scientific understanding has rested on an article of faith, belief in the scope of human understanding. Amongst scientists, this belief is normally left implicit, rarely articulated, but still a precondition for their research. I'll say how the assumption played a role in my own experience as a young neuroscientist, and how the later introduction of AI tools drew my attention to it. This is also about naturalism, and naturalism is the denial of anything non-physical or supernatural standing beyond scientific explanation.
Naturalism is also belief in the intelligibility of the universe. The assertion that nothing is fundamentally mysterious. As proclaimed on the desk sign of a neuroscientist interviewed in the journal Science, everything is figure-outable. If we call this belief into question, we then have the question of why scientists assumed this. To the extent that their assumption was made explicit, what justification was given for it? We'll see that back in the 17th century, during the so-called scientific revolution, the justification was theological. This brings us to ask, without God, without the theological justification, can we assume that the human mind has the capacity to make sensible the various immensely complicated workings of nature? Can we reasonably be naturalists? This is where nihilism comes in. Without belief in the ultimate intelligibility of the universe, does science devolve into engineering, the exercise of power over nature without the ideal of understanding it? We risk a crisis of faith for science itself, and I will show how Friedrich Nietzsche, self-declared prophet of our nihilistic times, diagnosed this tendency long before the arrival of today's technology. Drawing on examples in biology and neuroscience, I will then discuss how scientists themselves are reacting to the current predicament. What's most surprising is that Nietzsche anticipated two trends in the life sciences today, which are each in different ways responses to the problem of nihilism. But that is by no means to endorse them. In the closing part of the lecture, I'll present some ideas about the place of philosophy in this discussion. Though currently my academic home is a department of philosophy, I started out my career as a scientist in a lab. 
The research that led to my PhD was a combination of psychophysics, that is, experiments testing visual thresholds, and computational modelling simulating the responses of neurons in the early visual system, primary visual cortex, which is this orange blob on the slide. And we wanted to see if our models could predict the thresholds we measured. My supervisor, David Tolhurst, had done seminal work in the 1970s recording directly from these neurons, seeking to establish the hypothesis that the basic operation of cells in this area is to act as a linear filter of messages originally landing on the retina and passed up by the optic nerve to the brain. If the cells are linear, it means that you can predict how they'll respond to any image or pattern of light sent to the eye just on the basis of how they respond to very simple artificial stimuli like these ones here. If the linearity hypothesis were correct, it would mean also that the behavior of these cells could be summarized by what is mathematically a very simple equation with just a few variables. Already in the 1990s, it was clear that there were discrepancies between the data and the linearity hypothesis. In particular, it was found that the activity of any one neuron was influenced by the responses of other neurons to an unexpected degree. However, my supervisor and the other researchers were aiming to make a few tweaks and additions to the original linear model that would account for these discrepancies and ultimately allow us to predict how those neurons would respond to realistic images, just the kind of images that you would see as you look around a room like this. I was in the lab in the early 2000s and we had some degree of success with these adjustments. We published three articles on this, but I could tell that David was never really happy with the results and he kept tinkering with them.
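For readers who like to see the idea in code, the linearity hypothesis can be reduced to a toy sketch. This is an illustration only: the receptive field and stimuli below are random made-up arrays, not the lab's actual models or data. The point is that if a model neuron's response is the inner product of a fixed filter with the stimulus, superposition holds, and so responses to a handful of simple stimuli fix the response to any image.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "receptive field": a fixed linear filter over an 8x8 image patch.
# (Made-up values; real V1 models use Gabor-like filters.)
receptive_field = rng.normal(size=(8, 8))

def linear_response(stimulus):
    """Linear-filter model: response = inner product of filter and stimulus."""
    return float(np.sum(receptive_field * stimulus))

# Two simple "artificial stimuli" and their superposition.
grating_a = rng.normal(size=(8, 8))
grating_b = rng.normal(size=(8, 8))

# Linearity (superposition): the response to a+b is the sum of the
# responses to a and b, which is why measuring responses to simple
# stimuli would suffice to predict the response to any image.
combined = linear_response(grating_a + grating_b)
separate = linear_response(grating_a) + linear_response(grating_b)
assert np.isclose(combined, separate)
```

The whole attraction of the hypothesis is visible in the last three lines: a model this simple is exhaustively characterised by a few measurements, which is exactly what fails once the responses of other neurons intrude.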
During my time as a student, I remember thinking how curious it was that we were attempting to encapsulate the workings of these neurons in models which were, to be honest, trivially simple. From reading the literature on visual cortex, it struck me that there was a great deal of intricacy here that was hard to pin down. And my supervisor had been studying these neurons his whole career and he knew tons about them, but he still wasn't happy that he'd figured them out. So if one cell is such a challenge, what hope would we have for the entire brain? And yet, if the linear hypothesis were true, there was hope, because in the face of all this apparent complexity, there would be one straightforward mathematical principle at play. So in our lab, belief in the intelligibility of nature, the belief in the underlying simplicity of the brain, was critically important. Otherwise, there would be no reason for the expectation that these kinds of linear models would work. About 15 years later, researching for a new book, I revisited primary visual cortex to see how the current predictive models were faring. Attending conferences, I noticed that many labs had introduced a kind of modelling based on deep convolutional neural networks. This is a diagram of them, and these are models used in face recognition and other applications. But neuroscientists were using them to predict the tough cases where those cells were responding to realistic stimuli. And the accuracy of these predictions was remarkably better than what we'd achieved before. At the same time, these models were not based on a linear hypothesis or any hypothesis about the kind of function that these cells were performing. And so the theoretical payoff was unclear. They operated more like a black box, an oracle, that spat out the right answer without explaining to you what was going on. If your motivation as a scientist is to figure these cells out, this is unhelpful. 
I emailed David about this and he shared the sentiment, yes, the new methods work, but it means giving up on science, he said. There appeared to be a trade-off. Either build a simple model that makes mathematical sense and gives you understanding of how these neurons respond, but only in a very narrow range of cases, or build an immensely more complex, mathematically opaque model that works accurately across the board, but doesn't give you understanding of the responses. The goal of science, as I'd grown up with it, was to have both understanding and the ability to make predictions. If you gave up on one of these, gave up on understanding, then what you're doing isn't really science, it's engineering. That was my reaction, and David's. In this way, I came to reflect on the significance of these developments, asking what these new AI modelling technologies had revealed. For one thing, they suggested that not everything is figure-outable. There may be some things in nature, like a single cell of a living brain, whose workings are so involved that to predict their responses, scientists need to use AI and build models of such mathematical complexity that those models no longer make sense to their users. This would mean that science, with its central goal of understanding the natural world, has some limits. Because of the bounds of how much complexity one person or one group of people can make sense of, no matter how intellectually gifted, some questions about the natural world might not have intelligible answers. The other thing it revealed is the importance of the motivating assumption that we'd had in the lab back then that the workings of nature were inherently simple enough for us to understand. Why should people, especially scientists, come to think this? It's not obvious when you just look at the world that this should be the case. Yet naturalism depends on it. And this is where God comes into the picture.
The word simple derives from the Latin simplex, onefold. Complex means manyfold. Unity, harmony, and simplicity are characteristics attributed to the divine across religious traditions. Could it be that in past times when modern science was in development, a religious cultural background made it natural to associate simplicity not only with God, but with the world created by God and investigated by scientists? I think that account is basically correct. Numerous examples from the history of science can be given to illustrate it. I'll mention Galileo because his case is more often brought up to point out the hostility of religion to science. In a famous passage, he writes, philosophy, by which he means physics, is written in this grand book, The Universe, which stands continually open to our gaze. But the book cannot be understood unless one first learns to comprehend the language and read the letters in which it's composed. It is written in the language of mathematics, without which it is not humanly possible to understand a single word of it. Without these, one wanders around in a dark labyrinth. The metaphor of the Grand Book draws an obvious comparison with the other book, the scriptures, that God provided so that we should know him. In Galileo's Dialogue Concerning the Two Chief World Systems, we find a startling claim about the power of mathematical reasoning, that in its capacity to attain necessary truths, it is essentially like God's intellect, though limited in scope because we are finite creatures and God is an infinite being. With regard to those few mathematical truths which the human intellect does understand, Galileo writes, I believe that its knowledge equals the divine in objective certainty, for here it succeeds in understanding necessity, beyond which there can be no greater sureness.
A thing I should point out here is that mathematical representations, like the fundamental laws in physics, or the linear models I mentioned earlier, are the chief means by which modern science has achieved a simplification of nature. Mathematical representation is an abstraction away from the imprecise, hard-to-pin-down, shifting character of things as we encounter them with our senses. The abstraction yields a pared-down object which only bears exact measurable properties. Use of maths in science is indisputably convenient, but that by itself does not support the commonly held claim, like Galileo's, that a mathematical representation is more true, closer to underlying reality, than a sensory one. Galileo's theology placed his claim on a sure foundation. Not only did God put mathematical relations into the architectural plan of the universe, but in addition, we are endowed with a godlike faculty for apprehending those relations. In a book, The Mind of God and the Works of Man, Edward Craig characterizes the 17th century as an epoch which deified reason figuratively and almost literally, as he puts it. He argues that this was bound to have an impact on ontology, how people took things fundamentally to be, leading to the view, as he writes, that reality was an appropriate object for man's cognitive powers, that the world, in other words, was a thoroughly intelligible place. Nothing would be more natural, therefore, than to hope and expect that the universe was, in principle, intellectually transparent. End quote. So it's well recognized that Isaac Newton was heavily preoccupied with theological as well as alchemical pursuits. He asserts that, as he says, nature is exceedingly simple and conformable to herself. This notion of nature being conformable to herself just means that what's discovered in one place and time or scale is likely to be found similarly elsewhere.
Newton uses this to argue that the rules of motion observed at larger scales are likely to hold for small things too. Thus we see that regularity and uniformity are important components of the idea that nature is inherently simple. Some degree of regularity is in fact a condition for the possibility of scientific knowledge. If things were always changing up on us, we couldn't rely on inductive reasoning, we couldn't use past experience as a guide to the present. The emerging scientists of the 17th century were confident that God would not have put us in that predicament. But what if God goes out of the picture? It's not coincidental that worries about induction were first voiced in the 18th century by David Hume, an atheist. How do we know that the sun will rise tomorrow? What grounds our confidence in the uniformity of nature? Hume, lacking the divine insurance policy, was the first to wonder. The term naturalism, like the term scientist, was coined by the Victorians, and it strikes me that those thinkers of the 19th century understood our current situation better than we do ourselves. It was then, especially following the publication of The Origin of Species in 1859, that an antagonism between science and religion began to form. It was then, and not in the Age of Galileo, that science came to be implicated in a crisis of Christian faith. What people noticed then, but seemed to forget later, was that the absence of God would make things tricky for science itself. Matthew Stanley describes how in Victorian Britain, and I quote him here, many scientists and philosophers concluded that uniformity only makes sense in a theistic world. Without an ordering force, i.e. God, one would expect the universe to be a mishmash of chaotic events. The only guarantee for constancy of the laws of nature was the intent of the lawgiver. And he mentions some people who wrote, on no other assumption can science proceed at all. End quote.
Along with Hume, Friedrich Nietzsche is one of the most influential naturalists in the history of philosophy. They each in different ways sought to rid their theories of knowledge, of existence, and of morality of the theistic principles that had structured philosophies of the past. Nietzsche was never one to leave a drastic implication unmentioned. He observed how much the ethos of science, the pursuit of truth for truth's sake, no matter how upsetting, was indebted to religion, specifically Christianity. Even we knowers of today, he writes, we godless anti-metaphysicians, still take our fire, too, from the flame lit by the thousand-year-old faith, the Christian faith, which was also Plato's faith, that God is truth, that truth is divine. When Galileo and Newton justified their endeavours by claiming that there is a hidden, simple, eternal, and mathematically intelligible order to the universe, their theology was of this sort. We have seen that scientists carried on working on the assumption that everything is figure-outable, and that in turn rests on the idea that things are inherently simple enough for us to understand, even when they look horribly complicated and intractable. Whether the underlying simplicity consists of platonic forms or mathematical laws of nature, it's hard to see how science, the quest for understanding the natural world, survives without some such belief in an order and simplicity that transcends the manifest complexity of the world. Nietzsche didn't think it would. For Nietzsche, the death of God was no less the death of science. In his rejection of Christianity, Nietzsche was equally at pains to rid himself of the metaphysics that had motivated and shaped the sciences through belief in the intelligibility of the cosmos and the possibility of obtaining simple, eternal truths. All is complex, all is flux, including knowledge. Knowledge is power.
Instead of science, we have technology, or rather, science reveals itself to have been technology all along. One feature of Nietzsche's account that I should mention is that the supposedly otherworldly ideals of Christianity and Platonism are unmasked as contorted power plays. Will to power is the all-pervading force in human life and all life. Science could never have been the disinterested pursuit of truth. It only ever was a means for the acquisition of control. In other words, technology. So that was his take. So we've just seen that to motivate the quest for scientific understanding, one needs the assumption that everything is figure-outable. And that in turn originated with the idea of an alignment between a universe created by God and the godlike powers of our minds, between our reason and the suitability of the world to rational explanation. If, as naturalists hold, we acquired our minds through Darwinian processes, there aren't grounds to expect by default that the world we evolved in should be intelligible to us. We don't need genuine truth in order to survive, only approximations and sometimes distortions. And so science seems quite insecure. Science, that is, as distinct from engineering. Science being the activity that yields accurate knowledge and understanding of our universe, not just technology, the clever tricks that make stuff happen and help us survive. Another point to recognise here is that in the absence of God, a divine being, to confer meaning and purpose on human existence, the quest for scientific understanding has itself been an important source of value. Think of Richard Dawkins, Sean Carroll, and other popularisers of science. A common thought is that even if this universe is by itself without purpose or value, the very fact that we humans exist and can collectively come to understand this vast universe, well, that in itself is a reason for living.
And it gives humanity back a bit of the dignity that we used to take for granted. Lower than the angels, but above the apes. As Sean Carroll writes, we are small, the universe is big, and it doesn't come with an instruction manual. We have nevertheless figured out an amazing amount about how things actually work. So the nihilistic threat is that we cannot have even that. Not only is the universe meaningless, but we are condemned to flail around ignorantly within it. The Book of Nature is authorless and it is closed to us. That's the situation that Nietzsche prophesied 150 years ago. But he himself was not a nihilist. He set himself the task as a philosopher to overcome nihilism. His proposals were controversial, to say the least. In the absence of absolute values conferred by God and religion, it's left for us to inaugurate our own values. But most people are not up to the task. Only superior beings, the Übermensch, will be really capable. Nietzsche's ideas, interestingly, were informed by a theory of what life is. Life is implicitly normative or value setting. For a living being, things are not neutral, they are either good or bad, relative to that life form. Life is also driven to domination, he thought, to seek and manifest itself as strength. Life as such is will to power, he wrote. Compare this now with a notorious 2008 essay in Wired magazine, The End of Theory, in which the editor Chris Anderson declared the scientific method obsolete. Models, he wrote, are systems visualized in the minds of scientists. The models are then tested, and experiments confirm or falsify theoretical models of how the world works. He continues: we can stop looking for explanatory models. We can analyse the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot. Anderson has certainly been criticized for overstating the case for science as data mining without the need for scientific expertise. That said, he hit upon a trend that would occur in some form.
Not that computer engineers would replace scientists, but that scientists would take up the methods of engineers. An example of this tendency comes from Jim DiCarlo's lab at MIT. This group was amongst those leading the introduction of deep learning in visual neuroscience, as I mentioned earlier. In one study from 2019, they built an AI model of neuronal responses in area V4 of the visual cortex and used this model to devise stimuli that would maximally excite those neurons, driving them to produce bigger responses than when exposed to any other stimulus. Two neuroscientists commenting on the DiCarlo project make some interesting observations, comparing the aims of this study with traditional work in which the goal of understanding required the reduction of a system into parts that are simpler and hence more intelligible. Baptiste and Kording write that the ANN approach, as applied by Bashivan et al., advances a different conception of understanding. We understand a system when we can control it, that's the DiCarlo view. Most of us understand our cars in this latter sense, predictability and controllability, without really understanding them in the former, old reductive sense. The authors, by showing that they can steer neural populations to pre-selected states, have demonstrated that they indeed understand the visual system in the sense of controllability. We wonder, is the reductive notion of understanding even possible for a system as complex as the brain? Or might the I-can-control-it notion of understanding actually be a more effective and relevant way forward for neuroscience? The upshot is that the traditional scientific aim of understanding nature has been replaced by the engineer's aim of building things that give us more control. More recently, scientists have taken up the idea of foundation models from the tech sector. This is the kind of modelling behind large language models such as ChatGPT, one that requires exhaustive amounts of data.
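To see why controllability follows so directly from a predictive model, here is a toy sketch of stimulus synthesis. It is illustrative only: it uses a made-up linear model neuron rather than the deep network of the actual 2019 study, and every variable in it is an assumption of mine. The technique, though, is the general one: gradient ascent on the input image, under a unit-norm constraint, to find the stimulus that maximally drives the model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy differentiable "model neuron": a purely linear response.
# (Made up for illustration; a deep network would play this role in practice.)
w = rng.normal(size=(8, 8))

def response(img):
    return float(np.sum(w * img))

def unit(x):
    return x / np.linalg.norm(x)

# Gradient ascent on the *input*, keeping the stimulus at unit norm
# (loosely analogous to a fixed-contrast constraint). For a linear
# model the gradient of the response with respect to the image is w.
img = unit(rng.normal(size=(8, 8)))
for _ in range(200):
    img = img + 0.1 * w          # step uphill on the predicted response
    img = unit(img)              # project back onto the unit sphere

# The synthesized stimulus drives the model neuron harder than any of
# 100 random unit-norm stimuli.
random_best = max(response(unit(rng.normal(size=(8, 8)))) for _ in range(100))
assert response(img) > random_best
```

For this linear toy the optimiser simply rediscovers the filter itself; the striking thing about the real experiments is that the same recipe, applied to an opaque network, produces effective stimuli without anyone being able to say why they work.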
Foundation models have been used to predict neuronal responses to visual stimuli, to emulate human responses in psychology experiments, and to predict disease onset and mortality from data recorded while people sleep. These methods have been shown to outperform most of the traditional models in terms of predictive accuracy. So scientists are using AI tools to perform tasks more akin to engineering, enhancing prediction and control while relinquishing the traditional scientific goal of understanding. But what has this trend got to do with Nietzsche and nihilism? Well, Nietzsche wrote: it is a measure of the degree of strength of will to what extent one can do without meaning in things, to what extent one can endure to live in a meaningless world because one organises a small portion of it oneself. So technology is a nice compensation for living in a meaningless world. One response to the nihilistic predicament is to go full throttle with the technology, to reshape things according to our wishes because, as we are living beings, our wishes and preferences are themselves locations of value. And without traditional norms about what's good and what's natural, the brakes on how we might seek to alter the natural world are released. Technology can be accelerated. Even before the arrival of AI, an important engineering trend in the life sciences was synthetic biology. The task is to engineer novel life forms. One approach is via the production of chimerical organisms, blending the genomes of unrelated species. Michael Levin is a well-known protagonist in bioengineering, and he makes clear that one of the attractions of chimerism is that it breaks down old ideas about what is biologically normal and possible. As he puts it, we can move past the parochial contingencies of our familiar forms. And of course, with synthetic biology, the impulse to alter the course of nature can be directed back towards the human body.
Mortality is the natural state of things, but from the point of view of the living thing, death is bad. Why not seek immortality? Levin has spoken about this possibility, and his research has been embraced by transhumanists. Along with longevity and immortality, transhumanists, often themselves elites in the tech industry, promote exploration of all kinds of augmentation of the body, including cognitive enhancement and hybridization of humans with machines. In a Nietzschean vein, the ultimate goal is not really to add augmentations to standard humans, but to bring on the evolution of humanity beyond its current form. This is like the vision of the Übermensch, an upgraded version of humanity to supersede the original one. Man is a rope tied between beast and Übermensch. What can be loved in man is that he is an overture and a going under. Thus Spoke Zarathustra. AI, I should mention, is often described as the next step in the evolution of intelligence beyond its embodiment in life, a new post-biological intelligence. Amongst biologists, however, these kinds of claims have not gone unchallenged. While the previous group, the accelerationists, assimilate living things to machines, intelligence to computation, this other group, which we might call organicist, argues that there is a fundamental difference between life processes and non-living mechanisms. For example, research on immortality rests on the conception of organisms as machines. Machines don't die, they just break down. And in principle, you can always replace their parts and get them running again. If our bodies are like that, we don't have to die. But if we're not fleshy machines, if our bodies are on an inexorable trajectory from birth to death, there's no point pursuing the impossible goal of immortality. Now we saw that accelerated bioengineering makes sense as a response to a nihilistic predicament. There are no norms or values beyond us, only what we will.
And technology unshackled from the scientific goal of understanding is a manifestation of the will. It is a pure ability to exert changes on things. This approach tells people, or really just billionaires, that their wildest dreams are on the horizon. It seems a bit demoralizing then for another group of biologists to come along and pour cold water on these hopes and aspirations, but interestingly, we'll see that the organicist perspective itself happens to be a response to a nihilistic predicament with many Nietzschean resonances of its own. According to this perspective, the central difference between organisms and machines is that organisms are autopoietic, which means self-producing. Living cells are not put together out of their raw materials by any external maker, like a carpenter making a table. Even an embryo in its mother's womb is somehow the author of its own existence in a way that a machine, no matter how sophisticated, is not. Life is unstable and precarious and is responsible for supplying itself with what it needs to persist and maintain itself. There are things that are inherently beneficial to a living being and inherently harmful. In a universe without value, without good or bad, before the existence of life, value can actually spring into existence at the origin of life. The autopoietic theory puts the Nietzschean idea that life inaugurates its own values on a contemporary biological footing. In a book which introduces recent research in biology to a general audience, Philip Ball relates, life can be considered to be a meaning generator. Living things are those entities capable of attributing value in their environment and thereby finding a point to the universe. This is what philosophers call a naturalization of value. Whereas the old philosophical tradition supposed the origin of value to be transcendent, certainly more elevated than to be within the reach of scientific investigation, naturalization brings things down to Earth.
Given the prevalent skepticism about transcendent values in our secular age, this helps to address the nihilistic worry that there simply are no values, that everything is valueless. Similarly, the organicists offer a naturalization of agency. A familiar worry is that science supposedly tells us that everything that occurs, actions of people included, is determined by the laws of physics. If this is so, there is no genuine choice, and human beings are not agents but automatons, following the dictates of their hidden programming as much as any robot does. A recent book by neurogeneticist Kevin Mitchell provides a compelling alternative to this dismal picture. He writes, all living things have some degree of agency. That is their defining characteristic, what sets them apart from the mostly lifeless, passive universe. Living beings are autonomous entities, imbued with purpose and able to act on their own terms, not yoked to every cause in their environment, but causes in their own right. Autonomous is the key concept here. We use the word informally just to mean free, independent, but more technically it means self-legislating, setting your own rules. The idea is that as soon as a life form establishes norms of good and bad for itself, it also establishes rules, i.e., that this beneficial thing is to be pursued. And what's more, with self-generation, autopoietic beings instigate a partition between self and not-self. The most minimal form of this partition is the membrane of a single-celled organism, such as a bacterium. It implies no consciousness or sense of self, but it means that there is a meaningful difference between causes that are externally imposed on the organism from without and causes that originate from within. Free actions are ones that involve this internal locus of causation. Agency is, according to Mitchell, the defining characteristic of life. As he writes, the story of agency is the story of life itself.
Again, the Nietzschean theme comes up with this emphasis on willing. Life is agency, is what wills things for itself and seeks to make them come about. Life has a freedom unseen in non-living machines, according to this view. Biologists of this school have also offered grounds for thinking that AI in non-living machines will not be conscious or genuinely intelligent. It's argued that normativity, autonomy, and agency are essential foundations for cognition. This is because in an endlessly open and rich world, with countless stimuli that could be registered and data that could be processed, a cognitive system needs to decide what to cognize. For living things, some stimuli are relevant to survival and flourishing, and others are not. Biological normativity solves what is called the frame problem in philosophy of AI, the problem of discriminating relevant from irrelevant amongst all the different real-world events that could possibly be tallied. Current AI does not actually solve the frame problem because it's engineered by humans and relevance is determined by our wants. My purpose here is not to adjudicate between these two very different approaches within biology. The thing to note is that in addressing the question of whether there is any meaningful difference between organisms and machines, between biological intelligence and computer processing, everything turns on adherence to contrasting conceptions of life, whether mechanistic and endlessly open to engineering, or agential and endlessly liable to reorganise itself, setting limits to external control. As different as these conceptions are, they both lead back to Nietzsche and his attempt to be a naturalist without nihilism. While this may be more overt in the transhumanist corners of biology, if anything, the Nietzschean credentials of organicist biology are more convincing. 
Underlying transhumanist dreams of immortality and post-biological intelligence, we can detect the old yearning for transcendence that Nietzsche detested. What else would uploading your mind be but a secular replacement for ascent into heaven? Dreams of silicon replacement reveal an inability to accept perishable bodily existence. The opposing position, which grounds agency and intelligence in our mortality, is, in a way, more life-affirming. So, final part. When invited to present here, I was asked to say something about the past and future of philosophy. I hope to have shown how the philosophy of the past is truly relevant to our most current debates over life and technology. In a way, we have not escaped Nietzsche's thought world, as uncomfortable as that may be. What of the future? We've seen that scientists themselves are addressing ancient philosophical questions. What is life? What is mind? What are values? Is there free will? One might wonder if there's anything left for a philosopher to do. As I told you at the start, my original training was in science. I crossed the border into philosophy later on because I was more concerned with the problem of how to interpret the results of neuroscience than with actually doing the science myself. Scientific results are often presented to the public as if their philosophical significance were obvious and unambiguous. The neuroscience of agency is a great example. This experiment proves that free will is an illusion. The headlines write themselves. But as Henning Schmidgen, a historian and philosopher of science, observes, the connection between will and knowledge, between power and reason, is by no means a merely natural-historical one, but also a cultural-historical one. The debates around free will illustrate this impressively. And I'd like to highlight those words: cultural historical.
I hope you would agree that the dispute between our two camps of biologists is quite odd and confusing unless we put it in a suitable cultural and historical frame. And that's a task that philosophy is able to take up. We saw just now that the accelerationists and the organicists disagree quite fundamentally about what life is, and that the disagreement is not purely theoretical but reflects a difference in attitude or evaluative stance towards human life and its mortality. I'll elaborate a bit more on this. Georges Canguilhem was a 20th-century historian and philosopher of biology, whose most famous student, incidentally, was Michel Foucault. What Canguilhem took from Nietzsche is not that life is will to power, but that life is error. Life instigates norms, but in that very moment initiates the likelihood of failing with respect to them. As living beings themselves, biologists have no neutral starting point from which to begin to theorize life. As a French philosopher of his era, we might grant Canguilhem the idea that biology involves a little bit of existentialism. In the dispute over whether AI is genuinely intelligent, the tech-friendly accelerationists tend to accuse AI skeptics of putting human intelligence on a pedestal because they're attached to the idea of human specialness. The charge is that they're conceited about our human capacities. According to Michael Levin, the organicists lack, quote unquote, humility with respect to engineered constructs such as embodied robotics, software AIs, and language models. But the accusation of lack of humility could equally be leveled against those like him who assume that the products of engineering are equal to the ones crafted by means of evolution. There the conceit shows in an exaltation of the capacities of human engineers.
So there is arguably conceit and humility on both sides, with a difference in how they're distributed, reflecting ultimately a difference in choices of values, which philosophical reflection can do more to analyze. An old accusation against those who attribute agency and goal-directedness to all living things is that it anthropomorphizes the natural world. But the bioengineers who equate living things with machines can equally be accused of anthropomorphism, for understanding the natural world only insofar as it's consistent with the design principles and processes that humans employ in engineering. In Canguilhem's view, it was inescapable that our human starting points and perspectives would shape the course of biological research one way or another. But since there is no universal human perspective, the ability to establish norms for research itself becomes a site of contestation. We need to ask: whose starting points and whose interests are setting the agenda? The criticism of the organicist position is that in its reference to a substantive difference between living and non-living systems, it's invoking some kind of mysterious vital force and is therefore not properly naturalistic. Interestingly, Canguilhem wrote about vitalism in quite a sympathetic way, but did not treat it as a claim about special forces or as an ordinary scientific hypothesis. Its political dimensions, for good or ill, were undeniable. Canguilhem argued that vitalism cannot even be strictly defined because, he wrote, it is an exigency rather than a method, and a morality rather than a theory. By exigency, he meant something like a demand that the experience of our own lives places on us to acknowledge an inherent agency and spontaneity. That's a helpful way to frame the writing of a scientist like Kevin Mitchell.
His self-declared aim was to reconcile the most advanced biology with, as he puts it, and I quote, the fundamental truths of our existence and absolutely the most basic phenomenology of our lives, which is that we choose and we act. But this framing does not mean that philosophical reflection must remain neutral or non-committal regarding the proposals put forward by the biologists. The criticisms of the misanthropy embedded in the transhumanist direction, the warnings about unintended consequences, are important, perhaps obvious, and have been voiced elsewhere. So I have a bunch of sources on this slide. That's a film that's just been released at the Sundance Film Festival about the history of AI. That's a paper drawing out the connection between eugenics, transhumanism, and the quest for artificial general intelligence. And I also wanted to highlight the research of my colleague Shannon Vallor, who heads the Centre for Technomoral Futures at the University of Edinburgh. So there's plenty of ethics, and people in different parts of culture, like filmmakers, are all discussing these things and criticizing some of the ideology behind accelerated synthetic biology. So I'm not going to do that here. Instead, I'll conclude with a few comments directed to the organicist side. In the late 19th century, when Nietzsche was writing, an individualistic understanding of evolutionary theory was almost universal. It was all about the competitive struggle for existence. And this, of course, was linked to laissez-faire capitalism and social Darwinism. This understanding of biology was congenial to Nietzsche, who was an individualist and political elitist.
Strangely, even though evolutionary biology is now very different, and in many fields of contemporary biology symbiosis and cooperation are buzzwords, the autopoietic theory retains something of a fixation on the individual organism and its own unique ability to define boundaries, me against the world, and set norms for itself. There isn't time now to speculate on the reasons for this. However, this approach, as a naturalization of value, does face a challenge precisely on this front: the question of how it can ever account for collective norms and other-centered values. A new book aims to remedy this problem by supplementing the theory of biological autonomy with other-directed autonomy. This is an ambitious goal, and it remains to be seen whether the autopoietic theory has the explanatory reach that is needed. Another observation is that people, individuals, are not actually very good at setting and following their own norms. As the philosopher C. Thi Nguyen describes in his new book, The Score, value capture, the constant drift away from the values we consciously wish to pursue, is a pervasive problem in modern life. Perhaps humans, compared to other life forms, are uniquely inept at what is, according to the autopoietic theory, the basic biological function of normative autonomy. Though I should mention that this phenomenon of value capture is fully consistent with Nietzsche's opinion that only a small minority of superior people are capable of establishing their own values. In sum, the challenge now for organicists is how to escape from the Nietzschean lure of individualism embedded in the theory of biological autonomy up to now. The bioengineers and the transhumanists, I take it, are quite happy to embrace their inner Zarathustra, though, as I mentioned before, they need to do a bit more work to become truly Nietzschean, because all of this fear of death is not so fearless and heroic after all. For the organicists, though, there's a lesson from one of their predecessors.
In his classic book, The Organism, the neurologist Kurt Goldstein makes the interesting remark that it is a matter of perspective whether an organism is seen as autonomous and complete in itself, or as a dependent and incomplete appendage to a larger environment. Every creature is simultaneously perfect and imperfect. Regarded in isolation, it is within itself perfect, well organized, and alive. With regard to the entirety, however, it is imperfect to various degrees. It exists only as a being within the whole, only by support of the whole. Therefore, it is doomed to die as soon as this support ceases. Organisms are finite, bound to death. We saw that this is the view which sets organicists today apart from bioengineers, who instead consider bodies to be endlessly repairable machines. That life is another face of death, and death of life, was a theme of the pre-Socratic philosopher Heraclitus, much beloved by Nietzsche, and often invoked by organicists today who speak of life as a perpetual process and flux. But in an excursus on the philosophical background to his account of biological knowledge, Goldstein says that a different pre-Socratic is the one most relevant to him: Parmenides, the philosopher of stasis and changelessness, and, in Goldstein's telling, the philosopher of unity, of knowledge ultimately aspiring to an undivided vision of the whole. So it seems that to get past Nietzsche, the task is to reconcile these opposites, Heraclitean strife and Parmenidean harmony, towards a more refracted conception of life. And that will have to be an excursus for another occasion. Time now to conclude. I began this evening by observing how technology, AI, used as a tool for modeling in biology, is revealing limits to the old aim of comprehensive scientific understanding of the natural world. I argued that this apocalyptic moment was envisaged by Nietzsche 150 years ago.
He also overshadows biology today, where scientists, like him, attempt to construct a naturalistic worldview following the death of God. I hope to have shown that scientists' response to AI, to the performance of intelligence in computing machines, is in many ways a reaction to a wider cultural crisis of nihilism and anxiety about the origin of values and sources of meaning. At root, our responses to the abilities of non-living machines are a reflection of how we relate to our own lives and envisage our own deaths. To quote the philosopher Giorgio Agamben: everything happens as if, in our culture, life were what cannot be defined, yet, precisely for this reason, must be ceaselessly articulated and divided. Thank you.
SPEAKER_04First one, going right back to the beginning, about naturalism. Yeah. So can you say a little bit more about the link between naturalism and intelligibility? I mean, somebody might say, I'm a naturalist because I'm convinced that there aren't any supernatural things explaining what happens, but I'm not bothered by the idea that I can't explain everything that happens. Naturalism's just the exclusion of supernatural explanations.
SPEAKER_07Yeah, so I think this question of how you divide the natural from the supernatural is the tricky thing here, because at the back of most people's minds, what is supernatural is something completely mysterious that a scientist could never explain according to the known laws of physics. So I do think that packed into people's notion of the supernatural, when they say this, is implicitly some idea about what's inherently unintelligible.
SPEAKER_04Okay, thank you. Next question. Can you say a little bit more about why it is that the way AI goes about explaining things is a challenge to intelligibility? Is it because we don't know how artificial intelligence works? Is it to do with people's worries about explainability? So say a little bit more.
SPEAKER_07Yeah, so it really links to this question that comes up in wider conversations about explainable AI. When you have models which are built to make decisions about creditworthiness, or in the legal system, ideally you would want to know the reason, the logic or the reasoning pattern behind that decision. But the more powerful and effective models self-organize: they develop their results based on training data, so there are no principles or reasoning standards built into the models, and the person who built a model has, at best, to do a lot of work reverse engineering it, trying to figure out what internal dynamics led up to the result. If you're taking a model like these foundation models that are being used in science today, the bigger they get, the more effective they get, but then that process of reverse engineering becomes harder. So there's what I would say is an intrinsic trade-off between predictive power and your ability to figure out how they work.
SPEAKER_04So we can have prediction without any understanding of what underpins the reliability of the prediction, is that the thought?
SPEAKER_07Yeah.
SPEAKER_04Okay. Also think about that. Last question. When you were talking about the brain and the understanding of human beings in the first half of the talk, I was wondering, what is it that we can't understand about humans? Because actually, we can understand quite a lot about one another without referring to science at all.
SPEAKER_03Right.
SPEAKER_04We could say, you know, she dodged the ball because she saw it coming through the air sort of person-level explanations. Do you think that in concluding that that bit of nature eludes comprehension, you're setting too high a standard, that as it were, you've got in mind some standard of intelligibility that implies a reductionism of a behavioural level understanding to a break-level understanding because we can't get that. Yeah, you say it's unintelligible.
SPEAKER_07Yeah, so that's an interesting question to put to me, because I've tended to keep my notions about scientific understanding, how that works and the limits of it, fairly separate from how we think about common-sense or everyday understanding. Really, I'm an understanding pluralist, so I don't mean to imply that what we conclude about the limitations of scientific understanding transfers to other domains where we're not doing things like mathematical modelling and trying to figure out basic causal laws. I don't have a theory of how understanding works in a domain like that, but I'm very open to other domains of inquiry, including, say, literature and the kind of understanding about people that you get from it, being sources of understanding, even though they have nothing to do with scientific accounts.
SPEAKER_04So just because unintelligibility might be the right conclusion to draw in one area of thought doesn't mean it carries over to every area of thought. Yeah. Great, thank you. Over to you. All right, you start. There's a roving mic, so wait for it to reach you. And then we'll work our way backwards.
SPEAKER_05Thank you. Regarding this idea of life as being self-legislating, I think is how you put it. I'm just wondering where that boundary is, in that the self is not some brute fact, right? It's something that has a beginning point and a state sequence of sorts. The distinction between what's chosen or legislated, whether by some consciousness or by life as such, versus what's natural, I'm not sure where that lies. If I can give a quick example: we say that the cows or the chickens that we have are artificially selected, right? But the way we artificially select them is determined by our nutritional needs and our ability to rear them and all of that, which is ultimately itself selected into us based on the scarcity of nutrients in the environment and everything like that. So, in a way, those also are naturally selected. So I'm just wondering where we could draw that line between the self-legislated and the non-self-legislated, I guess.
SPEAKER_07That's a really interesting question, because you're definitely right that the boundary between what's natural and what's artifact in the world around us today, including the living world, is very unclear. In terms of the autopoietic theory, it takes a quite minimal and traditional view about what the boundaries of a living entity are: any kind of individually persisting organism counts as the autonomous thing in itself. So if you are a chick hatching from an egg from a domesticated chicken, you are still a biologically autonomous being according to this account. It doesn't matter that you have been selectively bred and that there are all these other causal factors behind your coming into being, in just the same way that the theory takes for granted that all beings come about by natural selection, forces that are not under their control, but that doesn't undermine their autonomy at their birth. So that's the picture. I guess it gets more blurry when you're talking about organisms where the boundaries of the individual are not clear. I don't know, symbiotic systems of cells which need cells of different species in order to survive and persist. And this is a way in which I think this kind of theoretical biology has not yet fully incorporated the more recent work in biology which calls into question how easily you can individuate organisms.
SPEAKER_13Okay, two rows back.
SPEAKER_12With the pursuit of a continued Übermensch in artificial cognition: as humans, we obviously have desires in life to seek experiences, as was explored in Nozick's experience machine thought experiment. With the development of artificial cognition, do you think that there'll be a departure from the values that humans, having non-artificial cognition and consciousness, seek, and that we will simply delve into a pleasure-seeking AI which would be a manifestation of Nozick's machine?
SPEAKER_07Hmm. Yeah, interesting question. So is the thought there, the scenario there, that we kind of hybridize with machines and have access to sort of virtual realities, and then that shifts our norms and values?
SPEAKER_03Oh yes.
SPEAKER_07Yeah, I mean, under those kinds of, as I say, still-science-fiction scenarios, you could see that that would be likely to happen. If you think of a scenario like that, you're changing some fundamental things about what a human being is. Say we hybridize with machines: the values that we have, related to our current way of being human, you would expect those to change as well. I think that's what would be concluded from both of those camps in biology; the accelerationist and the organicist would say that if you change fundamental things about this kind of biological being, the norms and values that it pursues will also change.
SPEAKER_04Okay, let's keep travelling backwards, go one row back. Okay. Well, no, no, go on. Come forward again.
SPEAKER_01When you were talking about the limits of scientific knowledge, you were talking about that quote about how we have all of this data, and then AIs which are looking for patterns, effectively, which is happening now, and that we can't put that within a scientific method. Would you say that the sort of comprehension of the world that's giving us is an art rather than a science? And is that more of a supernatural or a natural understanding of the world?
SPEAKER_07Yeah, my take on these kinds of methods is that people are not pursuing understanding or explanation, they're pursuing effectiveness: how quickly can you predict patterns in these data, for the purposes of whatever you might want to do with that predictive model. And some have obvious, real-world, unquestionable utility, like better weather forecasting models; that's something that DeepMind is involved with. In this lecture, which covered lots of topics, I didn't have time to go into all of the arguments behind this. There are people in the field of AI and science who do have the opinion that these models yield explanations and understanding of different sorts, and that reverse engineering is more effective than I've probably made it sound today. So I've given you my opinion on this; there are people who have a more optimistic view about the ways these can be used to foster scientific understanding. Certain things like hypothesis generation: if you build a model that works really well with a data set and you're an inquisitive scientist, it could plausibly lead you to consider hypotheses and avenues of research that you wouldn't otherwise consider. So I've given you something of a one-sided picture in this, just to acknowledge that there's a debate.
SPEAKER_04Okay, let's come back forward. Uh yeah.
SPEAKER_09Thanks for such an interesting talk. I've got, I guess, quite an abstract question about the relationship between value and science, and the suggestion that if we realize that something's scientifically unanswerable, this might lead to this kind of nihilism. Science is, in a way, born of our valuing the world, right? A sense of wonderment, of interest in the world, and that's something AI doesn't have. AI does what we ask it to do; it doesn't have an intellectual curiosity the way we do. So even if we find that some questions can't be answered, or the answers that there are aren't comprehensive, we'll come up with new questions, is that right? Because we won't just stop valuing knowledge and the attempt to understand, even if it can't ever be complete. So with that in mind, how serious do you think the risk of nihilism is?
SPEAKER_07So I do worry that there is a kind of demoralizing impact of chatbots, LLMs, the AI that's being rolled out, especially on young people, because I was reading a news report about how many high school kids are using ChatGPT just to ask questions where in the past they would have had to sit down a bit, or at least go and do a Google search, read some things, and cross-correlate a bit more information. And just the very notion that there's a machine that will give me all the answers without me putting any effort in, and I'm just this kid and I don't know anything, and the answers are already out there, seems to be undermining their curiosity and motivation to study. And this article was reporting that teachers are seeing these dynamics themselves. I mean, your own experience of human curiosity is the product of a culture in which these technologies did not exist, and an education system which didn't have them. I'm concerned that it's less a stable human trait than we might expect, and it's something that could just be diminished in later generations, who are left with this thing that's presented to them as an oracle, as if it has all the answers.
SPEAKER_04Yeah, okay, let's go over to the other side of the room for a bit. So, one row from the front. Well, all right.
SPEAKER_06Thanks so much for the talk. I guess my question is in relation to this intelligibility point. I'm curious on your thoughts on whether you think that AI poses a challenge to intelligibility in an inherent sense, that AI is always going to have this by virtue of being an engineering tool, or is it that the current research paradigm that AI is based on is one that has this result, but there are historical or other ways to refashion it that could avoid that?
SPEAKER_07Yeah. So the thought behind presenting this trade-off between prediction and understanding is actually based on giving credit to what these models are doing, in a way that maybe doesn't come across so clearly, because I tend to say just negative things about them in these conversations. What they can do really, really well and impressively is find patterns in very multi-dimensional data sets that human beings can't visualize. We can see relationships between two or three different things, but then you go up to close to ten and you don't see the patterns anymore. So the point is that there is stuff in the real world where the patterns are multidimensional and, given our cognitive limitations, not visualizable to us; we can't see all the relations. These tools can do that, but then if we hand over science and innovation to them, we're just going to be using stuff blindly and sort of stop even caring that we understood it in the first place. So it's not just a feature of how the technology is currently designed. I think there's something about biological systems in particular that confronts us with this. One book I read recently, and talked about in another lecture, was Elusive Cures by the neuroscientist Nicole Rust, where she critiques translational neuroscience. So much of the research that's gone on in neuroscience toward developing cures for brain diseases and psychiatric diseases has been based, as she puts it, on a linear causal model: looking for the one originating factor and then seeing how a domino chain leads to the disease. And she points out that we know these are complex dynamical systems; they're highly adaptive. There are going to be so many different small causal influences that vary from one person to another.
It's exceedingly difficult to map all that and see how the relationships actually work. AI is just better at tracking those kinds of things.
SPEAKER_00Yeah, my question's about agency. A lot of the talk right now has been about singular foundation models, but down the line there will be agentic, multi-agent systems with multiple enhanced LLMs. It seems to me that the general sentiment is that a key distinction between humans and these machines is our sense of agency. My question is, down the line, when these multi-agent systems come about, will agency be an emergent phenomenon, or how will this AI agency be different from our agency?
SPEAKER_07Yeah, I tend to think that when people use the word agency in that AI context, it's a good marketing ploy. What they mean is bits of software which will run around your operating system doing stuff that you're not aware of and not in control of. So you're not the agent, but I think the human who programmed it is the agent here. I don't think there's a meaningful reason to say that these things have autonomy in that biological sense. I know the other side of this debate is that they have emergent properties, because obviously they can do things that the people who designed them did not build them to do; they self-learn from data sets, so not all of their capacities are predictable. That's fine. They're complex systems; they have unpredictable patterns. Biology is full of complex systems with unpredictable phenomena, but that doesn't mean that biology and AI are the same thing. So I think that word agent needs to be in scare quotes in the AI context. Yeah.
SPEAKER_08Sorry, I really enjoyed that. I wanted to ask if you could talk a little bit more about autopoiesis and the notion of the extensions of being. I find it almost implausible to think of ourselves as autopoietic beings in the original terms of autopoiesis, because we now know that, mentally and emotionally, we are not able to be fully human without other humans or other creatures around us. So could you just explain a little how that might fit into the story you're trying to build here?
SPEAKER_07Yeah. So I think this really speaks to that point about the limitations of the theory. One of the things about the autopoietic theory is that this is work in theoretical biology where the core move was to create the most minimal model of a biological organism: the single cell, and the minimal components needed to go across that transition from non-living to living. And certain things need to be in place: this self/not-self, or organism/environment, boundary, and then how that's maintained by extracting energy from the environment and avoiding harmful events. To get theoretical clarity, the core principles and ideas have had to stick with these minimal systems, and most of the visualizations that you see in the autopoietic theory are just a single cell with all of the causal dynamics around it, and this stuff about closure constraints. There's a huge series of steps that would need to go from those minimal models to biology as we know it, which is highly interactive, highly interdependent. That book that I mentioned, Autonomy, is by a couple of researchers in this space, based in Spain, who are aware that the bigger ambition of the autopoietic theory is this wider naturalization of values, and that there's a huge mismatch between what is theorized about this individualistic organism and the rest of biology, which is not about individuals but about interactions, and then human culture, where there's so much interdependence. So yeah, it remains to be seen whether that work can proceed and retain the conceptual clarity which you have in the most basic form of the autopoietic theory.
SPEAKER_04Okay, towards the back now on that side.
SPEAKER_11Thank you so much. I was wondering about whether God can come to the rescue here as well, in that one way of reading this relationship between science and theology is that it rests on bad theology. So, for example, God's simplicity doesn't necessarily align with intelligibility, but might instead be ineffable. And so the non-intelligibility of nature might still correspond with something that could be grounded in divine ineffability, and you might look to other ways of motivating your investigations; an obvious one would be the Platonic notions of what's good, beautiful, and true. But anyway, I just wonder whether there's a bit of theology that might be useful to bring in as well.
SPEAKER_07Yeah, I mean, certainly I didn't mean to present the Galilean theology as the only way you can be a theist, but historically that seemed to be the dominant thing. And I don't know whether that was just amongst physics-minded people like him, or whether it was the brand of Catholicism that he grew up with, which didn't veer towards anything being ineffable. But yes, I certainly take your point that there are different ways of being a theist which would be consistent with some things being unintelligible in nature, so yeah.
SPEAKER_10Sorry, I was a bit late, so this may have come up elsewhere. Can you tell us more about this project being celebrated, 1925 to 2025? And do you think that AI can help us with heritage, helping people to understand things like patterns in nature and historical stories and moments? What do you think heritage can come up with?
SPEAKER_07Yeah, so I'd have to pass over to Edward about the actual banner there and the celebration of the hundred years. But just on that AI point: I don't mean to say that it never has any uses or benefits. I'm not just saying this is a poisoned chalice and everyone should throw it away. But that said, there are the environmental costs of building these things, and then using them as toys for something that maybe, with a bit more effort, you could do yourself, and thereby use your own creativity or make friends with people with skills that you don't have. Yeah, I'm still not seeing a massive use case for them. That's my opinion.
SPEAKER_13Let's wait for the mic.
SPEAKER_05Actually, on this question about agency, this issue of predictability. Would you just clarify: are we talking about predictability here in terms of whether something is practical to predict given our abilities, or whether something is fundamentally unpredictable, as in, even if we could capture a stack trace, to borrow a computing term, of all the mechanisms and interactions that went into whatever the phenomenon is? And would you also talk about whether predictability here is related to figure-outability, because I think they're kind of related.
SPEAKER_07Yeah. So as I framed it here, predictability is in tension with figure-outability when you're dealing with very complex systems that you can't capture in a simple model. In the case that I have experience with in neuroscience, if you built a simple, elegant, neat model, one connected with a nice clear hypothesis about what these cells were doing, it was predictively inaccurate beyond a narrow range of cases. So I think there is a difference between predictability and figure-outability. Often in science, in the science that we know and love, they tend to go together: think of Newtonian mechanics, nice, elegant mathematical laws that allow you to predict lots of things in the macro space around us. I'm saying we're reaching a frontier of science where they're pulling apart, and that's creating this tension between normal scientific work, aiming at understanding and figuring things out, and prediction, which is more the engineering task, though it's also been a big task of science. But you seem to be also talking about systems which are inherently unpredictable, and I wasn't going into that here. I think this is an interesting question, though, where AI could also hit a frontier, because AI is basically a very impressive induction machine. It takes data, past occurrences, and it uses that to make predictions about what will happen in the future. If the future is different from the past, then that breaks down. So it does need an assumption about the stationarity of the world, or, like Hume's old problem, about the uniformity of nature. And we know that there are things in biology which are inherently quite changeable, so that also puts a limit on predictability.
SPEAKER_13Okay, have another go, yeah.
SPEAKER_12Continuing with the previous questioner's point about theology and religion: how are apocalyptic technology, and particularly theories about cognitive development and that sort of thing, applicable to theology and the work of religious philosophers, particularly those of the Abrahamic religions, for whom the human is made in the image of God? How can developments in apocalyptic technology and cognition continue to align with the religious view of the world while still being scientifically accurate?
SPEAKER_07Yeah, interesting question. So this idea of the image of God was very much in the background of what I was saying about Galileo. This is precisely the theological assumption behind it: their interpretation of the claim that we are made in the image of God was the basis for saying that our intellect is comparable to God's intellect, and therefore we can work out how things work with mathematics. But what I would say about the current moment is that it really needs to be noticed how much transhumanism seems to be grounded in a basically religious impulse. Even amongst those practitioners who take themselves to be atheists, there's this kind of God-shaped hole, and they are filling it with: okay, we are the new creators of the next big thing, the intelligence that will supersede us; we are taking our own evolution into our own hands. I saw a talk yesterday by a sociologist of science at Edinburgh, who was talking about why bigger and bigger data centres are being built right now for diminishing returns in terms of how powerful the models are. A lot of the motivation is that we just need to scale this up and then we'll get AGI. And a lot of the motivation for AGI has this quasi-religious fervour around it. It's like: we just have to build this thing, and once we build this thing there will be heaven on earth, because the superintelligence will be God for us. It will tell us everything we want to know, all our problems will be solved, we'll be able to colonize other planets, we'll get immortality. The whole list of everything that you could want from a superhuman, omnipotent being, they're going to put onto this AGI that they're trying to build. So it's not exactly rational.
SPEAKER_00Um it's a bit of a broad question, and it's vaguely along the themes of your talk. How can we as a species and individuals have a healthier relationship with technology?
SPEAKER_07Oh, that's, yeah, I think it's really difficult at the individual level, because so much of what happens is driven by wider social forces around us. As a parent, you know, even if I set norms and limits on my children's technology use, it seems that even their schools are in on it, saying, oh well, technology is the future, so your kids need to be exposed to lots of it. So I don't think we currently have the right democratic mechanisms for actual collective decision-making on this, which I think is maybe at the root of people's feeling of powerlessness and all of that. People in the tech-adjacent worlds, but not in the tech industry, talk a lot about learning tech skills, as individuals and as groups of people, so that you have choices about what kinds of technologies you use and don't always depend on the biggest companies. So I think that's a really important thing, along with thinking about different models of local organization and different kinds of communities, because yeah, if it's just left to each of us as an individual, it's very hard to be in control of even our own uses of devices.
SPEAKER_10Go on, make it quick, because we've only got a couple of minutes. Second question: can scientists work out different ways to make things clearer, and to create a greater atmosphere of connection with movements of faith and the good?
SPEAKER_07I think a lot of scientists who write in the popular science space are doing that, and Sean Carroll, I think, is a really interesting case: he is precisely doing that thing of trying to make things clear. His book is called The Big Picture, and it tries to show how all of these different parts of contemporary science fit together, and then how that fits into a larger worldview, which for him is what he calls poetic naturalism: a naturalism that still wants to have a big place for value and everything. So yeah, there are definitely scientists who care about that.
SPEAKER_04And there, I'm afraid, we need to draw things to a close. Thank you very much for your contributions, and thank you, Mazviita, very much for your talk.
SPEAKER_02Thanks for coming.