From Therapy to Social Change

The Age of Humachines: Artificial Intelligence, Social Justice, and the Annihilation of Being Human. Michael D. B. Harvey in Conversation with Mick Cooper

Mick Cooper & John Wilson

What happens when the line between human and machine begins to blur? Michael Harvey, author of "The Age of Humachines," takes us on a fascinating journey through the rapidly transforming technological landscape where machines are increasingly humanized and humans face mechanization.

Harvey introduces us to the five categories of humachines reshaping our world: cognitive systems mimicking human reasoning, emotional technologies reading our feelings, relational machines transforming our communication, robots replacing human functions, and the mechanization of humans themselves through brain-computer interfaces and artificial organs. This isn't science fiction—it's happening now, with companies like Neuralink already implanting technology in human brains and predictions of 1.3 billion robots in our supply chains by 2035.

Behind this transformation lies a powerful economic imperative. As traditional growth stagnates, capitalism seeks reinvention through technologies that prioritize capital while making human labor increasingly obsolete. The result? A steepening social pyramid with a techno-elite at its peak, equipped with life-extending technologies and enhanced capabilities, while the majority face displacement and diminishing prospects. This transition goes largely undebated, presented as inevitable progress rather than a profound choice about our collective future.

Most concerning is the widespread de-skilling of humanity. From wayfinding replaced by GPS to creativity supplanted by generative AI, Harvey argues that we are losing fundamental capabilities that have defined our species for millennia. For therapists and psychologists, this presents both a challenge and an opportunity: to articulate and preserve the uniquely human dimensions of connection and relational depth that no algorithm can truly replicate.

So how might we shape a technological future that serves human flourishing rather than replacing it, and what limits, if any, do we need to put on AI? Harvey and Cooper explore the very real strategies and possibilities that can create a humanised, socially just future.

Mick Cooper:

Okay, so welcome everyone to this TaCS podcast. I'm delighted to have with us Michael Harvey, Michael D. B. Harvey as his writing name, who's an organisational psychologist, political ecologist and former technology entrepreneur based in London. Michael's the author of one of my favourite books for people interested in therapy and social change, Utopia in the Anthropocene, but what we're going to focus on in this podcast is Michael's latest book, which came out in 2024, The Age of Humachines. I love that; Michael's got some wonderful language that we'll come on to. Big tech and the battle for humanity's future: The Age of Humachines. Michael, tell us what the big idea is.

Mick Cooper:

What would you say is the main theme behind the book? And what are humachines, just to kick us off?

Michael DB Harvey:

Well, perhaps there's hardly ever been a bigger theme in the history of humanity, and that is that we're going through a period in which machines are being humanized and humans are potentially being mechanized, to a point where it becomes increasingly difficult to tell humans apart from technology.

Michael DB Harvey:

This is a process which has been going on, really, you might say, for at least the past 400 years, certainly through the Industrial Revolution, but I think it really kicked off in the year 2000, shortly after the dot-com crash, when an awful lot of pressure was put on technology companies to start delivering for capital and stop being simply research companies. The most obvious one was Google, which suddenly exploded from being a research company which said it didn't want to have anything to do with advertising to a company that completely reinvented the whole notion of advertising, which couldn't be more central to what capitalism is, and as a consequence it's become a $2 trillion company. So humachines I really define in five categories in the book: cognitive, relational, robotic, the mechanized human or mechon, and the humachinated environment in which all of these different types of humachine come together, in something which I think is still a way off, but is a kind of bio-digital robotic reality.

Mick Cooper:

So, Michael, there's so much there already, and I just want to come in. Take us through those again and explain what each of those different ones are.

Michael DB Harvey:

Well, let me start with cognitive humachines, because in a way that's what people are most familiar with. This is basically trying to find ways of replicating what humans do cognitively in terms of reasoning, decision-making, planning, reading and so on. It's commonly known as AI, and most people are now pretty familiar with it, because over the past couple of years we've had generative AI, which is making its way into virtually everyone's life these days.

Michael DB Harvey:

It's a process, a kind of history, which started way back in the 1950s, when a group of computer scientists in America got together and said: why don't we replicate the human brain?

Michael DB Harvey:

And the idea was: well, the brain is probably a bit like a computer anyway, so it ought to be quite easy to do that. They thought it would take a couple of months; it was a summer vacation project. Basically, they're still trying to put that together, and none of the tasks they set themselves have actually been achieved. But that process is moving towards what's called AGI, artificial general intelligence, or superintelligence, and some of the leading people in this field think that this is just about to happen. For a long time this was thought to be something that would happen in the late 2040s, but people like Ray Kurzweil, who's a leading visionary in this field, and people like Elon Musk are now suggesting that this point, where the sum intelligence of humanity is superseded by the power of a cheap computer, could happen by 2029, or maybe even this year or next year.

Mick Cooper:

So that's the first one. Michael, just take us through the other ones, and then we'll come back to some of the others.

Michael DB Harvey:

The other areas are emotional humachines, which are basically trying to read our emotions, essentially through eye movements, through stress, through biomedical data: the kind of things therapists tend to be pretty good at without requiring any technology. But it's something which is getting more and more important. It's happening a lot in gaming, in sort of total interactive systems. It's even said that this kind of technology will replace lie detectors and will be extremely good, maybe 99% accurate, in deciding whether we're telling the truth or not, and it's already being used in various types of job interviews, or in border situations, interviewing potential asylum seekers and so on. So that's emotional humachines. Relational humachines are really much more familiar. In a sense they include communicative humachines, like the kind of thing we're doing now: a whole revolution which has occurred in terms of text, data, emails and so on. Now it's moving on to AI assistants, AI agents who are going to be giving us advice, planning out our day. Potentially we're looking at AI newscasters, incredibly convincing. So it's a whole kind of system which is changing, in quite fundamental ways, the way we communicate with each other and relate to one another.

Michael DB Harvey:

That takes us to humachine four, which is the big one, in some ways the most dramatic, I think: the mechanization of human beings. This goes beyond robotics. Sorry, I've missed out the third one; I should put that in. So robotic humachines are basically easy to understand. They're basically trying to replace a whole series of human functions: the way we relate to each other in terms of work, how we care for each other, how we care for the old, how we care for the sick, how we care for the young, new forms of parenting, new forms of teaching, all of which can be done either by digital systems or by humanoid robots.

Michael DB Harvey:

And robotics is the big thing, I think, which is taking off in what you might call the second age of humachines, the second quarter of the 21st century.

Michael DB Harvey:

There are predictions. Citigroup has just come up with this prediction that by 2035 there will be 1.3 billion robots out there, most of these industrial robots involved in the supply chain; not simply in the factory system, where they've been for quite a long time, but in the entire supply chain, retail and so on. But also, by 2050, the prediction is that there'll be something like a third of a billion humanoid robots, and this is where you're encroaching more and more into that kind of human territory. These are robots that not only can do physically the motor-skills things that humans are good at, but actually have AI which enables them to talk, communicate, read our emotions and so on. And that's a huge area. I mean Tesla, for example, much in the news at the moment, has rebranded itself in some ways as a robotics company, and when you think about something like a self-driving car, this is basically a robot; it's a robot you can get inside.

Mick Cooper:

So, Michael, you were saying about robotics, and then this mechanization of humans. Is that the final stage that you're writing about?

Michael DB Harvey:

Well, let me get on to the fourth point briefly, which is the most dramatic, because this is the point where, in a sense, technology and the human brain and body come together. The main idea for people like Elon Musk of how you actually create superintelligence, or use it for human purposes, is basically by putting it inside the human brain, and he already has a well-funded company called Neuralink which is already doing this. It's enabled severely paralysed people to regain some of their functions, but that's simply the start of a process which is meant to put the power of increasing information processing actually inside our brains, to the point where 90% of our brain is computerized.

Michael DB Harvey:

At the same time, we're looking at artificial organs. Again, this is something like a $30 billion industry which is developing. The artificial heart is already used by some people, not for very long, but for something up to about six months people are surviving with artificial hearts; then artificial kidneys, artificial lungs, artificial skin. And the idea behind this, for the theorists, is that medical death is unnecessary. In other words, we ought to be able to replace virtually everything that we think of as the brain and body. We could have nanomedicine, tiny nanorobots actually inside our bodies, repairing and monitoring our health, and in this way it should be possible to live for hundreds of years.

Mick Cooper:

So we'll come back to some of the challenges and problems. What's the fifth and final form that you write about?

Michael DB Harvey:

I think the fifth is the hardest to understand, because it's trying to envisage a situation where all of these four elements come together; where you're living in an environment where you have robots, you have digital systems, and not simply systems that are in your phone or your laptop, but built into the walls and ceilings of your house through materials science. Every surface, in a sense, is live, is electronic; it's watching you, interacting with you, responding to you. In theory, you want to change the colour of your car? You change it. Maybe it senses your mood sufficiently to start changing. And we're seeing this in the smart house, the smart city, the smart workplace, extending out to the smart universe and, ultimately, the smart space colony and everywhere in between. So it's just about as big as you can possibly get.

Mick Cooper:

Thanks, Michael. It's really helpful just to map out those different realms. I guess the key question then would be: so what? In a sense, you know, you were talking about artificial hearts and artificial kidneys, which obviously could be a positive thing. I'm also thinking that, say, from a kind of Marxist perspective, the tools that we use and the means of production, this is just another phase of social development, in terms of technological determinism in some way. But I know for you, and I think you've hinted at this already, and I guess it links with how you see the role of capital in this, that you see this process of humachination as fundamentally problematic.

Michael DB Harvey:

I was just losing you there.

Mick Cooper:

Sorry, I was saying that I think you see the process of humachination as fundamentally problematic. You don't see it as neutral.

Michael DB Harvey:

I mean, I think, in the end, what we're looking at is this extraordinary reinvention by capitalism of itself, as something which goes beyond changing human beings in terms of the political economy, in terms of social relations, in the way that has always happened over the past 200 or 300 years, to fundamentally changing the meaning of being human on planet Earth into something which is totally technologized: essentially a new kind of human race, sometimes called transhumanist. This is something that could be presented as a kind of techno-utopia which could transform everybody's life, and in theory it could, if you believe in the total trickle-down theory. But it's much more likely to benefit those who already have the most, those who are already investing heavily in anti-ageing treatments and so on, creating this kind of super techno-elite which then dominates this new kind of social pyramid to an even greater extent. And the key in all of this, or probably one of the keys, is job automation: the way in which all of this AI, the robotics and so on, is there to stimulate, supposedly, economic growth. And it is, in a sense, I think, a recognition that economic growth is fading. It is deeply problematic. Certainly in Europe it's almost disappearing.

Michael DB Harvey:

We're seeing our current government in this country, the Labour government, virtually betting the farm on AI as the only way in which we could possibly rediscover the economic growth which has more or less disappeared from our economy since the 2008 crash. So this is, in some ways, the logical way for capitalism to go: to totally prioritize the role of capital, so that labour almost becomes immaterial, something which can be done much more effectively and much more cheaply by machines. So that, in many ways, is the key area. To what extent is this accepted? To what extent do we, progressives and others, go along with this idea, which has now become, without any real discussion or debate, more or less accepted, it seems, as basically the next phase of capitalism and the next phase of neoliberal economics?

Mick Cooper:

Yeah, sorry, Michael. I mean, if you look at the work of, I think it's Aaron Bastani, on fully automated luxury communism: he would argue that AI and technology will free us up for leisure and will allow, as you say, the development of some more utopian society, and allow for the possibility of a fairer society. But it sounds like you're taking the opposite view, which is that the development of AI leads to an increasing infiltration of our very being by capital, and will permeate us with a kind of impossibility of ever escaping from that hierarchical, oppressive system. Would that be right?

Michael DB Harvey:

Yeah, I mean, I am quite critical, I think, of the fully automated luxury communism idea, or of other people putting forward a kind of leftist accelerationism, as it's called. And you know, that's always been quite a dream, in a sense, of the left. If you could imagine a totally successful, fully functioning socialist state, which has solved all the problems that Soviet socialism came up against and wasn't just a new kind of hierarchy, then you could say: possibly there could come a time when you start thinking, well, we've achieved so much in terms of egalitarianism, we've solved the climate problem, we've created a sustainable world; how can we add to that through machines? But we're not there.

Michael DB Harvey:

And I think the idea that somehow you can ride shotgun on big tech and at some point steer it in a completely different direction, by which point it has so much control over absolutely everything, is just pie in the sky. And I think Bastani in particular swallows whole an awful lot of big tech myths about free energy and so on, about what this kind of technology can actually do. We know, for example, that AI is already consuming huge amounts of energy, and the more we move towards AGI, the more massive the amounts of energy that will be required just to keep AI going. AI tends to be thought of as something in the cloud, that wonderful euphemism, but what it basically is, is huge warehouses with massive banks of servers. AGI has been described as "magic intelligence in the sky", but it isn't in the sky; it's grounded, and it's being pumped out by machines.

Michael DB Harvey:

So I just take that as one kind of problematic area. Another problematic area, which maybe we can come on to talk about, is this fundamental myth, the brain-computer myth, which I think is at the heart of so much of this, and which suggests that somehow it's unproblematic to mechanize human beings.

Michael DB Harvey:

Yes, if you think the brain is basically a computer, a piece of software which is running a body which is essentially hardware, and if you think of the human genome or the genetic code as again being software, then why not reprogram it? Why not personal eugenics?

Michael DB Harvey:

And human cloning is again part of this whole humachine imaginary. But that, for me, is a complete and utter mechanistic myth. Yes, it's got a long heritage, going back to Descartes and Hobbes and the beginning of the Western scientific revolution. But it is in fact, I think, a kind of terrible calumny against the human brain, which we know is the most complicated organ, the most complicated entity, in the universe; and in fact what we understand about the human brain at the moment is extraordinarily limited. There's a marvellous book by a guy called Matthew Cobb, The Idea of the Brain, and he's actually a biologist working on a very, very simple kind of organism. He thinks that we won't understand the way in which even the simplest, worm-like brains operate until probably the end of the century.

Mick Cooper:

So you're saying that it's far too premature to be using technology and AI as a way of emulating the human brain; that without understanding it, we're imposing on it a particular structure. Do you see AI, then, as part of your concern about a fundamental dehumanization of the human being?

Michael DB Harvey:

Yeah, I mean, I think what we're looking at is a kind of capitalization of everything, the final frontier for capital: to turn everything that humans are into a source of capital. And we've already seen this to some extent with the datafication of everything, another kind of humachination, in which all of our data is sucked out of us, used by social media, used by all of the systems that we employ, in order to build up this data picture of us which enables big tech to more accurately predict our consumer and even political choices. So yes, it's an incredibly powerful system for dehumanizing us in a way which is powerful for capital. But at the same time it is also, you have to say, a scientific vision, and it's two kinds of things coming together. There's a tradition, mainly in physics, which goes back to Newton, goes back to Kepler: the whole revelation, if you remember, of Kepler and others thinking, hey, the universe is not some sort of ineffable, deeply mysterious organism presided over by a god you can never understand; it's much more like a clockwork, and God is a kind of watchmaker. And on that basis, if you think it's a clockwork...

Michael DB Harvey:

Well, we can take it apart and ultimately we can understand it and re-engineer it. This is a very, very strong tradition which has been going in science for 300 or 400 years, and it's only really now, in the 21st century, certainly with the computer and computer science, which is at the absolute heart of all that big tech are doing, that you suddenly say: yes, it seems to be possible, with the digitalization of everything, to mechanize everything, by understanding it, breaking it down into digital structures and then restructuring it.

Michael DB Harvey:

So in a way it's more than just a problem with capitalism. It's a problem, I think, in science, with this kind of potential split between a physics-based ultra-science, which sees no limits really to where it can go, beyond what it is to be human, beyond the Earth, beyond the solar system, into infinity, as a kind of duty of what scientists do; and, on the other hand, a much more biological tradition, the life sciences, a much more integrated way of thinking of nature, not as something there to be mastered and overcome, but as who we are. Again, not a dualistic split between humans and nature, which is very much part of that mechanistic physics tradition, but seeing that, yes, we are all part of nature, and asking how we can understand nature in a way that opens up better ways to be human, rather than somehow turning us into machines.

Mick Cooper:

And I guess what you'd see in that latter camp is some of our more humanistic models of therapy: the work of Carl Rogers, ideas of actualisation and the organism, kind of non-mechanistic; or the existential approaches, which would see the human being in a very different way and would be very wary. In a sense, your project is an existential one, like all great existentialists': trying to save humanity, and the uniqueness and variability of humanity, from a kind of mechanistic, scientistic, deterministic worldview. But I wanted to ask you, Michael, about what you see as driving big tech. And maybe it's an easy answer or an obvious answer, which is capital, okay, but what is behind the capital? Is it individuals? What are the forces behind that? Do you see it at a kind of individual or social level? What's your sense of that?

Michael DB Harvey:

Well, I think there's a lot in that. I mean, I think to some extent you can understand a lot of what's happening with big tech in terms of computer science, this discipline which is now one of the most popular, I think it's become the most popular, academic area in this country and around the world, and which only really started to exist as an academic discipline in the 1970s. And what's interesting, I think, is that if you look at all of the big tech titans, people like Larry Page and Mark Zuckerberg, Sergey Brin, Elon Musk and so on, all of them have formally studied computer science. Larry Page's dad was one of the first computer science professors. They've not only studied it but they've kind of absorbed it almost from birth in many cases: playing around with learning new computer languages for fun, playing with computer games and, of course, reading, fairly obsessively in many cases, science fiction.

Michael DB Harvey:

Jeff Bezos, for example, is well known as a Star Trek freak; I think he even paid to be in Star Trek. He sees his whole mission not really as anything to do with Amazon, but to do with Blue Origin, his space mission. That's what he's dedicating himself to, and he has said: I see myself as a builder, essentially putting into practice all of those brilliant ideas that science fiction came up with in the 20th century. So this is an incredibly strong imaginary which is driving these people. Yes, they want to make money, but some of them aren't actually that interested in money per se; they're interested in the power that comes with it.

Michael DB Harvey:

So you have this kind of scientific aspect, where computer science, as I tried to explain, has always had that idea of AGI, or artificial general intelligence, as almost its mission. If you are into computer science, you're into this idea that it is possible, and perhaps necessary, to come up with artificial intelligence which is thousands of times, maybe millions of times, more intelligent than anything humans can come up with. And that, for me, is in itself incredibly dangerous. It's sort of accepted: hey, that would be useful, wouldn't it? But what would that be like? How would we understand it? We wouldn't have a clue what an entity which is thousands of times, millions of times, more intelligent than us is doing and what it intends to do. Would it be like a benevolent god, a malevolent god, or what? And one thing that isn't discussed in any way: how would this be democratized? Who would own this?

Michael DB Harvey:

Of course, the answer at the moment is absolutely clear: big tech, five or six companies which own the large language models, which increasingly even academics don't have access to.

Mick Cooper:

So, Michael, can I just bring you back to something? Because when you describe this kind of drive as coming from science and the imaginary and Star Trek, that's almost quite a benign model. It certainly contrasts with a drive towards capital, greed, it being about money and ultimately about capitalism. I mean, do you see both of these forces at play, or do you ultimately see that there's something more psychological, more personal, driving this? Whether it's dangerous or not, the intention behind it: do you see it as benign, or do you feel it is, in the way that Marx and other socialists have described it, ultimately this kind of steamroller force of capitalism that is underpinned by, maybe, greed or some other human need?

Michael DB Harvey:

Yeah, I think it is. I mean, I think it is kind of the logical next step in capitalism simply to go beyond humans. Why? Well, because capital needs some way of creating economic growth, which up to now, as a de-growther I would say, has been very dependent on fossil fuel, on the extraordinary expansive power of coal and oil and gas. And that, of course, has become increasingly problematic, both in terms of, to some extent, supplies running out, but also in terms of the devastating effects which using this fossil fuel technology has on the planet. So there is a very, very strong desire to have something which goes beyond this. And of course the technology is not just based on humans; it's also geoengineering, fundamentally re-engineering the planet as well, in terms of the ultimate destiny.

Michael DB Harvey:

In one sense, I think the first 25 years of the age of humachines has been probably more benign in terms of its effects. There are some people who are suffering horribly, I think: the so-called micro-workers, people around the world who are paid tiny sums to work on data, where you get paid one cent or something to identify whether there's a cat in that picture. A lot of what is happening is based on this new kind of underclass. But I think the really devastating effects will happen in the second phase of the age of humachines, over the next 25 years; the foundations have been laid. As we see more and more job automation, more and more jobs disappearing, professional middle-ranking jobs as well as lower-paid jobs, and more and more of a concentration of power at the top among the tech elite, we're beginning to get the emergence of this kind of China-America superpower restructuring, essentially based around big tech, based around who owns the data, and I think this is one way in which we're going to see the splitting up of the world.

Michael DB Harvey:

Trump is already doing it. He's already warning Europe: you need to go it alone, which is actually possibly an opportunity for Europe to develop a very, very different way of doing technology. But the danger is, yes, that we will get these massive superpowers and two kinds of techno-authoritarianism. American authoritarianism is based on a complete control of people's lives through surveillance techniques. Again, this is already happening: shutting down demonstrations by identifying people through facial recognition, in the supposed land of free speech, by people like Peter Thiel, one of the most powerful big tech figures whom most people maybe haven't heard of. He owns this company called Palantir, which is basically a surveillance data-mining system offering governments automated surveillance and administration. Palantir also, by the way, now holds much of our data in the NHS, because they managed to get the contract to do that.

Mick Cooper:

So do you think, Michael, sorry to interrupt you, that humachination then pushes us towards growing inequalities? It sounds like you do. In terms of social justice issues, it sounds like you feel it will lead to more and more inequality between those who have and those who have not; that we've just seen the beginnings of that, and that growing inequality is going to get worse in the future, until there's an even greater divide between those with and those without. A dramatic disequalization.

Michael DB Harvey:

That's what we're getting. The 1% now, I think, owns as much as 95% of humanity, and, where we've had any kind of growth, almost all of the proceeds are going to the top elite; and the top elite, increasingly, are big tech titans. I think eight of the richest people in the world are in big tech, and we're also seeing this tech elite of software engineers who are themselves earning huge salaries. So yes, I think that's definitely the way the whole process will go: towards shoring up this very, very steep economic pyramid, and even perhaps going beyond what we think of as capitalism to some kind of techno-dystopia.

Mick Cooper:

What would that look like, Michael, that techno-dystopia?

Michael DB Harvey:

Well, I think it wouldn't look very pleasant, because, apart from anything else, big tech is interfering with whatever climate action is actually taking place. It claims to be quite green, claims to be eco-friendly, but there are all sorts of ways in which big tech is collaborating with what you might call brown capitalism, fossil fuel capitalism, which is actually going stronger than ever. It's certainly masking it, because if you buy the technologism idea that technology can solve absolutely everything, why bother about reducing emissions? You can geoengineer everything. So yes, I think we are moving towards something which is potentially post-capitalist in one sense, post-growth, but it will look pretty awful: some kind of fortress state in which technology is used to control people, 1984-style, but with a technology which is absolutely unbelievable in terms of its intrusiveness, in terms of its ways of knowing exactly who we are, because everything in the smart home, let alone the smart brain, is going to be surveilling you, checking you, and making sure that you don't do anything to disrupt the system. Now, to some extent we've talked about the American version of that. The Chinese version of techno-authoritarianism is already far further advanced, in terms of a massive surveillance system, internet censorship, this social credit system in which, even if you just get a parking fine or something like that, you can be banned or prevented from doing certain things, like buying a train ticket or going to a concert; and of course any kind of political infringement you might commit, like going to a demonstration, can have much more serious repercussions. So you can see that kind of world where, yes, if you play along, maybe you can get by, but for a lot of people it will be miserable.

Michael DB Harvey:

And the question, of course, is: what's the reaction to that? Is there an opportunity, in some ways, for people who are reacting to this? Job automation will probably be the catalyst. That's where people now need to start thinking very, very seriously about some kind of resistance to what's happening, to the potential job losses. We're seeing it now in the civil service: 30,000 jobs are going from NHS England, which is being wound up, and many more jobs are supposed to be automated, or at least got rid of, in the UK civil service. And of course we're seeing Musk, with his chainsaw, eliminating potentially millions of jobs. Now, that's not necessarily automation yet; that's just cutting, and it's also destroying the government agencies which make any kind of regulation possible. And the biggest danger in all of this is that we have the continuation of the deregulation of big tech, which is what it's thrived on over the past 25 years.

Mick Cooper:

Michael, I want to come back in a sec to what we can do and what changes we can make, what this means practically; I think it'd be interesting to explore that a bit. Can I just ask you something first, though, about this dystopian vision you have of technology? I was thinking about that in contrast with my own book.

Mick Cooper:

One of the chapters in that is a kind of fictional chapter about a utopian society which is very technological, where things are run with technology and people have implants through which they're communicating with each other and making decisions democratically about how things go in society. I guess from your perspective you would see that as pretty naive, and that's fine if it is; perhaps it is very naive. But I think you would see more of a malevolent driver behind technology. Or do you think there is a possibility for technology? I mean, somebody might argue that technology can serve both good and ill, that it depends who's got control of it. But I think you would see it as inherently being run by big tech in the interests of capitalism. Would that be right?

Michael DB Harvey:

Well, I love your chapter, by the way; I think it's brilliant. And I think in some ways technology is neutral. I certainly don't buy into technologism, or technological determinism, which says there's nothing you can do, you just move from one invention to the next; that's the biggest danger that we have. I think we always need to be thinking of technology in political terms. The technology we have tends to be controlled by the elite. Indeed, elites are defined almost by the technology they have, the technology they use and, of course, in some cases, the technologies they ban. Famously, the Chinese emperors in the Ming era banned gunpowder from being used for anything other than recreational fireworks, because they didn't like the idea of all their nobles blowing each other up, which is, of course, exactly what happened when it went to Europe.

So throughout history we've always had this relationship between elites and technology. Before the first agricultural revolution, about 8,000 to 9,000 years ago, hunter-gatherer communities dominated, with very sophisticated levels of technology in many senses, and certainly brilliant skills in using this technology. But in many cases these were extremely egalitarian communities in which all decisions were made collectively, and you can imagine lots of situations where people said: look, hang on, there's that agricultural thing; maybe we should do that full time, because some hunter-gatherers may have been doing horticulture or agriculture in the winter season. And then people would say: no, we don't want to change our lives, we don't want to be working in fields all the time, we don't want to be tied down in that way. So I think there's always been a sense in which human beings are potentially the people who decide on the technology.

The trouble is, we do have this pyramid society. Ever since the agricultural era we've had the elite running absolutely everything, and even in democratic periods, which we're in now, yes, it's better than then, but where is the democracy around all of the things we're talking about? Where is even the publicity? This is one of the reasons I wrote the book: these things are happening, there is a very powerful intentionality here which has this extraordinarily compelling theory but also a practical logic, and nobody's really talking about it. You'd think we'd be having these kinds of discussions all the time.

Michael DB Harvey:

And to go to your point about whether these things could be used benignly: yes, potentially, but I think you've got to put the horse before the cart. What we need is a different type of society. It has to be sustainable, otherwise we're totally screwed anyway, and that, of course, is a massive change: that's the end of capitalism, I think it's the end of growth. It means a massive reduction in consumption and production. We need to create much, much greater equality. We need, as I think you rightly write about in your book, a much greater emphasis on well-being. And all of those things mean a complete jettisoning of our current neoclassical economic system, of neoliberalism. That is a fundamental revolution. And then we need, just this minor element as well, to do all of that on a planet-wide basis.

Michael DB Harvey:

Now, having done that, having created some kind of world which I think could be a wonderful world, one in which we're integrated with nature, yes, we'd have technology, but maybe not very much AI. The levels of energy that degrowth economists think about may be comparable to Western Europe in the 1970s, something like that; who knows, there may be more. There will be the internet, but there probably won't be AI. We would have a world which is much more egalitarian, much more diverse, where people, I think, still work. I'm much more in favour of the 20-hour week than maybe UBI.

Mick Cooper:

Just say what UBI is, Michael.

Michael DB Harvey:

Universal basic income. Some people see this as maybe the best policy in relation to big tech: to say, okay, let's have a robot tax; if you want to automate, you're going to have to pay for it, providing much more money which will then be paid to everybody in society, unconditionally, so that people have the choice of whether they want to work, or work part-time, or simply engage in community activities and so on. And I think it's a great idea. The danger, I think, is that it puts big tech in an even more powerful position.

Mick Cooper:

So where do you think people's focus should be? Do you think the focus should be on the economic system, on welfare, for instance, or should it be more on challenging and tackling big tech? Do you think we should not be using things like AI? Is it about boycotting it? What do we need to do, from your perspective?

Michael DB Harvey:

Well, I'm tempted to say all of those, and this, in a sense, is the difficulty: the focus. I think there has to be a continued massive focus on climate and climate change, and I think big tech needs to be more and more held to account for what it's doing, both in terms of how real Google's or Amazon's net zero plans are, some of which I think you have to take with a pinch of salt, but also in the way they're creating a new extractivist industry, looking for all the minerals, the lithium and so on, which are required for renewables. So it's fossil fuel extractivism all over again. We've seen it in Ukraine, for example: Trump suddenly getting interested in all Ukraine's critical minerals, no doubt at Musk's prompting. So I think sustainability needs to be worked on. Equitability, yes: I think we desperately need to start flattening the pyramid. Wealth taxes; thinking more about maybe even wage ratios, the kind of wage ratios which some cooperatives have, I think maybe five to one between the top and the bottom, as opposed to the hundreds and hundreds to one which we have at the moment.

Michael DB Harvey:

As an organisational psychologist, I would say this: we need to be thinking much more about organisations which are non-hierarchical, which are cooperative. In some ways that's the biggest change of all, and the most difficult one. Well, I wouldn't say we've never tried it, because there have been flatter organisations throughout history, but certainly the assumption that any organisation that's at all efficient is going to be hierarchical needs to be massively challenged. Having worked in dozens and dozens of steeply stacked organisations with hundreds of different grades, I know what that's like; we all know what it's like. But that, in some ways, is almost the biggest challenge: to start thinking in terms of flatter, horizontal organisations in which we have democratic decisions. And I think democratic decisions, what I call demo-techniques, are needed for big tech, for all technology decisions. We desperately need participatory democracy: assemblies, jury-type systems where people are discussing these issues.

Mick Cooper:

And, Michael, what about our relationship to technology itself? Going back to the question at a very practical level, in terms of our own use, do you think we should be very selective? Do you think we should not be using AI?

Michael DB Harvey:

Yeah, well, I mean, it's incredibly tempting, there's no doubt about it, certainly for things like summarising, and increasingly it's getting quite difficult to avoid. Google Search now, as you probably know, instead of just taking you off to a particular website or a series of websites, does it for you. So I think resisting some of that is important. But I also think we need to start thinking in terms of campaigns.

Michael DB Harvey:

There is a campaign to stop AGI, to pause AGI. These things are important. It may be possible to pause generative AI, this predictive system which has been going now for two and a half years but is already changing so much. And we're beginning to see individual campaigns starting. For example, there's a quite well-publicised campaign in the music industry, people like Paul McCartney and Kate Bush and many, many others, saying: stop using our material, stop sucking up all our data and then using it for apps. Under the system which the government is proposing, as a creator you have to opt out rather than being asked to opt in. I'm a songwriter, I've got some songs on Spotify and so on, and I don't think I'm making any money, but I don't particularly like the idea that, to prevent my material being used, I would probably have to identify all of these different companies and say: oh, by the way, please do not use my data.

Michael DB Harvey:

You might say those are small, self-interested, professional campaigns, but I think there are bigger ones which need to be saying: why don't we stop this entire development until we can discuss these things, until we decide on them? And that's going to be incredibly important, incredibly difficult, incredibly challenging. But I think we need to be looking at job automation certainly, which is the big one. This is the one where everyone is affected, and there are all sorts of myths, I think, being told about that. Like robotics: we're constantly being told, oh, you know, the robots are only going to replace dirty, dangerous or dull jobs.

Michael DB Harvey:

That is not true. They're replacing every single kind of job you can think of. Do you think of parenting as a dull job? Do you think of teaching as a dull job? Or, indeed, being a therapist? They're coming for every kind of job, jobs that don't even exist yet. So yes, I think it's about trying to focus on the job you have and starting to think: how do I protect that?

Mick Cooper:

Well, let me ask you more about that; I'm just going to let my cat out. You said they're coming for all of us, and I had a couple of questions around therapists and psychologists. Let's start with one: do you think they're coming for psychologists and therapists? Do you think therapists are going to be replaceable in the way that others are? That's one question. The other, which maybe we can come back to, is what role you think therapists, and people in the psychology field who are interested in these issues, in social justice particularly, might have in helping to develop a more socially just future, based on the kind of analysis that you've so eloquently described.

Michael DB Harvey:

Okay, yeah, two really good questions. Well, on the first one: there is no doubt that the robo-counsellor is coming, indeed is already here. There are already a lot of online systems which do offer kinds of therapy services, and you're seeing more and more of these AI assistants offering this kind of advice, and there are some reports that people actually find it quite helpful talking to a machine, whether you're actually talking or whether you're typing. There are some actual robots, but that's perhaps going a bit far. I think what you're going to see is this sort of ability to listen, and machines are pretty good at that; they don't have all of those emotions and all of that ego interfering. And then giving advice: well, some of the things that you're looking for when you're agitated or troubled are pretty simple. I mean, even now, I don't know if you've been on Bing Copilot, but I must say I had a bit of a conversation with it, and admittedly it was about my book, because I was interested in how that came up and how it linked to my songs and so on.

Michael DB Harvey:

And I have to say, after about ten minutes of this, I came away thinking: this is actually quite an interesting conversation. And it was very positive. It didn't realise I was the same person as the one I was asking about, a very narcissistic exercise, but it was positive about me as a writer and songwriter, and it was also positive in terms of: here are other things I can help you with; is this the kind of thing you could be interested in? Would you like to brainstorm that a bit more?

Michael DB Harvey:

So yes, I think there is a place already for a certain kind of online advice or therapy. Clearly, to load up a piece of software with all of the symptoms of OCD or whatever is pretty straightforward. Some people would say, of course, that cognitive behaviour therapy lends itself very well to digital therapy. Some therapists, certainly in the past, saw themselves as scientists, as almost white-coated experts, providing not a relationship, as I think we would very much think about therapy, but just scientific advice. Clearly, that plays into the hands of the online systems. So I think it's happening.

Michael DB Harvey:

But the question is one of organisational decisions. How is the NHS going to react to this? When you're looking at a situation where you've got something like a million people, over a million people, on a waiting list for therapy, and perhaps less money for it, there's the temptation to say: well, certainly, some of these can go online, to a therapist after all. Well, not a therapist, but a machine, a digital system that's available 24/7, which is also quite interesting: you're potentially going to be constantly referring to your online therapist. I think that's where you're going to see it. You might get a bigger distinction between private therapy, which I think is always going to be the most treasured type of therapy, and NHS therapy or very low-cost therapy.

Mick Cooper:

And it may well know what all the different treatments are. But do you think there's something that a human therapist can provide that AI never could? And what is that?

Michael DB Harvey:

Absolutely. I think it's the human connection. I think it's the shared experience of being human in a human world, and some of that, of course, is the almost-mistakes, possibly, that a digital system can't make; the emotions are part of what's there in a therapeutic relationship. And certainly, as I remember it, most of the evidence about what clients value in successful therapy points to the relationship itself. We as therapists get very obsessed with our models, but actually, I think, for a lot of clients that's irrelevant.

Mick Cooper:

To some extent it's just a one-to-one; it's the relationship, and it's also a sense of being cared for, which, by definition...

Michael DB Harvey:

Absolutely, absolutely. And I think that's incredibly important, and I suppose that maybe leads on to the second part of the question.

Michael DB Harvey:

In terms of how we can relate to this issue, or perhaps become more aware of it: I suspect clients will probably be coming up with that question more and more. Should I be using an online therapist? Or: I am using an online system, and it says this, and you're saying that. It may be that that comes into the work. But I think it is an opportunity very much to be emphasising the human aspect.

Mick Cooper:

So I guess, Michael, that second question was about the role of therapists and psychologists in resisting, in maintaining a more humanised and socially just world. Do you see us as having any particular unique contribution to make? It sounds like part of what you're saying is that working with our clients could be around reminding them of that human connection, that human possibility. Are there other things that you think we could be doing?

Michael DB Harvey:

Yeah, I mean, I think there are. I think it's partially drawing attention to this extraordinary choice which is now going on, which is: are you human or are you machine?

Michael DB Harvey:

And getting clients to think more about that, more about what their values are; and, to some extent, trying to debunk, to demystify, this whole notion about technology: that technology, yes, does it all for you. But what is the cost of that? What is the cost in terms of what you're losing in terms of your personal skills? So, yeah, you start using AI to summarise. Great. And then, after six months, a year, you're asked to summarise without it. Can you even do it? Do you know how to go about it? Do you know how to go about brainstorming something? And remember, of course, some people are losing skills they've acquired over decades; other people never had those skills, because they're going straight to summarising through AI, or to a whole series of things which AI already does, which they've never learned. So I think it's thinking about that. I have to say, if someone is interested in visual art, there are some extraordinary image generators which can give you, I don't know, a picture of humachines as done by Picasso, or in abstract expressionist style, and give me four of those in 20 seconds, and they are extraordinary. But you've got to be thinking: that's already eating away at freelance visual designers; there was something like a 30% drop, in one survey, in terms of people commissioning work. So I think it's asking people to be aware of the losses, both in terms of job losses to other people, because it all comes back in a sense; it may be a very different job to yours, but it's the same kind of situation as your job; but also in terms of the de-skilling, because this is one of the biggest problems, I think: mass de-skilling.

Michael DB Harvey:

Every time a job, or a task within a job, is replaced or partially replaced by an AI, human skills are disappearing, and some of them are absolutely fundamental. Look what GPS has done to quite possibly the oldest, or one of the oldest, human skills of all: wayfinding. Those navigation skills, extraordinary skills, enabled human beings to spread right across the world. You see it in Australian Aboriginal people: there's still an incredible ability to read an entire landscape which, to other people, to moderns, is just nothing. For them it's full of history, it's full of narratives and so on. But what's happening to us?

Michael DB Harvey:

Even getting around town, in your own neighbourhood, sometimes you can find yourself looking at the GPS. Where we used to have maps, we now just follow this voice, and the observational skills, the experience of actually going through a landscape or a townscape, are dropping, because all you're doing is focusing on the screen or on the instruction. So you multiply that across virtually every skill you can think of, and even competencies you were never aware that you, or anybody, had. And I think that's quite interesting: almost a sort of inventory, a logging of these skills, both personally and collectively. It's like trying to transcribe a language before it disappears; languages are unfortunately disappearing all the time. These skills are kind of indigenous human competences; they come, to a great extent, with being human. But the danger is they simply disappear, and we have no access to them except through machines.

Mick Cooper:

So again, and sorry, Michael, I guess one of the unique human skills, as a therapist and psychologist, is about relating. And perhaps, when we ask what the unique contribution of psychologists and therapists is here, it's that we can hold that skill. We can remind people of the depth of connection and the importance of connection. We can be a memory for that incredibly important human capacity for deep communication, which AI may be able to present but can never fully replicate, by definition.

Michael DB Harvey:

Yeah, absolutely. And I think it does kind of come back to jobs as well. Increasingly there's going to be a lot of anxiety, there is anxiety already, about jobs: what job should I go into? How do I hang on to my job? How am I going to cope if that job disappears? So I think there is a lot of work around that which will become increasingly important. And I don't need to tell you that when people are suffering psychological troubles, it doesn't just come in some kind of box or category you can easily define. It's a whole world situation, and everything that's happening in the political economy, I think, is...

Mick Cooper:

It is, I think, extremely. But when I was reflecting on this question, I did a blog recently about whether you can have relational depth with an AI, and actually where I came to was the position that, like you were saying earlier, maybe CBT and the more rules-based, technique-based therapies are replaceable; there is the potential for AI to do that. But the more humanistic, relationship-based therapies, almost by definition, perhaps have more security, because they're not replaceable by AI in the same way. And maybe, as I was saying earlier, there's something about holding on to that, being able to promote it, being able to really articulate those differences you're talking about: not allowing those human existential, relational qualities to be subsumed or presented by AI.

Mick Cooper:

As, you know, something that can just be created and replicated, when actually there is something very unique there. I guess what we need to do, though, is understand more deeply what human connection is. What is relational depth? What is it, beyond a series of behaviors or laws or processes, that is always going to be beyond the reach of AI? I guess that's quite important academic work as well, and the poetry and philosophy of people like Buber, I think, point towards ways we might be able to do that.

Michael DB Harvey:

Yeah, well, I think there's a huge opportunity, in a sense. In some ways it even relates to what some people in AI are trying to do, because for a while people have been saying that the only safe way to have AI is to imbue AI with human values. Which, okay, sounds quite plausible. But what are those human values exactly? That's one question. And I suppose the other question is: if you came up with a definition of human values, would that necessarily ensure that AI was going to be benevolent rather than malevolent? In other words, some of the most terrible things that go on in the world are committed by humans with particular belief systems which would seem to justify them. Look at the news, look at what Trump's doing, look at this terrible massacre that's happened today in Gaza. These all come under the category of human values. But in terms of having that discussion, which we're not having, it's almost as if the AI people have invited us in to have that role, and it's an ideal role for psychologists generally, I think. But there's also the wider issue of a new political economy, based on a much closer ecological connection and a much greater community connection.

Michael DB Harvey:

And again, I suppose that can come down to issues around jobs, in the sense of how do you protect yourself? People are asking me that question increasingly: are there any jobs which are protected? And I think human-to-human jobs are much better placed. But I will say to people: I think you need to start thinking not necessarily in terms of the content of a job but its form. In other words, what kind of organization do you want to be involved in? Because if you're stuck in any kind of hierarchical organization, I think you're going to be increasingly precarious. There are going to be more and more people at the top saying, how do we cut that job, how do we cut that down? And nowhere is that more true than in supposedly the most prized elite jobs, in companies like Meta or Google, which are constantly cutting jobs.

Michael DB Harvey:

I think we should be thinking about what kind of organization: maybe a much flatter organization, an organization which may be doing all sorts of things, but doing it on the basis of worker ownership, where leadership is co-leadership, and where you actually have the ability to do one of the things human beings are still better at than machines, which is a whole range of different tasks. That's what AGI is supposed to do, the final puzzle it's supposed to solve, and it hasn't yet. We've got lots of AI that does lots of individual things, but ask it to do something else and it doesn't have a clue. So we're still very good at generalizing, still very good at working across a range of things. So, in a way, I think it's trying to get people to think in terms of a different kind of structure to the pyramid structure, which plays completely into the hands of capitalism. What we need to be doing is thinking, not helping capitalism reinvent itself, which of course is exactly what's happening. You have to hand it to it: it is brilliant at coming up with new ways to perpetuate itself, or seem to, and what it's particularly good at, of course, is taking criticisms and liabilities and turning them into assets. So this whole issue of how we stand up to what's going on shouldn't be a question of how we can work the system, which just plays into the hands of big tech; you might be made redundant from it anyway. And the practical issue is that you're going to get a lot of situations where automation comes up.

Michael DB Harvey:

What do trade unions do? If you've got a trade union, what do they ask for? What do they demand? Increasingly it's: oh, we want training, first-class training for other jobs. Well, what jobs exactly? That sounds very plausible, but what job is actually going to be around in five or ten years?

Michael DB Harvey:

Most people say, oh, we'll go into computer science, we'll go into STEM. But everyone's going into that, and the reality is that generative AI is not only very, very good at writing songs, unfortunately, or painting pictures; it's very, very good at writing software. You could be training as a software engineer, thinking this is an absolutely guaranteed job, only to find that, unless you have some real talent in it, machines are going to do it better. So I think you need to be thinking in terms of a different kind of demand, if you're in that position: I want to be trained in organizational skills, I want to be trained in horizontal forms of entrepreneurship, and ideally I'd like the capital for a startup of this kind. You know, I'm not going to give up my job without a fight.

Mick Cooper:

Yeah. Michael, sorry to interrupt you; I'm just aware of the time, and we're coming towards an end. But maybe what you're saying is that there's something there too about the potential for relational skills, and I think our conversation has kept coming back to the importance of that. It's been fascinating, what you've been talking about. There's so much there that we could go on to unpack, and I'm sure people will find it really interesting to listen to and be stimulated by, so do let us know your comments. Michael, thank you so much again, and I'd really encourage people to have a look at your book, The Age of Humachines. It's a really fantastic piece of work, and I think people have heard how much there is to take away from it. So thank you, Michael.

Michael DB Harvey:

Thank you, Mick, and good luck with the TaCS podcast.
