AIAW Podcast

E169 – Homo Roboticus: Behind the Scenes with Alexander Norén

Hyperight Season 11 Episode 10

 In Episode 169 of the AIAW Podcast, we’re joined by Alexander Norén, journalist and economics correspondent at SVT, for a behind-the-scenes conversation on Homo Roboticus—the newly released second season of SVT’s acclaimed documentary series that began with Generation AI. While the series dives into how artificial intelligence is transforming work and society, this episode goes further, tackling the political and economic implications of automation that are often left out of mainstream narratives. From Nobel laureates to tech CEOs, Alexander reflects on the conflicting visions for AI’s role in our future: will it liberate us from drudgery, or quietly reshape power and inequality? A candid and provocative discussion about jobs, basic income, AI anxiety—and the uncertain future we’re all stepping into. 

Follow us on YouTube: https://www.youtube.com/@aiawpodcast

SPEAKER_01:

A lot of promos today, Alexander. So, which promos have you gone through today?

SPEAKER_02:

Yeah, well, I'm coming directly from Rix FM, who were interested in the robot series.

SPEAKER_01:

Running down and sitting down in the chair in exactly a minute.

SPEAKER_02:

I got a coaxer there, but I really appreciate your alternative here.

SPEAKER_01:

Hold that thought. But okay, so Rix FM, and what else did you do today?

SPEAKER_02:

This morning I was up early to join the morning show, the breakfast show on SVT, where we broadcast this, to talk about the series Homo Roboticus as well. Yeah, and then also P1, I guess: Studio Ett, the afternoon current affairs show.

SPEAKER_01:

Yeah, and Anders asked the question before, but I'll steal it now: which one was your favorite today?

SPEAKER_02:

Well, actually, I think the one I just came from, Rix FM, because the questions were very close to what people might generally be thinking: oh, is AI dangerous, or are robots going to take our jobs? These straightforward questions that I think preoccupy a lot of people.

SPEAKER_01:

And what was the format? You said something like three times three minutes.

SPEAKER_02:

Yeah, there's a lot of music; it's this general adult contemporary music radio channel. So some music, then you talk for three minutes, then some music, and you talk for three minutes. So it's a nine-minute interview? Yeah, more or less.

SPEAKER_00:

Could you give a highlight of a question you got that you really appreciated?

SPEAKER_02:

Well, the last question was good. It was: okay, you've done all these series about AI, and when I watch them, I kind of get scared, because what's ahead? Is it going to get so intelligent that it takes over everything and we're doomed? That's a very straightforward question. And I said, okay, you know what? Tomorrow's Friday and the weekend is coming; I think we have time for some fredagsmys (Friday coziness) before that and have a nice time. You still have some years to go. And that's even if it ever materializes, because there are some really clever experts who are doomsayers, and some really clever experts who say that's bullshit. And who am I to choose which one to believe? I would probably just say those are outliers at the edges of the spectrum. And I said so. I think it's probably somewhere in between; that's a boring answer. But the more we know about it, the more we follow the development, the more we can be in a position to steer that development. Because if you just leave it to Silicon Valley's tech bros, if they're left to their own devices, maybe they will not steer it in a way that is good for the many, for ordinary people. So we have to have an awareness to embrace the possibilities as well as steer away from crashing in the ditch.

SPEAKER_00:

But there's a famous saying there, right? If you want to predict the future, the best way to do it is to build it.

SPEAKER_01:

Yeah, but then you're on to it. Then you know where it's heading. And of course, I need to do a shameless plug. We've been asking the spectrum question, "where on the spectrum do you fall?", in the last hundred episodes, I guess. So we have a good research sample of people answering it. We should do an AI analysis on it: how do people respond, from dystopian to utopian, and how do they think about it? And you gave one of the textbook answers, I think. The other answer that comes up more and more, and that I kind of like, is that we will have both. In the real world, some parts will be utopia, but there will also be bad things along with the good. So it's not either-or.

SPEAKER_02:

Yeah, and for me it's very much a question of time frame as well. Because what I focus on in this series is how it will play out on the labor market. How will it change how we work and what jobs are available for humans? I'm pretty convinced that in a transition phase there will be losers, and probably more losers than winners. And then tomorrow, whenever that is, we'll have new jobs that we can't imagine today. And depending on how the riches, the profits that come from the productivity gains of AI, are shared, maybe that will be a future that's good for many people, or not so many. And that depends on the decisions we take today.

SPEAKER_01:

But I think this is the perfect segue for properly welcoming Alexander Norén to this very special podcast episode on the 13th of November, where we have actually poured ourselves champagne, because we are celebrating. And what we are celebrating is, of course, that today Alexander, together with SVT, launched the new episodes on AI at SVT Play: Homo Roboticus. And we're going to go down all the rabbit holes of that show. And this is the segue to say that the first episode will be broadcast tonight at 10:30.

SPEAKER_02:

Yeah, it will be at 22:30 on SVT1. Yeah.

SPEAKER_01:

So, first of all, let's have a big cheers.

SPEAKER_02:

Cheers. Wonderful. Wonderful.

SPEAKER_00:

Yeah, how long have you been working on this?

SPEAKER_02:

Good question. When I do these things, I kind of collect stuff all along. I do interviews here and there whenever I get an opportunity. For instance, when Nvidia CEO Jensen Huang was here in Sweden a couple of months ago: oh, he's here, how can I get hold of him? How can I get an interview? And we were the only ones who got an interview, so I'm pretty happy about that. And then I just sit on it. I take out the little piece with a news angle that fits the exact moment, and the rest of the 13 minutes I keep for later, for instance for this. And then I have these bits and pieces that I amass over half a year or so, because they can't get too old. And then, when I have enough, I have to finish it. I need the remaining pieces of the puzzle, so I go out and hunt them, and I ask some boss: can I get some money to do that, some time to do that? And since I've already done half of the job, it's difficult to say no, because I've done that on extra time, my own time or something like that. Then it usually takes about a month to put the pieces together.

SPEAKER_01:

And for everybody who doesn't know Alexander Norén, since we are talking about you in context: you've worked how many years at SVT now, Swedish Television? Twenty-something. Twenty-something.

SPEAKER_02:

20-something, which means that I'm not 20-something. No, I just turned 50. You turned 50. And I don't have a problem with that at all.

SPEAKER_01:

No, but what is your background? I mean, you've been a reporter, a journalist, you've been on the morning sofa for years.

SPEAKER_02:

Yeah, I've been a program host for many years. And lately I've gone from being the anchor of the evening news or the morning show to focusing on, kind of, returning to my roots. I'm not trained as a journalist; I didn't study journalism. I went to business school, so I'm an economist. And then I went back to my roots, and for a couple of years I was the tech correspondent. That's right. Because what's happening in tech has an economic perspective to it, with a clear angle on how it is changing society and jobs, and how riches are being shared, or not. And also, to understand it, you have to have an understanding of business, of business models: why are companies doing what they're doing? Why are they pouring billions into certain things and not into others? And that moved me over to focusing entirely on being a business correspondent, which I've been for the last couple of years, but still with the mandate, or even the wish, from my bosses to continue covering tech. We need someone who has that. Yeah. So that's kind of my niche. So now I title myself Senior Business and Tech Correspondent at SVT. Yeah. And you did the first, was it four episodes? Actually, it's six episodes, about two years ago: Generation AI.

SPEAKER_01:

So Generation AI is already two years old. Oh my god, you were here then, and we talked about that show. And I remember when we turned off the camera, we were sitting here: how should you pitch the next season? It should come in six months, remember?

SPEAKER_02:

And actually we wanted to do season two, but we didn't get a green light for a season two or something like that. So instead I pitched episodes. The next one was Fake Factory, focusing on how AI is used in disinformation and information wars. And that came out just before the American presidential election. That's right. Good timing. Yeah, that's why I pitched it that way. And then the third one was AI War: such a huge chapter that we felt we couldn't do just a single little 15-minute episode on it. So Marco, my wingman here, went to Ukraine and to the front to see how they are using drones, for instance. And then we also looked at how the Swedish defense forces and the Swedish military industry are using it. That came out about half a year ago, AI War. And now, Homo Roboticus.

SPEAKER_01:

And let me ask: put us as a fly on the wall when you pitched Homo Roboticus to your bosses. How did that pitch go?

SPEAKER_02:

That pitch was: okay, we've got two big trends in AI now. We have physical AI, these humanoid robots that are on the verge of taking their place on the factory floor, or might even take a place in your home. And then we have the other big buzzword, AI agents, and there's a connection between the two. They are probably the best representations of how AI can transform the labor market. So it's a pretty important thing to cover. Let's do it.

SPEAKER_00:

We actually had the pleasure of reviewing the episodes in advance. Well done, and really well produced. I'm really impressed with the quality of the production. So I recommend anyone to watch it later tonight, or later on SVT Play. But perhaps you can share a bit about how you actually make this kind of production?

SPEAKER_02:

You said basically you're collecting material along the way, but also, since it's TV, you can't just have talking heads. Yeah, you've got to show and tell. So one important thing was to find a humanoid robot that's actually in production, or at least in a pilot phase.

SPEAKER_00:

Because you have one episode about robots and another one about agents.

SPEAKER_02:

Exactly. The first episode is called The Robotic Age, and the second one is called Agent AI. The case for the robot we found in Zurich, where the Swedish, or Swedish international, company Hexagon has one called AEON that's supposed to be deployed in, for instance, car factories. We talked to Arnaud Robert, the head of robotics there, about the limitations, the possibilities, and why they even do this. What's the market for this? What are the driving forces? And then for the second episode, about agentic AI, which is so abstract: how do you make TV out of that? I felt we had to have a case where someone who experiences some sort of problem at the office gets help from an agent, and then explore what that agent can and can't do. And I looked inwards, also for production reasons, to find someone close by, just walking down the stairs from one level at SVT to another. Sara, who is a production planner, you could say, for the photographers at SVT, felt that if there's one thing in her work she doesn't want to do, it's the hassle of changing the job planning. For instance, photographer A is working 8 to 17 today, and tomorrow he's working a morning and then an evening. And then he calls in sick. How should I replace him? Well, I can't replace him with just anyone. I have to replace him with someone who didn't work late last night and hasn't expressed a wish not to work early mornings for some reason. And then she just has to call around and check who is available, and so on. She wants to do more than extinguish small fires like that. She wants to work more long term and do planned jobs, and think about which photographer is best suited to go to the Faroe Islands for two weeks, who has the knowledge for filming this kind of thing, and talk to the reporter, etc. Okay, so this is a case for an agent. And we took the help of Niels Jance, who is an agent expert, and they worked together with Sara and gave her a sort of agent platform, so that she could train the agent herself. But it required a lot of tweaking. I mean, it wasn't an off-the-shelf product that you could just plug in.

SPEAKER_00:

Yeah, no, it didn't work the first time, right?

SPEAKER_02:

Yeah, no. She had to iterate. She'd see that it gave a strange answer, and then she had to give it feedback and say: okay, you can't make that kind of decision, you can't pick that kind of photographer.
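To make concrete the kind of rules Sara ended up teaching the agent, here is a minimal sketch in Python. Everything in it (the rest-hours rule, the names, the numbers) is invented for illustration; it is not SVT's actual system.

```python
# Hypothetical sketch of the stand-in rules described above: who may replace
# a sick photographer, given rest rules and personal preferences.
from dataclasses import dataclass

@dataclass
class Photographer:
    name: str
    last_shift_end_hour: int          # hour yesterday's shift ended, e.g. 23
    avoids_early_mornings: bool = False

def eligible_stand_ins(candidates, shift_start_hour, min_rest_hours=11):
    """Keep only candidates who satisfy the rules fed back to the agent."""
    ok = []
    for p in candidates:
        rest = (24 - p.last_shift_end_hour) + shift_start_hour
        if rest < min_rest_hours:
            continue                   # worked too late last night
        if shift_start_hour < 9 and p.avoids_early_mornings:
            continue                   # has asked not to be scheduled early
        ok.append(p)
    return ok

crew = [
    Photographer("A", last_shift_end_hour=23),
    Photographer("B", last_shift_end_hour=17, avoids_early_mornings=True),
    Photographer("C", last_shift_end_hour=16),
]
print([p.name for p in eligible_stand_ins(crew, shift_start_hour=7)])  # -> ['C']
```

The point of the sketch is that each of Sara's feedback rounds effectively added or sharpened one such rule, which is why the agent improved iteratively rather than working out of the box.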

SPEAKER_01:

So it was a process. And I need to link here: for anyone who's curious about what Alexander is talking about now, we happened to have Henrik Kniberg from Abundly here on the pod, where we dissected this working process for two hours. Cool. So basically, the way Henrik would describe what you said now is: think of it as an intern that you need to coach and give feedback. You give it a small task, and then, instead of doing something technically elaborate, you give it feedback on how it should work and what doesn't work. And over time you build the agent to work as you want it to, like an intern that has no clue when it starts.

SPEAKER_02:

Yeah, and Henrik, you can see it in the episode. He stands next to Sara and coaches her on how to give feedback to this intern so that it gets smarter and doesn't make the same mistake again. Yeah. And that was quite telling, because I've come to understand it as a process. Unless these agents are very simple, like "book a meeting, look at my Gmail calendar and send an email", which could be an off-the-shelf thing, and for all I know is probably in Copilot, you obviously still need to tweak things in a process.

SPEAKER_00:

Yeah, yeah. I got into a discussion today at some conference, and a person came up to me, and she was really strong-minded in saying that AI doesn't work. Okay, and I asked why. "I tried to ask this question and it failed." Okay, so what was the question? And she told me and said: no, you can see it doesn't work. Yeah, but could it be that you don't actually understand how AI works, and that it doesn't have access to this information? Because if it doesn't have access to it, then of course it doesn't work. "No, how could AI not understand this simple question?" But no, the problem is that so many people don't really understand what AI is good at and what it is bad at. So they do one-off experiments, get put off, and then say: hey, it doesn't work.

SPEAKER_02:

Maybe they did that six months ago, and the same tool now gives another output.

SPEAKER_00:

But I guess it was the same for you. I mean, you had to iterate a number of times; it didn't work the first time, and you had to understand and iterate and come up with a proper way to do it.

SPEAKER_02:

Yeah, and I'm coming to realize that we're in a phase where getting AI to work in a good way, to its full potential, needs some work. Yes, it takes some time. It's not just plug and play.

SPEAKER_01:

But if we take one step back, this is so obvious, and it has very little to do with the technology. It's about how messy human life is and how messy our workflows are. Think about how much is really a detailed process, how much sits in systems, and how much is tacit knowledge that sits in our heads. If you want a machine to go in and work next to you in a human environment, it is not limited to one pure software instruction; it has to do work that is messy. When you look into that, of course there's a bunch of feedback loops going on that we might not even think about, because we handle them on the phone, like what you described Sara doing. And for me it becomes obvious that you are building a system when you're putting together an agent, because you need to make sure: do I have the right information, do I have the right feedback loops, all of this. And of course it is a process to get anyone into a job, whether it's a human or a machine. So it's not rocket science. If you step back, it's clear why this is a process. It needs to be.

SPEAKER_02:

And I'm thinking you need to have that agent or robot next to you for quite a while, to give it time. Yes. To see the world like you see it. Yes. To see what you usually do. For instance, me as a journalist: if I am to pair myself with a good AI, it has to learn what I prioritize, what my voice is, what kind of feedback I want, and not be too soft on me, because when I ask for feedback on something, I want it to be frank. So, to train it. Yeah, exactly. So I think one key to success is to let it in. Yeah. And let it learn: what are you doing, who are you, what is your context?

SPEAKER_01:

So as a coworker, there is an onboarding and learning process, human or machine. Yeah, and then: what's the culture here, for instance? Exactly. But maybe let's go back to the fundamentals a little. Did you have any specific ideas of rabbit holes (a favorite word here) that you wanted to cover, or did they emerge through the work process? And what were they?

SPEAKER_02:

Well, one thing is I wanted to understand what an agent really is. Is it just a buzzword? Is it something fundamentally different from the chatbots and the generative AI that we've seen? Is it just a chatbot deluxe, or is it something else? So that was one thing: to decompose what it is.

SPEAKER_00:

Let's get back to that because that's an interesting question.

SPEAKER_02:

And my conclusion is that it's not binary. It's not like here you have not-an-agent and here you have an agent. But we're always helped by metaphors, so why not? That was one thing. And the other one was obviously: okay, all these cool videos of humanoid robots kickboxing, being kickboxed and standing up, and dancing. How much of that is hype and fakery, and how much is reality? Perfect.

SPEAKER_00:

I would love to actually go into those directly.

SPEAKER_01:

I think this is a good way of painting the picture: these were the two simple questions we could make TV out of. These are fundamental questions.

SPEAKER_00:

Okay, so Alexander, what is your preferred definition of an agent, or what did you hear other people prefer?

SPEAKER_02:

Well, they all say it's something that has access to different tools and that can act more or less autonomously. That's a fundamental difference from a chatbot. You ask a chatbot something, maybe you give it a sequence of questions or instructions, and then maybe it delivers something, but it quickly gets messy. So that's more reactive, and an agent is more, I don't know if proactive is the word, but it takes initiatives. Yeah, exactly. So the intern metaphor is helpful, and it's not black and white: something becomes more agentic the more you give it access to tools, and the more you design it and trust it to be autonomous.
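As a sketch of that definition, tools plus bounded autonomy, here is a toy agent loop in Python. call_model() is a hypothetical stand-in for any LLM API, and the two tools are invented; the point is only the loop structure: the model picks a tool, the runtime executes it, the result is fed back, and the loop ends when the model answers.

```python
# Toy agent loop illustrating "access to tools + bounded autonomy".
def lookup_schedule(person):
    return f"{person} works 8-17 today"        # toy tool

def send_message(person, text):
    return f"sent to {person}: {text}"         # toy tool

TOOLS = {"lookup_schedule": lookup_schedule, "send_message": send_message}

def call_model(history):
    """Hypothetical model call: returns a tool request or a final answer."""
    if not any(m["role"] == "tool" for m in history):
        return {"tool": "lookup_schedule", "args": {"person": "A"}}
    return {"final": "A is on shift until 17, no replacement needed."}

def run_agent(task, max_steps=5):
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):                 # autonomy, but bounded
        decision = call_model(history)
        if "final" in decision:
            return decision["final"]
        result = TOOLS[decision["tool"]](**decision["args"])
        history.append({"role": "tool", "content": result})
    return "gave up"

print(run_agent("Check if photographer A needs a stand-in"))
```

Making the loop longer, the toolset richer, and the stopping rule looser is exactly what "more agentic" means on the spectrum described here.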

SPEAKER_01:

Like an intern turning into a full employee, turning into a senior employee. Yeah. In some ways you start with the basic stuff: in the beginning you give a task to an intern, she does the task and asks what to do next, and then you make the tasks longer and make the decision-making frame bigger.

SPEAKER_02:

Exactly. And the other buzzword, or word, that comes back all the time is routine tasks and boring tasks. Routine, because then you have some sort of template that you usually use, a mental template or something standardized that you use to process data. The more routine there is in a task, the easier it perhaps is to use an agent for it. And the other one is boring, because then there's low-hanging fruit for someone wanting to implement it, and it can also free people up.

SPEAKER_00:

Could I actually challenge that a bit? Because some time ago, before LLMs and ChatGPT, there was a very famous saying that AI would be really good at three-minute tasks, these kinds of routine tasks. And it was said that the thing it could never do, or not for a very long time, was more creative tasks. But in some ways it has turned out to be the opposite. It can actually be very creative, and it can do things that white-collar jobs involve rather than blue-collar jobs. Especially when you come to robots and physical work, that's certainly something that is far from happening. So in some sense, routine tasks, at least physical ones, but I would argue even in the digital space, are not necessarily the best use of agents. Would you agree?

SPEAKER_02:

Well, if I answered yes to that question, I would start devaluing myself and getting rid of myself at work. So I don't want to say yes. But I do agree; I'm on the side of those who say: yeah, it can be creative, it can do a lot of creative stuff. So I would probably agree with you.

SPEAKER_01:

But let me try to nuance that, using the metaphor, or the way Henrik Kniberg explained it, on the pod. He basically draws a spectrum; he painted the spectrum sitting in your chair. On one side, if you have a very concrete, repetitive task that is very procedural, you should probably just code it the traditional way. You should codify it, have a business rule, and get on with it. We would have called that RPA ten years ago; we've done it in different ways. On the other side of the spectrum, in an enterprise, is the absolutely messy reality where no one but a human can survive. It's a little bit like: code is efficient and really fast, while the human is messy and slow but can deal with complete chaos. And then he said it like this: in a large enterprise we have a lot of jobs that sit right in between. They can't really be coded deterministically anymore, because there are too many feedback loops, like the ones Sara deals with. But at the same time, they're not chaotic enough that we really need a human. And that's a fantastic use case for agents.

SPEAKER_02:

What do you say about that? I think it speaks very much to Sara's case: the part of her job that she didn't really want to do, that was time-consuming, that needed her to contact people, to check their availability, to check the manual. What are the rules for working hours? What hours can I even call that person? So yeah, I think it speaks to that.

SPEAKER_01:

Because this speaks to the work that is not as easy as we think, but at the same time not as valuable as what we want to put our brains on.

SPEAKER_00:

Could it be that there are better ways to categorize what AI and agents are good at compared to humans? Instead of routine versus creative, I think it could be the knowledge-management capability. Humans are horribly bad at some things, especially memorizing. If you read a book, it will be really hard, at least for me, to quickly recall most of its content. But if you take an agent today, or even an LLM, you can put ten books in the prompt of Gemini and it can recall more or less perfectly any content on any page of those ten books. Which human could ever do that? So another way to phrase the type of task would simply be: if there's a lot of knowledge-management need in it, then AI can probably do it well. Otherwise, if it's more about reasoning and agentic capabilities, I would argue that humans can still do it better.
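A rough back-of-the-envelope check of the "ten books in one prompt" claim, with all figures assumed:

```python
# All numbers are assumptions, not measurements.
words_per_book = 90_000      # a typical non-fiction book length
tokens_per_word = 1.3        # common rule of thumb for English text
books = 10
total_tokens = int(books * words_per_book * tokens_per_word)
print(f"{total_tokens:,} tokens")   # ~1,170,000: around the million-token
                                    # context windows advertised for the
                                    # largest current models
```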

SPEAKER_02:

So in what cases do you mean that humans are better at doing it?

SPEAKER_00:

I can go into a rabbit hole here. This is a rabbit hole. It's good. I usually use the OpenAI AGI pyramid. They have this pyramid with five levels to reach AGI. The first one is knowledge management, what they call conversational: the ability to handle a large amount of text, or data in general. Then they have reasoning, then autonomy, then innovation, and at the top of the pyramid, organizational. With these levels you can easily see, I think, that even today AI is much better than humans at level one. But at all the other four levels, I would argue that humans are better than AI. For reasoning, we can see some levels of it, yes. But if you go any deeper into a more advanced task, like your work at SVT producing a series like this, you significantly outperform an AI today, I would argue. The same goes for agentic tasks, because the only way an AI can really take actions today is through pre-programmed APIs or MCP servers written for it. It can't use the normal interfaces that we humans do, because it's too stupid today to do that. So it's very inflexible, and the range of action-taking capabilities it can have is actually very, very poor, I would argue. That will change in the coming years, but I still think this is a rather good metaphor. With these five levels, we see AI is good at level one, but humans are better at the higher levels. So humans will move up this pyramid as the years pass, and AI will keep trying to catch up, but today it's very clear that AI is only better at level one.

SPEAKER_01:

And you can use that pyramid to start dissecting which types of tasks are agent-ready. Okay, a lot of knowledge management in this one, but the reasoning is not that hard; it's about booking a photographer. We understand the reasoning, and the reasoning constraints are manageable. So we start there. And as we get more mature in how we build scaffolding around this, it will just go further and further.

SPEAKER_02:

You could apply it to how we produced the series. Yes, most likely. You have something like 20 hours of interviews with 15 people. In the bad old days, I had to listen through all these interviews, typing down what they said. Now, obviously, the first step is to transcribe them, thanks to AI. And then I have a PDF with timestamps and everything. Very practical. And then with a tool like NotebookLM, I can just put them in and talk to the material. And this is how I use it, so that's knowledge management: I remember someone saying something about the limitations of AI agents. Now I'm at the part where I want to talk about the drawbacks. What quotations do we have on that, and who says it? Perfect. And then in one second: okay, this guy said that, and this person said that. Maybe this is the best one-liner you'd like to recall. And that speeds up my work very much. That's the extra brain that has the memory: knowledge management. And then I ask myself: okay, could I use AI also to suggest how I'm going to put the piece together? How am I going to structure this documentary? I haven't actually pushed it to its limit and tried that yet. Maybe because I don't want to leave it to AI, because this is so fun to do. I want to swim in this sea myself. I think that's why I haven't even tested how good or bad it is at it. But I do wonder.
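As a toy illustration of that knowledge-management step, here are a few lines of Python that search timestamped transcript segments for quotations on a topic. The segments and quotes are invented for the sketch, and NotebookLM's real retrieval is of course far more sophisticated than keyword overlap.

```python
# Toy retrieval over timestamped transcript segments (all content invented).
segments = [
    ("00:12:40", "Speaker A", "Dexterity is still the hard part for humanoids."),
    ("00:31:05", "Speaker B", "One limitation of agents is that they lack context."),
    ("01:02:18", "Speaker C", "Treat the agent like an intern and give feedback."),
]

def find_quotes(query, top_k=2):
    """Score each segment by word overlap with the query; return the best."""
    terms = set(query.lower().split())
    scored = []
    for ts, speaker, text in segments:
        overlap = len(terms & set(text.lower().split()))
        if overlap:
            scored.append((overlap, ts, speaker, text))
    return sorted(scored, reverse=True)[:top_k]

for _, ts, speaker, text in find_quotes("limitation of agents"):
    print(f"{ts} {speaker}: {text}")
```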

SPEAKER_01:

But this is where the spectrum and the gray zone we talked about, between an assistant and agentic, is, I think, the best way to treat it. You start with it as an assistant, and it does the work you ask it for. Step one. And then, when you've mastered that level together with it, you seamlessly move up an abstraction level in what you think is interesting and what you can hand over next. I think that's a very healthy way. Look at how they implemented it at Regeringskansliet, the Government Offices: they have all these civil servants working on very specific reports and the like, and they need accountability. So the agents only go out and fetch things for the one person, who then has full control the whole way. But you can see that when you're used to that, when you've done it for a year, you start drip-feeding in more: do that as well, and actually come with a recommendation here. And then you're on your way. I think it's the safest, smartest way to do it, anyway.

SPEAKER_02:

Something I'm thinking a lot about is how not to become too lazy. If I rely too much on it, if I take the shortcut of saying "give me feedback", then it rewrites something and I just accept it, that's being lazy. Or: give me a good idea for a topic for a podcast on AI agents, write the manuscript, this is the cast. Well, it could be fair. It could be.

SPEAKER_00:

No, you touched a sore point because we actually use agents for that specifically.

SPEAKER_02:

But I would use it as a starting point. If I have a brain freeze or if I'm in a hurry, it at least gives me a good starting point. Exactly. Yeah, yeah.

SPEAKER_00:

And that's how I use it. One way I usually talk about this: if you take a use case where AI actually works really well today, it's AI for coding, software engineering. In that use case it works not only at the knowledge level, where it can read all the source code and the documentation very quickly and efficiently, but it can actually reason rather well, and it can even take actions autonomously rather well. So in that case it works better than in most other use cases, at least yet, across these bottom three layers. But still, when I do that as a coder, I become a kind of agent manager. I see it providing a recommendation; I want to change it this way, I want to add this kind of code, I want to run this test. And no, no, that's wrong. I still have the overview in a way that the AI does not. So it becomes really clear, to me at least, that AI for coding is like a window to the future, where we can see how we as humans start to use agents in more of a manager kind of role, for producing podcasts or other tasks, more and more, the way it already works for coding. So I think that's a fun example of how we as humans start to use agents more, at levels one to three, while we still keep the overview.

SPEAKER_02:

But then that means you still have to have the basic knowledge yourself. You have to understand coding in order to have an overview.

SPEAKER_00:

You have to understand the general concepts, but you don't need to know exactly how a TypeScript block is written, or the specific details. The specific details AI does really well. But the general constructs, and how you build on top of them... but let's pause on that reflection.

SPEAKER_01:

I think it's profoundly important that we understand where AI is right now. Because the logic we're talking about here is: ah, we will simply move up the abstraction level, and we will have AIs agentically doing things at the lower abstraction levels that we are too bored with, or don't want to manage. But, and this is also from the pod: yes, we might get there in ten years, or five, but where we are today, doing this safely probably requires a human in the loop on the actual work being done. You cannot just drop it in and let it run; you need to oversee it. So there is a quality-assurance aspect, which means that while we are in one way moving up an abstraction level in our own tasks, we are still doing quality assurance at the lower levels. And that in turn means you need domain expertise in whatever you're putting the AI on. Journalism, coding. And I think that's the truth right now.

SPEAKER_02:

So as it gets better and better, you might need fewer people actually doing the grunt work, but you still need some people who know how it's done, so that you can check it.

SPEAKER_00:

But I don't like that phrasing, because you said we'd need fewer people. You can twist it around and say we can do more things. Yeah, sure. Right? Then you still need the same number of people; you just do 10x more things.

SPEAKER_02:

Exactly, right? And you are so right, Anders. That's also my conclusion: it can go both ways. You can use it as a tool to need fewer people to do the same stuff, or you can do more stuff.

unknown:

Yeah.

SPEAKER_02:

And I think the interesting question, when it comes to economics and the future of labor, is how it will be deployed. Who decides that, and what factors decide how it will be deployed? I think it matters whether this AI revolution comes in a time of economic growth or in a time of recession. That could tilt it either way.

SPEAKER_01:

But I think we have a couple of big rabbit holes that we still need to cover; we need to cover the robotics part as well. Okay, so we're parking this, because this is about how we understand the future labor market. And I have a really big rant on the replacement rhetoric and what the augmentation rhetoric means in reality. We need to talk about that. But you're on to it now.

SPEAKER_02:

Start ranting. No, no, no. No, we won't talk about that yet.

SPEAKER_01:

But we parked that for a minute, and we've already gone deep; oh, we have so much to talk about on agents. Now let's do the same thing for robotics, and then we'll go down a couple of rabbit holes. So let's just flip it: we ended up hardcore in the agentic second episode. Let's do the same thing, priming and putting up the cliffhangers for the robotics episode. What are some highlights from the robotics episode?

SPEAKER_02:

Well, one highlight is when I talked to KTH professor Danica Kragić, and I gave her one of those good questions that I was asked today. I asked the question that I think many people ask themselves: will robots take over? Yeah, in a Terminator kind of way. And the answer was: no, I don't think so. I think we'll integrate more and more. And that opened up an alley that I hadn't really thought much about. And that alley has Neuralink walking down the street, of course.

unknown:

Yes.

SPEAKER_02:

And what does that imply for us? I mean, this merging of human and artificial intelligence, that I think is super interesting. That's also why we called the series Homo Roboticus, as if it were an evolutionary stage in human history: Homo sapiens into Homo roboticus, more or less. And we also came to talk about what it means, how it will affect us, to have a robot in our homes. Once they are capable of being more useful than the effort of training them, and we'll get there, how will we regard them? Will we see them as a family member? And if it's a butler, who is it going to be loyal to when we have an argument in the family? For instance, one thing I argue with my wife about all the time is how to put the dishes in the dishwasher. I obviously do it the wrong way, you know. So now, whose side does the butler take? How should the butler do it? Who is he or she loyal to? And it's also a 24/7 surveillance camera, even if they tell us, and we believe it, that everything is stored locally.

SPEAKER_01:

Can we trust it?

SPEAKER_02:

Can we trust it? Well, maybe. And even if we trust it, will we change our way of behaving? Will we still have fights and arguments, calling each other bad things over the dinner table, when someone is watching?

unknown:

Exactly.

SPEAKER_02:

So that awoke some thoughts.

SPEAKER_01:

But did you follow some of those thoughts through? Did you start forming your own point of view on some of this?

SPEAKER_02:

Well, on the one side, as I said at the beginning of this podcast, I think that for them to be really useful, they have to be our 24/7 companions, or at least 18/24 companions, when we are awake. Yeah, and preferably we sleep more than six hours. They need to learn who we are, what our preferences are, and what our context is. On the other hand, that means they are actually watching everything. And, well, I haven't reached a conclusion on whether I think that is creepy or fantastic.

SPEAKER_00:

It could be both, right? Isn't that the famous philosophical argument about the Panopticon, I think it's called: basically an experiment, a research experiment, to see how people change when they know they're being watched versus when they're not. And of course they do change their behavior. But wouldn't you still say that we are surveilled all the time anyway? I mean, we use Google, we use Facebook, we use Instagram and so many more, and they already have data about every move we make. People even think Facebook is listening to them.

SPEAKER_02:

Listening to each other, yeah, because they get this perfect ad that couldn't have been a coincidence.

SPEAKER_01:

I have an anecdote from this weekend that was so weird. I mean, it happens all the time, right?

SPEAKER_02:

Yeah, so I get your point, Anders. But I think it's an extra level to have someone that actually listens to and sees everything you do.

SPEAKER_01:

I think the problem is also that even if we know this, even those of us working on it, we think they are doing it, but it just doesn't feel that way. I don't have that relationship to my computer when it just sits there.

SPEAKER_02:

So do you want the robot to tell you: haven't you had one glass of wine too many this evening? It's a good one.

SPEAKER_01:

Would you want your robot to say that or not? No.

SPEAKER_02:

Or: now you're late for work again. Yeah. May I make the humble suggestion that you set your wake-up call maybe ten minutes earlier? Where's the hammer?

SPEAKER_00:

Well, those are kind of nice. I mean, would you have the robot staying in your bedroom together with your wife?

SPEAKER_01:

No, they are nice, but at the same time: honey, why is he standing there?

SPEAKER_02:

Well, I'm thinking about improving our love life.

SPEAKER_01:

He won't interfere; he's there to give me feedback afterwards. Okay. But a more important question to me is actually: did you have any conversations about where this takes off first, in industry or at home? What type of robot comes first? Is there an obvious answer to that question?

SPEAKER_02:

One realization was that it's super difficult to get a humanoid robot to actually behave like a human. The first step was to teach them how to walk without stumbling. The next big thing was dexterity, which we're excellent at, but where they have had a steep learning curve. And now it seems the big challenge is to understand how the physical world works. Putting dishes into a dishwasher comes naturally to us. But if the glasses look different, say this is a champagne glass, we understand it's not as wide, so you can't place it the same way as a big drinking glass. But the robot maybe thinks: look, this is a glass, I'll just do the same thing. Oops, it broke. Understanding these things, as I understand it, is where we are now. They still have to learn a lot.

SPEAKER_00:

But if you were to compare where we are with robots versus agents, how would you say they perform in terms of usability today, the physical versus the digital kind of AI use cases?

SPEAKER_02:

Then I think agents are closer to being useful in daily life, at least if you get some help or figure out how to tweak them so that they actually work. So that's more a matter of managing the technology so it works to its full potential.

SPEAKER_00:

And isn't it surprisingly hard to do these things that humans find very easy, like walking? I'm not sure. Did you see the Russian robot demonstration?

SPEAKER_02:

Yeah, we should talk about that one. Russia is presenting its humanoid robot, and in the video it falls off the stage like a drunkard. He's obviously drunk, it's a Russian robot. And it wasn't working very well; it was rough from the start. My first thought was: that must be an AI-made fake video to discredit Russian technological capability. Do you think it was? I'm just so skeptical nowadays. It was almost too funny. Yeah. Exactly. Almost too good to be true. But I haven't fact-checked it, so I don't know. Maybe it's true. Or maybe not. I don't know.

SPEAKER_00:

But I think it's interesting here, because it goes back a bit to what we spoke about before: some things AI is really good at, humans are actually bad at, and vice versa. That's Moravec's paradox, basically: things that humans do well are actually very hard for AI.

SPEAKER_02:

And just moving about is very easy for a human but super hard for a robot. Don't they say that what's easy for us is difficult for a robot, and what's difficult for us is easy for a robot? Yes, exactly.

SPEAKER_01:

What do you call that?

SPEAKER_00:

That's the paradox. Moravec's paradox, yeah. But okay, we are seeing a lot of improvements, of course. In China they are moving really, really fast. And I saw this video from Unitree, where the robot did kickboxing, and it was amazing, right?

SPEAKER_02:

Yeah, and what I'm thinking when I see those videos is: how much is actually happening autonomously, as it's supposed to, and how much is a guy with a remote outside the frame, or with a VR headset and gloves, actually doing this? I don't know. Maybe I'm too skeptical.

SPEAKER_01:

But there's a continuum here: at one end, someone actually at the remote control; then the traditional robotics approach, Boston Dynamics, which is quite procedural, programmed, really advanced programming, the way an industrial robot is programmed today; and now we're getting into deep-neural-network-driven all the way through, actually learning how to walk the same way we learn how to walk, learning how to handle the glass by doing. And I think this is also very tricky when you look at a video: fundamentally different techniques can be behind how they reached something very cool.

SPEAKER_00:

But where do you think we are then, given that you have spoken to a lot of people at companies that do robotics really well? How much is really teleoperated, just demos, versus how much are robots and humanoids actually getting rather intelligent?

SPEAKER_02:

I think the most breathtaking videos you see of humanoid robots, folding shirts and serving you coffee in bed, etc., are probably the best takes out of a bunch of takes. And some of these Chinese robots dancing and kickboxing, well, I don't know. I have seen some cases where you can actually see a guy with a remote doing it, and some cases that are more convincing, where they really are doing it on their own. But then again, these videos are so well edited. Let's say they do it autonomously, but it was on the tenth take that they nailed it. That's my impression.

SPEAKER_00:

Who do you consider to be the leading humanoid robot manufacturer today? Is it the US, China, or some specific company?

SPEAKER_02:

Well, one easy answer is to say that I think China has a good position here, at least when it comes to the number of companies that are pretty good at this. In the US you have Tesla, you have Boston Dynamics. Here in Europe, well, we have some. We've mentioned two: 1X in Norway, with the butler robot, and then the one I met, Hexagon's industrial humanoid robot. And talking to them, it was clear that there is a lot of work to do to make them as autonomous as you would like. They need this training. But it was interesting: their way was to train it at work, showing it a human performing the task, with gloves and a VR headset, showing the robot: this is how you lift a champagne glass, and this is how you take it to your mouth. And if you do that a couple of times, and of course virtually a million times, then maybe it will drink a sip of champagne in a natural way in the end.

SPEAKER_01:

But maybe let's take a short breath and define what we're talking about here as humanoid robots, and you can reflect on that. Because of course we have seen different styles of industrial robots for many years, and we have seen the robots in the Amazon warehouses, which are amazing at doing different things. But what's the distinction now? Is it that they look humanoid, or that they are robots designed to go into a human workflow without customization?

SPEAKER_02:

Exactly. They're not making them human-like just to scare the shit out of us, or to amuse us, but because they should be able to work in a human environment: a factory that was built for humans, a home that is built for humans, with tables at a certain height and cupboards at a certain height. So you have to have an arm that is approximately this long in order to be able to function in human life.

SPEAKER_01:

So the definition is not that it has to look like a human, but that it needs to be able to work in the human environment.

SPEAKER_02:

I mean, it's a distinction. Maybe it would be even more practical if it looked like a spider. Yes. It could maybe walk more easily and crawl under the sofa to fetch something that's stuck there. But I think this is an important detail. But you wouldn't want a spider in your home. No? A spider robot in your home.

SPEAKER_00:

What do you think about the market implications of this? If we compare it a bit to agentic work, we can of course see there will be market opportunities for that, and for humanoid robots as well. If you were to compare the two, which do you think will have the biggest market impact?

SPEAKER_02:

Well, first I would think about what the cost is and what the target group is. And it looks like the Chinese robots will probably be the cheapest ones in the end, because they have scale, just like with EVs. Will they be the best ones? I don't know. They'll probably be very competitive. So I think that speaks for a probability that this will get out onto the market and not stay a niche thing for too long. Agentic, well, that's a wider definition. It could just be the agent that goes out finding good deals for me during Black Friday, and there's such a huge number of agents that we will be using without even thinking of them as agents. So, just thinking of adoption first, I think that is probably the side that is going to reach the mass market.

SPEAKER_00:

Say a startup now wants to make as much money as possible, and they can choose: either go fully agentic and build a super-generic agent that can do a lot of stuff in the digital space, or go into the physical space and build a humanoid robot, and assume it works perfectly. What do you think they would make the most money from?

SPEAKER_02:

Then I would talk like a VC and say: stay out of hardware and do software. And speaking from my own experience as a user: please build an agent that is easy to use, that has a great user interface, so that I don't get stuck trying to tweak it. And I would also go for a certain sector, a certain vertical. Yes. I was talking about Black Friday. For instance, these price-comparison sites that exist: they've become pricing robots, and they're not very good anymore. I want something more. I want to actually say to them: okay, I found this winter jacket. I love it. I don't want to pay six thousand kronor for it. I can do four thousand, or even better. Just tell me wherever, whenever it's there. And by the way, actually, you can buy it for me, because they often go fast when there's a 30% reduction.

SPEAKER_01:

So you want to set it up as: this is the price at which you're free to shop.

SPEAKER_02:

And when someone puts that on the market with an app that actually works, then I'll download it, and they can probably take five percent, and they'll get rich.
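A minimal sketch of the price-watching agent described here, with hypothetical fetch_price() and buy() stand-ins for a retailer integration; nothing in it reflects an existing product:

```python
# Toy price-watching agent: alert-or-buy when the price drops below the
# user's limit. All functions and data are hypothetical placeholders.
WATCHLIST = [
    {"item": "winter jacket",
     "url": "https://example-shop.se/jacket",   # placeholder URL
     "max_price_sek": 4000},
]

def fetch_price(url):
    """Hypothetical stand-in for a shop integration; returns today's price."""
    return 3899.0  # stubbed value for the sketch

def buy(url, limit):
    """Hypothetical stand-in: place the order, never above the user's limit."""
    print(f"bought {url} at or under {limit} SEK")

def check_watchlist():
    for entry in WATCHLIST:
        price = fetch_price(entry["url"])
        if price <= entry["max_price_sek"]:
            buy(entry["url"], entry["max_price_sek"])  # autonomy the user opted into

check_watchlist()   # in reality you'd schedule this, e.g. hourly
```

The "take five percent" business model would then just be a commission hook inside buy(); the hard part in practice is the retailer integration, not the loop.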

SPEAKER_00:

You know that ChatGPT just launched Instant Checkout, which actually has agents in it, collaborating with Shopify and many other sites to do the shopping for you? You don't even have to leave the ChatGPT website to do the shopping.

SPEAKER_02:

And vice versa, when you're in their browser now, when you're out browsing, you can connect this. It's super smart that they actually went that way as well.

SPEAKER_00:

What do you think about that, as an economist as well? I guess it's positive for users in the end, because it will really simplify product comparison and make everything so much more efficient. But what happens to all the other companies that do this, right?

SPEAKER_02:

I think that if you have to go through ChatGPT to sell, then obviously they are a gatekeeper. Yes. They will take a cut in between, and not everybody will be there. So I'm much more interested in the other way around: I have this browser that is AI, that sees everywhere I've been, every shopping site I've ever visited, and kind of figures it out itself. You still haven't bought that winter jacket; are you still in for it? Do you want me to ping you when I find it, or should I buy it for you myself? Okay, yeah. Please do that.

SPEAKER_01:

But here, just to confirm with you: we will see a different buying journey, a different shopping journey; the way people buy and search for products is being profoundly changed. We have lived through the years of the search economy with Google, and now this is something else, I'm certain.

SPEAKER_02:

A shocking thing came up in my Instagram feed. It was a mountain-guide company in Lofoten. And they said that since generative AI chatbots, they've had something like a 50% reduction in website visits. Before, they did SEO optimization, well, that's a tautology, but anyway. They were in the top five search results when you looked for Lofoten, and they had a boom, good business. And then the traffic just went down. People went to ChatGPT or whatever and asked: oh, what should I do in Lofoten if I have two days? And it had scraped some other sites that obviously were not even as good, and recommended going somewhere else to buy a guided tour that was also much more expensive. So they were lamenting the development and clearly pointing to how it hit their business. But this is the thing, right? You need to understand how this works. And it's going more quickly than you think. Yes. I can talk from the media perspective: media houses see this happening too. Traffic going down. And obviously it must be that we're starting to get our news through these chatbots as well.

SPEAKER_01:

We are simply entering the topic in question through a different door, in a different way. And any business needs to figure this out fairly quickly. Otherwise, you're not on the market anymore.

SPEAKER_02:

If you're a news organization, you should have a news service: not a news website, not a newspaper, not news TV. A news service that you can talk to, based on your own material, if you have a trusted brand that people want to talk to.

SPEAKER_01:

But we need to figure that out. And you put your finger on the point: we used to optimize for search engines. Now we need to optimize for AI agents coming in and looking at your site. MCP-enabled.

SPEAKER_02:

Yeah. So that's a whole new market for consultants.

SPEAKER_01:

It's time for AI news, brought to you by the AIAW Podcast.

SPEAKER_00:

Cool. So we normally take a small break — not so small sometimes, but we try to keep it short — with some recent news that we've all read and would like to share. Alexander, do you have some news that you'd like to talk about with us today?

SPEAKER_02:

Yeah, this week — I can't say I was shocked, but it very much stood out to me — the news that Yann LeCun, the head of AI, so to speak, at Meta, said: "I'm leaving, guys. I'm starting my own business," just like all these other big brains leaving the AI tech giants to start their own companies. But when you see the context of it, it kind of makes sense. I see him very much as a research person who isn't so interested in just using AI to optimize how you place ads on Facebook. And he also doesn't really believe in this "AGI is gonna kill us," or even "AGI is coming next year" — which Mark Zuckerberg obviously thinks is his destiny on this planet, to bring us AGI. Now Zuckerberg is doubling down on that, and also doubling down on applications that can give a quicker return on investment. And that doesn't seem to be on Yann LeCun's plate.

SPEAKER_00:

Yeah. We speak about — and perhaps I speak a lot about — Yann LeCun as well, and I see him a bit as a hero. And I feel a bit sad, because Meta spent hundreds of billions of dollars to get a new type of people in, and even had those hundred-million-dollar signing bonuses for people coming in to start this new AI lab at Meta called Meta Superintelligence Labs. So you can see that he's been surpassed now. And it's kind of sad, because they basically failed with their latest AI model, Llama 4. It didn't really perform at all, and they even cheated to get to the top of some leaderboards. And that's horrible — that shows a certain kind of intelligence, I suppose. So they failed in that respect in many ways. But I still feel bad for him because, as you say, he is mainly a researcher and he thinks very long-term.

SPEAKER_01:

So I'd just like to take a few minutes to explain why — let me do a segue, and I'll bounce back to you, because you just took the Mark Zuckerberg view: "Ah, you didn't deliver well enough on this." If I now put the Yann LeCun hat on — and this is the segue back to Anders — he's been basically telling the whole AI world: the path you're on with LLMs and auto-regressive systems will not lead where you think; it can never work. You need to go another route. And he published the JEPA paper and that whole architecture. So if I'm trying to make Yann look good: he says, "Well, you know what? I'm leaving the sinking ship. You're all doing it wrong, and I'm gonna start a lab that does it right." If you look at it Mark Zuckerberg's way: "Ah, your Llama 4 wasn't that great." And anyone who saw those signing bonuses, and saw that the new people were organized not under the helm of Yann LeCun but on the side — it was a given six months ago that this was in the cards. Now, the thing is, at this table we think Yann is right. So let's take the view that he's leaving not because he got fired but because he has other ambitions. Let's go that route — please elaborate, Anders.

SPEAKER_00:

I mean, we know basically how these big generative models work. They have a very simple and general objective: predict the next token, or the next patch of an image. That's actually really cool and very value-creating for a lot of businesses, because it means the model can be used in a very general way — you can simply prompt it, and you don't have to fine-tune it as much as previous types of AI. So that's really powerful. But if you scrutinize it a bit more, you realize it has to go through billions of parameters in a single forward pass through the network to produce a single token, and then do the full pass again for every single token. Extremely inefficient, and obviously not how the human brain works. So what he's saying is that we should have some kind of latent-space reasoning, and that's what the JEPA paper is about. Instead of going from token space into some latent space in the middle of the big ChatGPT or Gemini model and then back to a token — syntactic to semantic to syntactic — and doing that for every single token, which as I see it is very, very inefficient. That's what he talks about in the JEPA work. The question is whether you can stop halfway, so you don't have to go all the way back to tokens each time.
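To make that cost concrete, here is a toy sketch of the standard auto-regressive loop. The `TinyLM` class, its weights, and the sizes are invented for illustration — the structure is the point: every new token triggers one full pass through all the parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyLM:
    """Toy auto-regressive LM: one full pass through all weights per token."""
    def __init__(self, vocab=100, dim=64):
        self.embed = rng.normal(size=(vocab, dim))    # token -> vector
        self.w = rng.normal(size=(dim, dim))          # stand-in for billions of params
        self.unembed = rng.normal(size=(dim, vocab))  # vector -> token logits

    def forward(self, token_ids):
        h = self.embed[token_ids].mean(axis=0)        # crude context summary
        h = np.tanh(h @ self.w)                       # the "whole factory" lights up
        return h @ self.unembed                       # logits over the vocabulary

model = TinyLM()
tokens = [1, 2, 3]
for _ in range(10):                                   # 10 new tokens = 10 full passes
    logits = model.forward(np.array(tokens))
    tokens.append(int(np.argmax(logits)))             # greedy next-token choice
print(tokens)
```

Ten tokens means ten full passes; in a frontier model each of those passes touches hundreds of billions of weights.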

SPEAKER_01:

So the idea is: the beauty of the transformer is how fundamentally simple the logic is — you're just predicting the next word, the next token. But that beauty also means that the way you light up the whole LLM for any piece of work is completely inefficient.

SPEAKER_00:

Yeah, so the idea is to take a shortcut and not have to go to the syntactic space.

SPEAKER_02:

You shouldn't have to light up the entire factory just to fetch a screwdriver that's next to the entrance door; you should perhaps light just a couple of bulbs.

SPEAKER_00:

Yeah, more or less, yes. So: avoid having to decode and encode tokens all the time, and instead move within the semantics. If you think about the human brain, it's perhaps easier to understand. We have the cortex that we think in. If I'm playing chess, I'm not moving the pieces around all the time; I'm thinking a number of steps back and forth in my head, and then I make a move. Similarly, an AI model could think a number of steps without having to produce a token or take an action — just think it through inside the conscious part of the brain, or inside the LLM. That is basically latent-space reasoning; that's what Yann LeCun is talking about. And now we've seen a number of papers, just in the last week or two — like three major papers, one actually from Meta — going in this direction. There's one called, let's see if I can get it right, looped language models. Looped: instead of a traditional language model doing one pass through the whole model, it loops inside. It goes into the model, loops around there doing a set of inference steps inside the model, and then does the decoding. It's exactly this, right? That one actually didn't come from Meta; it came from China and ByteDance, the TikTok company. They've had really good progress with the looped language model.
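A toy sketch of that looping idea, under the same invented-weights assumptions as above (this is not the actual ByteDance architecture): one small shared block is applied K times in latent space, and decoding back to a token happens only once, at the end.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, vocab, K = 64, 100, 8

embed = rng.normal(size=(vocab, dim))
block = rng.normal(size=(dim, dim)) / np.sqrt(dim)  # one shared weight block
unembed = rng.normal(size=(dim, vocab))

def looped_step(token_ids):
    h = embed[token_ids].mean(axis=0)      # encode once
    for _ in range(K):                     # "think" K steps in latent space,
        h = np.tanh(h @ block)             # reusing the same small block
    return int(np.argmax(h @ unembed))     # decode once at the end

print(looped_step(np.array([1, 2, 3])))
```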

SPEAKER_02:

Does that mean you need less energy, hence it will be cheaper? Yes. And you might not have to buy so many of the latest Nvidia chips. Yes, yes — that's why it comes from China.

SPEAKER_00:

Very good point. Their model has only about 7 billion parameters, instead of trillions like GPT — so probably on the order of a hundred to a thousand times smaller. It's faster and cheaper, but they report it performing on par with models a hundred times bigger.

SPEAKER_01:

So we are now talking about fundamental ways that the architecture needs to be reinvented one more time. So the architecture is not done yet.

SPEAKER_02:

Yann LeCun and these Chinese guys have figured out that you don't need a Ferrari to go to the grocery store to fetch some milk.

SPEAKER_01:

Yes, well said. In a way it's that simple, right? And in a way it also means: this is how we've done things so far, we've learned from what we've built, but we need to figure out other architectures that are more memory-efficient, more reasoning-efficient, and so on.

SPEAKER_00:

So another paper actually comes from Meta. In this case, instead of producing every token one at a time, what they do — they call it LLMs with future summaries — is predict a summary of what's coming, rather than one token at a time. They produce the end result in one shot instead of taking one token at a time in the classical auto-regressive way. So that's also exactly what Yann LeCun has been talking about. It's not coming from FAIR, Yann LeCun's lab within Meta; it's from another part. But it shows the direction they're going — there's no question in my mind. And then a third one, coming from China as well, from Tencent, the WeChat company. They have a model called the continuous autoregressive language model. They call it auto-regressive — I think that's questionable, but still. What they do is produce, instead of a token, a vector — a kind of embedding vector. So they don't go out to the syntactic level — a token, a word — they instead keep a vector representation, basically a set of numbers. That is a latent space. They predict not token by token but via vectors that are semantic, not syntactic, and use those to predict what's coming. Obviously that's much faster as well, and you don't have to do the decoding into English anymore. You can just think inside the human brain, or in the semantic space, as they call it. So it's called CALM: continuous autoregressive language model.
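And a toy sketch of the vector-prediction idea (again invented weights, not the Tencent model): the network regresses the next latent vector directly, and only maps back to a token when a word is actually needed.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, vocab = 64, 100

embed = rng.normal(size=(vocab, dim))
predictor = rng.normal(size=(dim, dim)) / np.sqrt(dim)

def next_vector(v):
    """Predict the next latent vector directly -- no token in between."""
    return np.tanh(v @ predictor)

def decode(v):
    """Map a latent vector to its nearest token only when needed."""
    return int(np.argmax(embed @ v))

v = embed[42]                  # start from some token's embedding
for _ in range(5):             # five "semantic" steps, zero decode steps
    v = next_vector(v)
print(decode(v))               # decode once at the very end
```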

SPEAKER_02:

Would you say that these different approaches are more human-like, brain-like ways of processing?

SPEAKER_00:

I mean, the brain basically operates at about 20 watts in inference mode, and these LLMs, even in inference mode, are probably a thousand times less energy-efficient. They use maybe a thousand times more energy than the human brain does, for pure inference — not even counting the training. So yes, significantly.
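A back-of-the-envelope version of that comparison, with illustrative hardware numbers that are assumptions, not measurements:

```python
# Rough, illustrative numbers only: a modern datacenter GPU draws on the
# order of 700 W, and serving one big model may keep several GPUs busy.
brain_watts = 20                 # approximate human brain power draw
gpu_watts = 700                  # one datacenter GPU, order of magnitude
gpus_serving = 8                 # assumed GPUs serving one large model
llm_watts = gpu_watts * gpus_serving
print(llm_watts / brain_watts)   # => 280.0, i.e. ~300x on raw power alone;
                                 # per useful "thought" the gap can be far larger
```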

SPEAKER_01:

And Kahneman talks about System 1 and System 2 thinking, right? We have some things optimized for this and some for that. Literally, we don't have one big monolithic thing in the middle; we have different parts of the brain doing different things in orchestration, and we light up the reasoning space when we need it — but we're not lighting it up when we're walking. So something here is about architecture, yes. And I think even in the original JEPA paper he uses that kind of wording, System 1 and System 2 — though it's framed in terms of energy-based models.

SPEAKER_02:

As I've understood it, we don't actually fully understand how the brain works. No, we don't. But do you think we'll figure it out the other way around — by trying to build it?

SPEAKER_00:

I think we can still be inspired by it, but you can also argue it's perhaps not best to emulate exactly how the human brain works — just as with airplanes: we fly, but we don't simulate how the bird flaps its wings.

SPEAKER_02:

They did that in the beginning. It didn't go so far.

SPEAKER_01:

But I think the experts are quite on the same page that it's a different type of intelligence. So with AI — the way it works, what it's good at, how we understand its intelligence — it's really dangerous to look at it purely with our own eyes, from the human perspective.

SPEAKER_00:

But just to give an example of that: we know AI is so much better than humans at knowledge management; it would be stupid to remove that. Obviously we need to keep the parts of AI that are really good, and then solve the bad parts — which is the extreme energy inefficiency we're seeing right now. That's basically what all these papers are doing, and that's also, I think, partly where Yann LeCun is trying to go. So I still think he's right. It's a bit sad that he was more or less pushed out of Meta.

SPEAKER_01:

But just to get out of the rabbit hole: how is this useful or relevant for normal people, or for Alexander? I think the bottom line is that we are so early into the architecture of this work. It's a bit like we figured out how to build a steam locomotive, and someone said, "We're probably not gonna have steam engines — I think we should have diesel." We're right at that cusp where the architectural fundamentals are still being figured out.

SPEAKER_02:

It's funny you mention the steam engine, because I've just been diving into that as well — the Industrial Revolution and the parallels to the AI revolution — because I met this year's Nobel laureates in economics, and they talked a lot about it. They have very interesting and strong opinions about whether AI will kill jobs or be a great future for us. But let's get back to that — we're holding that thought.

SPEAKER_00:

Yes, but let's keep that discussion because we really want to go there very shortly. Perhaps just one more news article.

SPEAKER_01:

We should keep going. Which news article did you want? Oh yeah — okay, 5.1. Okay.

SPEAKER_00:

Do it quickly — it's not a big one; do that while I take one of these wonderful little munchies. So, yesterday OpenAI released ChatGPT 5.1. And you can ask what that really was. I would say it's a point release, meaning an incremental release; it's mostly improving the style of the output a bit. So what extra do we get?

SPEAKER_02:

Exactly.

SPEAKER_00:

So they claim they don't care about leaderboard scores anymore. They want a more natural style of language, and that's what's improving in this release.

SPEAKER_02:

And how do they do that?

SPEAKER_00:

Yeah, who knows? They haven't said. I think it's rubbish, because they've been trying to get to the top of the leaderboards for a long time, but more or less all of this year they've been trailing behind. If you go to LM Arena or other leaderboards, OpenAI is nowhere to be found in the top scores anymore. For images, obviously, it's Google and Gemini with Nano Banana — couldn't you just give me the quick rundown, according to you?

SPEAKER_02:

Which is good at what?

SPEAKER_00:

Yes, the top three or four. Okay, so there are a number of categories here. Images: obviously Gemini and the Nano Banana one, I would say. If you go to LM Arena and so on, they're at the top of the board in a number of these image and video benchmarks. For coding, no question that Anthropic and Claude is the best — they're leading all those benchmarks as well, and you can feel it when you use AI for coding: Claude is awesome. Then for more text-based work, Anthropic and Claude is gaining there as well. Historically, ChatGPT was actually one of the best. We're talking about text — writing, whatever you want to do.

SPEAKER_02:

Yeah, I do that a lot. I've been testing them — I'm writing a book, for instance, and trying to get help with it.

SPEAKER_00:

The strange thing is that if you look at the research results and the leaderboards, GPT is not performing well. But I must say, anecdotally, when I compare Gemini or Claude and OpenAI, I still prefer ChatGPT. I don't know why. I know that shouldn't be the case if you look at the research results, but in some way I've gotten used to it. So this is purely anecdotal, though.

SPEAKER_01:

So, we've said it before: the benchmark says something, but then there's what you're using it for in your specific context. You get a preference — what gels with you.

SPEAKER_02:

Yeah, exactly. And I wonder about these benchmarks: do they just say something about how good it can be at its peak? Like, how fast can you drive this? This Porsche goes 320 kilometers an hour. Okay, but normally I drive 80. Which car feels nicest at 80 kilometers an hour?

SPEAKER_01:

So that's why benchmarks are very useful for getting some sort of scientific view on it. But I actually think the anecdotal is important too.

SPEAKER_00:

But still, LM Arena isn't measuring in the normal accuracy sense; it actually has humans doing blind testing.

SPEAKER_02:

Okay.

SPEAKER_00:

So it's actually about preferring the style in this case. You can measure this from a more subjective point of view too, I would say. And even there, GPT is nowhere to be seen at the very top. Well — it's in the top three or four. Yeah.

SPEAKER_01:

But it was so in the lead if you go back two years.

SPEAKER_00:

Yeah, it was dominating — dominating like hell. So I've been saying since the beginning of this year that 2025 will be the year OpenAI starts to decline and stops being the frontier lab. I think we're already there, and I think it will continue.

SPEAKER_02:

Even as they're investing more and more — and more and more other companies are investing on the premise that they will be the leader.

SPEAKER_01:

And they are the leader in terms of normal people using a chat-type LLM; I think they're by far the market-dominant one there. Yeah, but isn't Gemini catching up pretty fast? It's really hard to read.

SPEAKER_00:

It's dominating in enterprise though.

SPEAKER_01:

In enterprise and in coding. From the coders we've had on — anecdotally — for real software engineers it seems very clear.

SPEAKER_00:

Just a last question to you, Alexander, because I'd love to hear it: have you met or spoken to Sam Altman at any point?

SPEAKER_02:

No, I've just been one meter from him — he passed me by in Davos, when he sat there with Satya Nadella and had a fireside chat or whatever.

SPEAKER_00:

Because there was this court case where Ilya Sutskever, one of the founders of OpenAI, revealed a bit of what happened when they fired Sam Altman two years back. They were not holding back on how much they distrusted Sam — thought he was disloyal, that he was lying, that he was trying to manipulate people on the board, and so on. Really, really damaging words about Sam Altman.

SPEAKER_02:

Yeah, I read — or I didn't read, I listened to — a book that tells the story of OpenAI and Sam Altman. It's by a good American journalist: Empire of AI, or something like that. It gives you the picture that he's a super smart and also very emotionally intelligent person who knows how to play the game. Yeah.

SPEAKER_01:

Anyway, we'll see. So that was 5.1. Okay, let's go back to the real questions, and I think we should jump as far as we can into the real meat of things.

SPEAKER_00:

Where do you want to go first? The things we've spoken about, or the job market — which one did you have in mind?

SPEAKER_01:

The whole job market thing is very interesting, but I think we can get into it even better through the fact that you've been talking to Nobel Prize winners. Give it to us — I want to be starstruck. Who did you talk to?

SPEAKER_02:

Me, wanting to jump into AI rabbit holes, I've been very lucky with the past couple of years of Nobel laureates in, well, economics — it's actually called the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel, whatever. Last year you had Daron Acemoglu and Simon Johnson as two of the three, and they happen to be very much into researching AI and the job market.

SPEAKER_01:

So when you're asked to do a Nobel Prize interview, you bring some AI angle into it.

SPEAKER_02:

And Simon Johnson is actually in this Homo Roboticus series. But also this year: Philippe Aghion, Joel Mokyr, and Peter Howitt. Maybe Philippe Aghion is the star here — he has a research center focusing on AI and how it's transforming the economy. And Joel Mokyr, who is the portal figure for understanding the Industrial Revolution — its implications for the economy and why it even happened — really enjoys talking about the parallels between the Industrial Revolution and AI and what we can learn from them. So they're full of ideas. It was a really interesting year for the economics Nobel as well.

SPEAKER_01:

So let's hear it — we now have these guys.

SPEAKER_02:

Okay, I can be the spokesperson translating what they say. Okay, Philippe Aghion. He thinks: don't worry about the jobs — so many more will be created, we just don't know yet which ones. It has happened many times before; this time is not different. That's his philosophy. But hey, it might be a transition period that's not so easy, when old jobs get displaced and not enough new ones have emerged yet. So we need flexicurity. Flexicurity — the Danish labour security model. Is that a word? Yes: flexicurity. Here in Sweden, they protect your job; it's difficult to sack you. So a business that wants to change direction won't do it easily if the workforce doesn't have the competence to go that way. In Denmark, the difference is: okay, we don't protect your job, but we protect your income. So hey, businesses, sack them if you want to — but the state will give you a good unemployment benefit for a rather long time, and it will also give you some coaching for reskilling and finding a new place to work. That seems to have caught his attention to the extent that he mentions it in every other sentence when you talk about the challenges AI brings to the job market. So that's one thing. And the other: he's very preoccupied with Europe lagging behind. Why don't we have tech stars like the US or China? Why do we have just a couple, and not as big?

SPEAKER_01:

And does he have an analysis — a simple answer?

SPEAKER_02:

Well, his quick answer is the Draghi report — if you just do what Draghi says in that famous EU report about how Europe should regain its competitiveness and growth. One thing it says is: we lack a common capital market — we have seven different ones — and there's not enough money in any one pot. So if you grow, you have to go find the VCs in Silicon Valley, and then you get listed on the American stock market, and then you're just American — or at least you move there. The other thing: we have to bring the different markets together into one true common market, because today, if you're a tech startup in Sweden, you'll probably have to understand how the German system works when it comes to regulations, if you're in fintech or whatever. That's what he talks about. That's the general report, exactly. And then Joel Mokyr, the grand old man, the history teacher par excellence, who has read everything there is to read about the Industrial Revolution and written everything worth reading about it. He says: well, you know what, I'm not going to be so worried. This is just the same thing. It will take some time, but it will sort itself out — sort of like that. So two real experts who are calming the people who think this will lead to technological mass unemployment. That's their stance, anyway.

SPEAKER_00:

Yeah. And I know you spoke to Simon as well.

SPEAKER_02:

Yeah, it's interesting. Just because you're a Nobel laureate doesn't mean you possess the truth — the truth isn't simply out there; it's not binary, not a clean right answer. So, Simon in the other corner now. Simon and Daron happen to be part of the other faction, which is more worried about the development. They say: okay, fine, I read you. The Industrial Revolution, yes — before, 40% were peasants and farmers, now it's like four percent, and we don't have 36% unemployed. You don't miss the gas-lamp lighters or the elevator operators. But it took some 50 years for the Industrial Revolution's transformation to pan out — and it was eventually, thanks to labor movements asking for rights, for raises, for better working conditions, that it became something good for the many.

SPEAKER_01:

That's their stance. So the point here is that it's all about perspective and time horizon — and that's a good conclusion. Fifty years of pain is not nice. That's a generation that's wasted.

SPEAKER_02:

Even if the macro works out. But it could also be more impactful in less time.

SPEAKER_01:

Because over 50 years it happens more slowly, and society can move with it. What happens if that same thing happens in five or fifteen years? Isn't that what leads to revolution and war? Dictatorship, revolution, or war.

SPEAKER_02:

Then populism would grow exponentially, because you would have social unrest.

SPEAKER_01:

Social unrest — take the same change and compress it from 50 years to 5, and you have so much social unrest that populism grows.

SPEAKER_02:

But then at least — over fifty years — we have time to figure out how to transition in a sensible way?

SPEAKER_01:

No, but the tricky point is that when it happens in five years instead of fifty, you're so dizzy that when you wake up you don't know which way is up or down — and meanwhile someone has already run off with it.

SPEAKER_02:

I don't think it's gonna be five years. That's another thing I've come to think about a lot: it will be much slower than many of us actually think — definitely slower than the techno-optimists, and definitely slower than Sam Altman and that kind of people.

SPEAKER_00:

Elon Musk says AGI this year.

SPEAKER_02:

Yeah, sure. And he's been saying we'd have self-driving cars for like ten years now. But my point is that human beings, and organizations composed of human beings, are actually quite slow to change their way of working. You don't do that overnight. Path dependence.

SPEAKER_00:

So it's the difference between technological progress and societal adoption — the technology moves so much faster.

SPEAKER_02:

And now you're back to — and that's what will actually pop the AI bubble. Ah, this is interesting. That's one answer.

SPEAKER_01:

So: when the discrepancy has gotten too big. There's an interesting topic there — I had a conversation on bubbles. Good bubbles or bad bubbles, good bubbles or real bubbles. These are good bubbles — the ones in my glass. No, but the analogy was that some bubbles crash, yet infrastructure gets built through the bubble that we reap the benefits of twenty years later, like the railways and the fiber cables. And... the data centers with GPUs that will be worthless in three years?

SPEAKER_02:

I don't know. Yeah, but the algorithms will stay.

SPEAKER_00:

But didn't Simon Johnson say something else? If I'm not mistaken, he also spoke a bit about the risk of concentration of power.

SPEAKER_02:

Yeah — and he had a funny anecdote. He has met Sam Altman.

SPEAKER_00:

Yeah.

SPEAKER_02:

He was invited to some round table with Sam Altman and company. And Simon raised this preoccupation: well, will this really be good for the many? Are we doing this so that people can have a decent living? Are we deploying AI to enhance humans instead of replacing them, and so on? And then, according to Simon, Sam Altman said, "Oh, don't worry about such worldly things," kind of. Once we've reached AGI and these grandiose targets, nirvana will be here. There will be so many riches released that there will be plenty for everybody. And I think that is a very interesting perspective. I've heard Sam Altman in several interviews get this question: okay, but when we have AGI, what will be left for humans to do, and what will they make a living from? And the answer is: universal basic income might be a good idea.

SPEAKER_03:

Yeah.

SPEAKER_02:

Okay, but have you really thought that through in all its steps? Do you know how much money would be needed for people to have a decent universal basic income? And the next step: do you know how much we would have to tax you as a person, and your business, to bring that money in? And would you stay in our country then, or go to the Cayman Islands, or whatever place wants to be the new tax island of the world? I don't think they've thought it through. You don't think so? And maybe also: have they thought about whether that's what we want?

SPEAKER_00:

You don't want universal basic income — okay. If we're getting to the final question already, let's skip ahead to it.

SPEAKER_02:

Good luck segueing back now.

SPEAKER_00:

No, no, we're here now. We stay with this. Say we actually get a world of abundance, meaning the cost of products and services goes towards zero — you can live for free, get food for free, more or less have energy for free. Of course it won't be literally free, but potentially significantly cheaper, because AI is doing most of the work. Wouldn't UBI then, without having to be that expensive, still cover the basics?

SPEAKER_02:

You mean that then we wouldn't need so much money?

SPEAKER_00:

Yeah, UBI wouldn't be as costly because the cost has gone down.

SPEAKER_02:

Very good reply — a very good argument back. Well, that would require things to be that cheap, right? If everything were to be so cheap — actually, in my mind, this would mean an enormous deflation. And it would probably be the result of not having growth in the way we currently think about growth.

SPEAKER_00:

Could be a Star Trek future, right?

SPEAKER_02:

Yeah. Uh actually, Anders, it's so mind-boggling that I have to think this through in steps to give you an answer. Um I'm not sure. I don't know. Let me get back on that one.

SPEAKER_00:

It's an interesting road to go down. But just to continue a rant we've been on for many years on this podcast: the AI divide, as we call it — the concentration of power. Some people who are a bit more scared of AI claim that whoever gets first to AGI, or even ASI, superintelligence, will have extreme power. Perhaps that's why they're investing insane amounts of money in infrastructure now: they realize AI will be used everywhere, by everyone, all the time, and whoever controls that infrastructure will make a lot of money. I mean, it's an insane amount of investment — Stargate alone is $500 billion, which is on the order of Sweden's GDP, right? If we believe this is the reason some tech companies are investing such extreme amounts, it must be that they believe it will lead to a lot of return — it would be strange otherwise, right? So they should believe this will result in a big return and much more power. Wouldn't it then be really scary that a small set of companies would have such an extreme concentration of power?

SPEAKER_02:

Yeah, I agree with you. And whenever you have a concentration of power, you have less innovation. You mean in the market? Yeah, from a market perspective. If you have just three tech giants absorbing 80% of the market, every new startup that might threaten them will probably be bought up by one of them. Yes — which is happening more and more. Then you kill competition, there's not enough innovation, and they become more preoccupied with maintaining their power — refining, doing a GPT-5.1 — instead of doing a Yann LeCun move, something that twists everything around.

SPEAKER_00:

Because the big tech giants are acquiring companies — as soon as someone in Europe or in Sweden gets some kind of success, they usually exit by being acquired by a tech giant. So we're almost there already.

SPEAKER_02:

Well, I think it's interesting to look at France and what they're trying to do with Mistral. I met them a couple of weeks ago when I was in Paris to meet Philippe Aghion, the Nobel laureate. And I realized I hadn't really understood their business before. My prejudice — and it was reflected in my first draft of questions — was: how are you ever going to be successful with a tenth or a hundredth of the money the American tech giants are deploying on this? But then I did some research and realized, okay, that's not really their game. They're more into business-to-business, and into making AI agents — big fancy ones — actually work. So they're partly a consultancy, as well as developing their own model as a foundation for it. And they seem driven by this European independence, digital sovereignty thing as well, which I think is a really worthy endeavor. Because geopolitically, it's not wise to be dependent on one or two other nations' benevolence. How are we going to be truly independent, and have politics that cares about our own good, if we are in the hands of others?

SPEAKER_01:

We've done this a couple of times on the show, where we try to pull the curtain away from what's talked about in the media and understand who is pulling the strings, and why. We started here with Anders getting into the insane amounts of money being poured in. What is your understanding of the real drivers? I think there are a couple of things in motion at the same time — let me start us off. On one side, there's some sort of economic flywheel where companies invest in each other's pockets to raise each other's stock prices and make themselves extremely rich — think OpenAI, Oracle, Nvidia — which, if I'm super cynical, is just inflating a bubble so a few people get super rich. That's one angle. Then, of course, there's the angle of AI supremacy — Putin is the one who says it most clearly: it's the most important route to power, including power in war.

SPEAKER_00:

His quote is that the one who leads in AI will rule the world. That's Putin.

SPEAKER_02:

I think I actually have that quote in one of them — yeah, in the AI war documentary.

SPEAKER_01:

And then there's another way of looking at this — maybe this even comes down to Elon's strategy. If you think about how to win a war without fighting a single battle, it's by winning the semantic war, the ideological war. By having your ideology indoctrinated into everything we do, you win without fighting. So there's an ideology war here: what's the ideology of the big LLMs? They have a very clear profile of the world in terms of values and ideology compared to the rest. So there's an ideology war we can talk about, there's a real physical war we can talk about, and there's the get-rich scheme — is it a Ponzi scheme, a pyramid scheme at scale? So what is your take on all this, in terms of sorting out the real drivers? If we take off the makeup, what is it we're seeing, and why?

SPEAKER_02:

Well, I think most of the big three or four tech giants are thinking they're playing a winner-takes-all game. "We don't want to be number three, because that's not where we'll make money. It's worth it if we're the ones who reach the frontier first and get a foothold somewhere the others won't be able to reach for several years." So I actually think they're doing that calculus — risk versus reward: the reward might be huge, so it's worth taking more risk; let's pour more money in. But it's dangerous. What I think is dangerous now is seeing how leverage is coming into this — how they're actually taking loans to finance the building of the AI factories. And I've learned a new term here: neoclouds. It's not OpenAI building it, it's not even Oracle building it for OpenAI, it's some subcontractor building it. And the subcontractors are financing their building with loans. Now even Meta is taking loans to finance paying the subcontractors, who are themselves taking loans. We're getting very close to how the housing bubble was built up, right? Not subprime exactly, but it might end up being very much like a property bubble. We're not there yet, but I think it's a warning sign.

SPEAKER_03:

Yeah.

SPEAKER_00:

Time is flying here, and I know you met some really interesting people that a lot of listeners, myself included, would love to hear your view on. That includes the most valuable company in the world these days, Nvidia, and Jensen Huang. But also, in Sweden, we have Klarna and Sebastian Siemiatkowski. I'd be really interested to hear: how were the meetings with them?

SPEAKER_02:

Okay, Jensen Huang is an American, the CEO of a big American company. He's slick as hell, he knows how to deliver one-liners, he knows how to do the talk. So to get beyond that, you have to ask follow-up questions. But even then, it's difficult to surprise him.

SPEAKER_01:

Goran, did you get drunk with him? Then maybe he'd have told the naked truth, right? Let's not go there. But could you get behind the scenes, could you sense the person — or is he just that professional?

SPEAKER_02:

It was difficult to throw him off balance, and I didn't even try, because I was short on time and basically had three questions I wanted answers to. — And what was the main part of the conversation? — One was: why do you think this physical AI thing, humanoid robots, is worth investing in? Why do you really think it will work? The other was about agents: to what extent will they actually play out? So that was Jensen. Sebastian Siemiatkowski is also very interesting. He seemed to be either very frank, or to have decided that his message track is to be the truth-teller. He packages his vision, or his worries, about how AI will affect the job market as: "Everybody else says this is not gonna be a problem — there will be so many new jobs and so on. But hey guys, I don't think that's really true. I think it will be a tough time, at least getting there. Look at me: I sacked almost 40% of my staff." Or — he didn't sack them, but the workforce has shrunk, and they haven't recruited as much as they otherwise might have, given their growth. At least that's what he says, and I think it makes some sense. He likes to go the other way — not as polished, but well thought through anyway.

SPEAKER_00:

Interesting.

SPEAKER_01:

So if you wrap up all the different people you've talked to: was there anything that stood out as a big aha moment, or the weirdest thing, an "I didn't expect that" — something that really made you pause and think on the way home, or while editing?

SPEAKER_02:

Hmm, what a great question. My biggest epiphany was probably realizing, one, that humanoid robots are really on the verge of becoming something — C-3PO is getting real, it will get there, but they have a lot of training to do. And the other: AI agents can be pretty cool, but the really cool stuff isn't off the shelf. You have to work for it. So it wasn't one grandiose enlightenment; it was more like, okay, it's a work in progress.

SPEAKER_01:

But that also tells you something about how to go about this and build projects around it. I get so frustrated with how naive Swedish industry is when trying to implement and adopt these things, making naive bets that obviously won't work. How are they naive? I think they're a little too much into "off the shelf." They don't understand how to make an agent work properly — you need the right context and data, et cetera. All the stuff you described for your personal agent: imagine doing that in an enterprise. That means you technically need to connect it to all the different systems.

SPEAKER_00:

You need to call the plumber and the electrician. Cool. From all of these deep dives — and I'd love short answers now, though I'm sure we won't get there, but let's try — I'd love to hear a bit more: what do you think about the potential political or societal implications if these prophecies come into force, so to speak? Any thoughts on that?

SPEAKER_02:

Yeah, well, you have a good-case scenario and a bad-case scenario. The good case is that this unleashes so much productivity that finally Europe, and the parts of the world that have been lagging, catch up and get growth that gives more people more wealth — and we have less to fight about. That would be nice. The dystopian scenario is: hey, this kills more jobs than it replaces or creates, at least in the short and medium term. So we get some sort of technological — if not mass unemployment, then increased unemployment. And those who are hit are everybody below the upper class, at least those who have to work for a living. And we get more social unrest, more populism, and a bleaker future in the coming couple of decades. That's that scenario.

SPEAKER_01:

So, two scenarios — once again a spectrum. Anything that makes you lean in one direction or the other?

SPEAKER_02:

I wouldn't dare to say, but I can say one more thing. Whichever of these directions we're going in, there's an open goal for politicians in power, for government agencies, and for businesses to pick a lot of low-hanging fruit — using AI in a sensible way that gives us real benefits here today, and that builds a stepping stone toward a better-educated population and more competitive businesses, today and tomorrow. Tomorrow as in tomorrow, not in five years. And many of those low-hanging fruits are still unpicked.

SPEAKER_01:

And I think that builds on what you said right at the beginning: it's also about us getting in there and shaping this, not letting the tech bros shape it. So in the same spirit: go for the low-hanging fruit, get blood on your shirt, go in, learn about it, and ultimately figure out what you think about it in order to influence it the right way. There's the low-hanging-fruit angle — and if you're not doing it, if you're not working your agent, you're not gonna learn shit about it.

SPEAKER_02:

No, and if you don't dare to fail, you won't learn anything. So the bitter truth is: experiment. Only then, only by doing, can you know what works and what doesn't work for you in your profession. — Isn't that the foundation of democracy?

SPEAKER_01:

If you don't participate, if you don't engage, how can you be part of it? How can you shape it? I'm using that as a metaphor, an analogy: standing on the sidelines serves no purpose. That's what I'm trying to get at.

SPEAKER_02:

I don't think you have to put that democratic layer over everything. Pure and simple: if you want to be able to cut through the hype and see what is really useful and what is shit, there's only one way — discover it yourself. Yes. There's no manual you can read, no course you can take. Exactly.

SPEAKER_00:

But you mentioned an interesting point, and I'd just like to understand it a bit more. You said it could potentially be an equalizer for Europe, which is of course lagging behind the US and China. And perhaps we can even think about developing countries — they're behind too, and perhaps it could be an equalizer. Not an equalizer in the sense of winning that race, but in terms of people actually catching up. Do you think that could actually be the case?

SPEAKER_02:

Well, what do you need, then? You have to ask yourself: what do you need to use these tools in a good way? Actually, I lean towards it being more of an equalizer than not, because getting the first leverage, the first benefit, from these tools is not very costly. You can do very well with the basic versions. So if you're entrepreneurial enough and creative enough, you could start a business with very little capital that could also go global. But since you're then on a global market, you have to bring something very, very unique to it. And I don't know why one developing country would be more poised to succeed than another — but if they're not less likely to succeed from the start, then we're at even starting blocks, and that would probably be helpful for them.

SPEAKER_01:

I have an answer to that. We've been touching on it today, but you can make it even sharper. The invention curve, the technology curve, goes faster and faster, while we say real adoption will be slower — ultimately not five years but fifty. So the real competitive advantage lies in how fast you can absorb and adopt things.

unknown:

Fuck.

SPEAKER_01:

And in that sense, this is the leveler. If countries in Africa have a higher curiosity for learning and are more poised to adopt things instead of resisting them, they will follow the invention curve faster.

SPEAKER_02:

Maybe — if you don't have structures and ways of working so ingrained that it takes time to ditch them, you're more prone to trying new things. I don't know if that's the case in the countries we're talking about; I don't know them well enough to say.

SPEAKER_00:

We can see developing countries have been leapfrogging a bit in the past.

SPEAKER_02:

Yeah, in other technologies: they skipped landline phones and went straight to cellular. They managed without big banks and went directly to fintech.

SPEAKER_01:

We had one episode here on AI apartheid. We're always talking about the AI divide, comparing Europe to the big tech giants. But we also need to understand what happens when you compare Europe to Africa — or when, say, Ikea requires a farmer in India to use 3G or 4G to measure their crop, when they can't even afford a phone, right? All of a sudden we have a tech apartheid. And the main storyline there was: there's a fork in the road right now — how much can we go open source? What happened at one level was that Linux and open source allowed Africa to be part of the software wagon, which would not have been possible twenty years ago if there had been licensed software everywhere. So, once again, open source is a key enabler for jumping on this whole bandwagon.

SPEAKER_00:

But getting back to the question of what developing countries, and perhaps Europe, which is behind, could leapfrog in — I'd just like to test an idea with you and see if you agree. We can see that some countries, especially the US and China, are investing insane amounts of money right now, and a lot of that goes into pre-training these large foundational models. But it's another thing to do the adoption that Henrik speaks about all the time. Perhaps developing countries, and Europe too, could focus more on the adoption part, and not invest huge amounts in building these super-large foundational models, which won't be practical anyway. Our prediction has been that there will be something like five super-big foundational models in the future — so big that nobody can really use them for practical purposes, because they'll be so expensive to run. But they can be used to distill smaller models, and we'll have thousands, if not millions, of SLMs, small language models — more specialized ones — that will be what's actually used. Perhaps one trained on the Swedish language, or ones for specific use cases, perhaps with some latent reasoning going on, and so on. If we go that route, could it be an opportunity for Europe and developing countries to stop wasting money, if I may call it that? Someone has to develop the big models, but we could skip that step and start using them for practical purposes. Could that be a way?
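For context, a toy sketch of what distillation means here (invented logits and temperature, no specific lab's recipe): the small student model is trained to match the big teacher's softened output distribution, which is how a "five big models, millions of SLMs" world would work in practice.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, T=1.0):
    z = z / T                    # temperature T > 1 softens the distribution
    z = z - z.max()              # numerical stability
    e = np.exp(z)
    return e / e.sum()

# Pretend teacher/student logits for one input; in reality these come
# from a huge foundation model and a small specialized model.
teacher_logits = rng.normal(size=10)
student_logits = rng.normal(size=10) * 0.1

T = 2.0
p_teacher = softmax(teacher_logits, T)
p_student = softmax(student_logits, T)

# Distillation loss: KL divergence from the student to the teacher.
kl = np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student)))
print(f"distillation loss: {kl:.4f}")   # training drives this toward 0
```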

SPEAKER_02:

That would be a tempting scenario, definitely. And I'm thinking of something I was told by Victor Riparbelli, the founder of Synthesia, the Danish guy, when I met him in Copenhagen at TechBBQ in August. I was walking around interviewing people from different tech companies: what do you think about this aim of Europe achieving digital sovereignty — is it desirable, and is it realistic? And he was very much: you know what, we can't, and we shouldn't, compete with the tech giants of Silicon Valley in the way you were talking about, Anders. We should focus on the applications — exactly what you're into. Because that's the only realistic thing, and maybe, from a societal perspective, even the best return on investment. Yes — because that's closer to what we actually want in daily life. I mean, who has asked for AGI? Who woke up one morning saying, "Oh Jesus, what a boring Monday. I miss AGI"? It's more like, "Oh, why am I late for work again? Why isn't there an app that gives me the shortcut?"

SPEAKER_00:

A set of small SLMs could work together as an agent. So it could still approach AGI; it's just not the super-big models, and it could be much less expensive. I actually do think it's an opportunity for Europe — not AGI, no, but doing the adoption rather than building the foundation models.

SPEAKER_01:

If you think about it: focus on the real problem you want to solve and work backwards, reverse-engineering, like we always do.

SPEAKER_02:

For instance, in school, how do we get everybody to reach their full potential? How can we use AI in that? What kind of products can we do?

SPEAKER_01:

It's always been the way to be successful: laser in on a big hairy problem and solve it better than anyone else. With AGI as a vision, it's all useless fluff, isn't it? As a problem statement. It's also in part a religion — it almost becomes one. And one of the angles we've taken here, over and over, has been the SLM part. Let me use the sovereignty example. I can argue why we need sovereignty: we need our own identity, our own language, our own ideology, and in wartime we don't want to rely on anyone else. But then think about how we reach sovereignty. What does it mean? Does it mean you actually need to build a supermodel yourself? Or can I distill a Swedish model — get to sovereignty in a much simpler way? We have a very good example with KB-Whisper. They take an open model — Whisper — and train and distill it on the Swedish language, making a much smaller model with high-quality Swedish language and speech. And that model completely blows OpenAI out of the water when it comes to Swedish language and sound.

SPEAKER_02:

For those particular use cases — where good Swedish is required.

SPEAKER_01:

So when we wanted a Swedish model, we could get a sovereign Swedish model, encapsulated over here, even though we trained it from a larger model. So it's also about: did we really need to build the big model ourselves? We've been proponents of this for years: you don't need your own LLM, you can distill a lot of SLMs the way you want them. So it's a matter of asking, practically, what is the real goal of sovereignty? That's the goal — okay, then I can technically get there another way.
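A hedged sketch of that route, using the Hugging Face transformers library. The checkpoint name and settings are illustrative; the real KB-Whisper recipe involves large Swedish speech corpora and a full training loop on top of a starting point like this.

```python
# Illustrative only: load an open Whisper checkpoint as the starting point
# for a Swedish fine-tune. Real recipes add a dataset, a data collator,
# and a Seq2SeqTrainer loop on top of this.
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base = "openai/whisper-small"                       # open base model
processor = WhisperProcessor.from_pretrained(base)
model = WhisperForConditionalGeneration.from_pretrained(base)

# Constrain generation to Swedish transcription before/after fine-tuning.
model.generation_config.language = "swedish"
model.generation_config.task = "transcribe"
print(model.num_parameters())                       # small enough to own and host
```

The design point matches the argument: sovereignty over the Swedish use case comes from owning the small, specialized model, not from pre-training a frontier model from scratch.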

SPEAKER_02:

Do you need to build the chips as well? But I'm curious, Anders, with your background: how relevant do you think it is, and to what extent, that we stand on our own legs, so to speak?

SPEAKER_00:

I think it's very hard to do in the coming years. But I don't think we need to. And I think there will be a lot of inventions, and we're actually good at inventions in Europe, not innovation, but inventions, meaning that we can come up with really good ideas, similar to how China was forced to build much more efficient models because of the export restrictions on GPUs. They fixed that, and they do that all the time. I think we can do the same, and that will actually be a great opportunity; we just need to focus on it a bit more. And we do have all the problems that the laureates you mentioned pointed out, like the big market fragmentation in Europe, and we need to fix those kinds of things. But I think there is a real opportunity for us to leapfrog a bit, focus on application, and let others fight it out, because in the end it will go to much more energy-efficient models, and those are the only ones that are going to be used in practice. The super big ones can be used for very important things, perhaps national-security kinds of things, but in practice, for companies, no one will use the super big ones, and that's a great opportunity. Let's skip that and do the practical things. That's a great opportunity, I think.

SPEAKER_01:

And just to add to that: what episode is this? 169. So, 169 episodes with great people talking about these things, and some patterns emerge, almost like: which race are we in? Why are we playing a game that someone in the US has thought up as a race? And why do we now run a mimicking, me-too kind of race in Sweden on that? Why don't we take a step back and look at what competition we're in, which game we should play? And I think this is where Anders is coming from now, and I'm supporting him wholeheartedly. You can flip what the criteria are, what winning means. And all of a sudden it's not about the biggest model; it's about the smartest model, the cheapest one, the useful one.

SPEAKER_02:

That's what Simon Johnson spoke about. Yes. When I asked him what he thinks Europe should do, he actually said that we should do what we're doing nowadays: ask, okay, what is our unique opportunity now? Okay, we need to invest in defense, we need really huge leverage in that area. So why don't we put our focus on using AI in that field? That is something money is flowing to, somewhere there's a societal need, so there can be support for it. And then that could be our niche, or at least one of them. And the other one, he said: okay, parts of the rest of the world, or at least perhaps the leaders in the United States, are denying climate change. So why don't you use technology in a wise way to find means of fighting that, or of making the transition?

SPEAKER_01:

Being the most competitive in this industry.

SPEAKER_02:

Yeah. And I'm just replicating what he said, but he was into this: find your way of making use of the technology in real life. Yes, perfect. Then there will be adoption and money and societal benefits.

SPEAKER_01:

Yes.

SPEAKER_00:

And given the time, I'd love to hear a bit more about your personal views on this, Alexander. I'm also trying to find a good positive ending here, but before we go there, thinking a bit more about the potential bad endings: there could be a concentration of power. What do you think will happen if we just take China and the US to start with? Do you have any thoughts about who will potentially be dominating? A lot of people are saying that China will be the biggest economy within a couple of years. Do you think they will take the lead in AI as well?

SPEAKER_02:

I think I will fall back and rest upon what I have learned from previous years' Nobel economics laureates whom I've talked to about this, especially Daron Acemoglu, because what they got the prize for was the research on the importance of institutions for wealth and growth. And everything they say is contradicted by China; it's like the big exception, okay? It's not a democracy, it's not pluralistic. But they claim that, hey, we just haven't seen the end of it yet. So it's an exception now, but just wait and see, kind of. So we'll see. But so far they're doing pretty well, so I wouldn't bet on that empire's imminent demise. And, well, I forgot the question, but one part was the dominance, you know.

SPEAKER_00:

Will there be a concentration of power? You can think about countries as such, the US or China, and ask who will dominate. And perhaps, and it would be nice if you're right, or the people you're referencing are right, that China is the exception, but still, some indicators are saying they could actually take the lead in various terms. But then we can also think about companies: we have the Elon Musk companies of the world, we have the tech giants of the world. I think no one is doubting that we are seeing a concentration of power for the tech giants; they are in the top 10 of the most valuable companies in the world today. Will that continue to rise, or will there actually be an equalizer, so to speak, where other companies can start to catch up? What do you think? If you take that question specifically: will the concentration of power continue to increase, and will NVIDIA and Google and Meta just continue to take more and more of the economic wealth?

SPEAKER_02:

Well, suppose that we are in a financial AI bubble and it bursts; then it could hit some of these companies pretty badly. I'm thinking about OpenAI, for instance. But in the leftovers, so to speak, there will be some pretty good value to be picked up by others. People will go to other places, they will do their own startups. The bits and pieces will be picked up and can maybe be the beginning of something new. So that could be one positive side of a bubble bursting, if there is a bubble. That's one way to look at it. The other is: well, if they get so big and so powerful, in the end I'm pretty sure that people in democratic power will try to regulate them out of their monopolistic position, at least in countries with good legislation on market dominance and monopolistic abuse. Let's just hope those countries are not corrupt by then, so that they actually use the legislation. Of that I am hopeful. Okay, yeah, cool. Standard Oil was broken up once.

SPEAKER_00:

Right, right. And even though Elon got a billion, or trillion, dollar deal here recently, it could still turn out well. Okay, let's not go into that. Cool. What are you most scared about, though?

SPEAKER_02:

Oh, most scared about? I'm not so scared, actually. I suppose the question is about AI. Yeah. Well, on one hand, I'm thinking about my kids. There's so much talk about how, with AI and AI agents, there are perhaps not enough junior positions, entry-level positions, because you can just use an AI to do your PowerPoint and your research and your presentation and whatever. So if you're a junior lawyer, what's left to do? How can you rise through the ranks, so to speak? And I think what people are suggesting as a solution is that we have to rethink university and see it more like an apprenticeship that has to come earlier, so that there's no clear dividing line between "now I'm studying" and "now I'm working". For instance, I think Palantir is onto something with its trainee program; Alex Karp is basically saying, oh, universities suck, you just learn the doctrine, you're not a free thinker, come to us instead. Anyway, I think he's onto something there. So that might not be a catastrophe after all, if we reform the education system. On the flip side, I'm thinking about one of my kids, who has cognitive challenges. I see how AI could help her a lot in her life going forward, functioning on a much better level in society thanks to the aids these technologies will bring her. Because just by using an ordinary chatbot, you can get quite far.

SPEAKER_00:

Do you think AI, if we're trying to flip now to a positive ending in some way, yeah, I'm leaning towards it now, can deliver on that? We can see AI helping out a lot with education, and also helping people improve their cognitive skills in different ways. But you could also think that AI can potentially help a lot with healthcare, finding cures and doing drug discovery in a way that we can't otherwise, and perhaps solving the energy crisis and fixing fusion.

SPEAKER_02:

The research part, and that we haven't even talked about. That is truly the most promising and hopeful side of using AI: pushing the frontiers of knowledge as a researcher.

SPEAKER_01:

Because that's something we sometimes forget. That's where we have already seen the fantastic breakthroughs, with AlphaFold and everything like that. The acceleration comes from AI in fundamental research, and then you get a network effect from that.

SPEAKER_02:

And that's actually what Joel Mokyr, one of this year's three economics laureates, highlights as his most hopeful thing: how AI will be pushing the frontiers of knowledge like never before. Yeah.

SPEAKER_00:

And that makes him very hopeful. Alexander, I think we will skip the standard kind of question, because we almost started with our ending question, the one about the spectrum.

SPEAKER_01:

That's our ending question. You already answered it.

SPEAKER_02:

Oh, I'm like 60%, 65% on the positive side.

SPEAKER_00:

But Alexander, what are the next AI episodes going to be about?

SPEAKER_02:

Very good question. I haven't really thought about it yet, actually. I could do that, but I don't want to, because I want to make that up myself. You know, speaking of this, I've come to the conclusion that what's important for us, not only to be successful but to be happy in a world with AI, is that we have agency. We have a voice. The more human I can be, the more you will want to listen to me in a world where there'll be shitloads of AI-generated content that is good but maybe average, and some of it very good. But even if the very good is just as good as me, I'm hopeful that you would want to choose the human version, because we are humans. I think we like humans more than things. Well said. Cool. But I didn't pick the next episode there. No, you didn't. That's a cliffhanger. I want you to invite me again. Oh, then I'll deliver. That's a good one, taken.

SPEAKER_00:

Thank you so much, Alexander Norén. Always a pleasure to speak with you; you have so many insights. I'm still envious of all the people you meet, and you have, I think, a perfect job, and I hope you can keep it for a long time, with the help of AI, that's right. Thank you so much for coming here. Thanks. Looking forward to having you back here, man.

unknown:

Take care.