AIAW Podcast

E133 - AI in the Nordic welfare system - Louise Callenberg & Peter Daniel

Hyperight Season 9 Episode 2

In Episode 133 of the AIAW Podcast on Spotify, we welcome Louise Callenberg, Founder & Lead Trainer at Leadership ARTS, and Peter Daniel, Partner at PA Consulting, for an insightful discussion on the transformative potential of AI and data sharing in the Nordic welfare systems. They dive into the report, "Moving the Needle in Technical Development in the Nordic Welfare Systems," covering key positions like empowering leadership, building a solid data foundation, solving welfare challenges with AI, ensuring scalability, and fostering interoperability across the Nordics.  Don’t miss out on this in-depth discussion! 

Follow us on YouTube: https://www.youtube.com/@aiawpodcast

Peter Daniel:

I'm really careful with using the words understanding what AI is, sorry. I would much rather say that they are finding out what different types of AI can do for them, rather than trying to find one umbrella definition, because that umbrella definition is so abstract that it's hard to really apply to everyday problems.

Louise Callenberg:

I would say it's more leaders in the business who are interested in this kind of question.

Peter Daniel:

And this ties in really well to one of our key hobby horses here that we usually bring out, and that is that we need to put the tools in the hands of the people who are actually dealing with the everyday problems. So we need to raise the skill level of, for example, doctors or teachers, or what have you, in order for them to understand which problems are now solvable that weren't a couple of years ago.

Anders Arpteg:

It brings to mind a situation back in the day when I worked at Peltarion and we were going to help a health company. You know, we were sitting there thinking, this is the best use case to get started with, this is where AI would actually create the most value. And we started to build some prototype for it, some proof of concept, and they said, ah, it's okay. And then they simply started to think for themselves.

Anders Arpteg:

And they came back to us a couple of weeks later saying, you know what? We'd rather use AI for this instead. And we went, ah, nah, that's not really going to work. And then we tried it out, and it was so much better.

Louise Callenberg:

Because it's created value.

Anders Arpteg:

Yes, and they understand what needs they have. Of course we thought, as technical people, that we could figure out what creates the most value for them. But if they truly understand what the tool is and the problems they have, and they can themselves see what they can use it for, that's so much more valuable.

Peter Daniel:

Definitely. Fully agree.

Anders Arpteg:

It sounds like you reached the point where I'm at today a couple of years ago. I think there's also another benefit to it. If someone is pushing a solution onto someone, it's really hard to make it work. But if someone actually takes the initiative to drive it themselves, you don't have to do the whole anchoring work, so to speak.

Louise Callenberg:

And in that sense it's more mature today than like a year ago.

Henrik Göthberg:

And could we in some ways picture this conversation? How does it go now? So it starts a little bit like we think we need an AI strategy. What is AI? What do you mean with AI, moving into discussing use cases or problems they are thinking about. So how does that cycle go now, you think, if you contrast that to how it was before?

Peter Daniel:

In the ideal world, I would say that cycle is going to start accelerating now: when the core business people, the professions, the non-tech people, have their skill level raised, or raise it themselves, you can suddenly enter into a constructive dialogue with the people from the tech side of things. And when these people start actually being in the same room for long periods of time, just getting creative and tying those two skill sets together, getting them to overlap as much as possible and as soon as possible.

Henrik Göthberg:

I think that is the big, big key to empowering and getting some real mileage out of new technology. And how far in this conversation today would you say they are? Do they understand that this is technology they might be able to adopt within the current organization and steering and so on, and how much do they fully embrace that data and AI also change workflows?

Louise Callenberg:

You mean in the public sector? Yeah, so in this conversation, are they thinking...

Henrik Göthberg:

We love to talk about digitization versus digital transformation, right? So do they have an idea that something is going to be more automated, but within the frame of how they do things today?

Louise Callenberg:

Or do they have the reinvention idea already now? How do you see that? I would say it depends on who the person asking the question is. I've been looking at these Vinnova projects and seeing those applications, and if you have done your first homework, asking yourself what kind of value you want to create, what kind of data you have, and what kind of problems you want to find a solution for, then I would say those organizations are more mature. But if you just want "we need an AI strategy, because everybody else seems to have one"...

Henrik Göthberg:

Because that was of course the underlying question. When we see these observed shifts, that they have matured: are they maturing to "we need an AI strategy"? Or are they maturing all the way to understanding the implications of what they are asking for?

Peter Daniel:

Well, I'm really hoping that they're maturing beyond "we need an AI strategy, could you please devise that for us?". So I would say that people are moving past that. Hopefully, or verifiably, I would say that people are doing that. I would also say that... I lost my train of thought here.

Anders Arpteg:

If I may. I mean, I think it's okay to do both.

Henrik Göthberg:

What.

Anders Arpteg:

I mean with that is, if you want to digitize or digitalize, it's actually not a bad idea to simply start to optimize the current business processes you have.

Henrik Göthberg:

As a way also to learn what you can do to really see it.

Anders Arpteg:

But then if you understand that you can really transform even more if you start to change the business processes as well, that's fine. But it's actually not necessarily bad to start to just improve in the current processes.

Louise Callenberg:

We have seen, for example, that Copilot is making quite a big shift for civil servants in the public sector, because when they have their own experience of how to manage their Outlook in a certain way together with Copilot, they get this revelation that AI could really change the way they use their time. And then, when you come to how you can use AI in other processes, you are already more into that kind of thinking: what can this shift?

Peter Daniel:

Yeah, and I think a key point here is the word revelation, because a lot of the people I meet in my line of business now have this intellectual idea that this will be transformational. When you actually get hands-on experience, you don't only get the intellectual version of it, you can actually feel it in your gut, and I think that is a revelatory or transformational change.

Anders Arpteg:

Sometimes we should tell people, when they write an AI strategy: don't exaggerate too much. You know, it's okay to start slow, I think. It doesn't have to start with transforming the whole business directly; that's actually potentially a bad idea even. Sometimes I see these kinds of AI strategies and they are aiming too high, I think.

Henrik Göthberg:

Yeah.

Anders Arpteg:

Anyway, I think this will be an interesting discussion, but let us start first. Welcome, both of you, Louise Callenberg and Peter Daniel, to the AI After Work podcast. And actually it's more or less exactly four years since we started, and Louise was our first guest then.

Louise Callenberg:

Yes, I know, homecoming Four years. Wow, four years and two days.

Peter Daniel:

I think, who's counting?

Louise Callenberg:

Where someone is.

Henrik Göthberg:

Four years and two days, but who's counting?

Louise Callenberg:

That was great it was. Yes, I remember that call and I just love the idea. I still do.

Anders Arpteg:

Yeah, I'm not sure if we have evolved, Henrik. We're doing more or less the same, but some improvements, I think, Still having fun right.

Henrik Göthberg:

I don't know, I don't want to improve on having fun, I'm just going to keep it there.

Louise Callenberg:

But actually we talked about some ethical questions then, and we talked about: is there any God in tech and data? That's my background, as a theological person who is really into that kind of value discussion. Is there any God in data? Yes, God in data.

Louise Callenberg:

And my definition, because we talked about the definition of AI before, it always starts there. In that sense, you have to define what you mean by something good. So if you mean that you can make life better for humans and citizens in Sweden, then I would say yes, but it's all about what you want to do with it.

Anders Arpteg:

I add that to the list.

Louise Callenberg:

Yes, you do that, but short introduction.

Anders Arpteg:

Yes. Louise, perhaps, how would you introduce yourself, very briefly?

Louise Callenberg:

I would introduce myself as a humanist that loves technology and loves a society that brings the best out of what we can do together. And I'm also an entrepreneur. I've perhaps always been, but I've been working as a civil servant for most of my career.

Anders Arpteg:

Four years ago you were working at SKR, right? Yes, and you were basically heading up their digital transformation work. Yes, so really cool times. But now you also have a number of companies, Leadership ARTS, right? Yes.

Louise Callenberg:

And we still help organizations to lead their transformation. So that's what we do. But we are totally focusing on skills, on human skills. Because when we are talking about digital transformation, we always get to that culture question: how can you make people in the business like this change? And how can you, as a leader, drive the transformation with courage? But also, when everything bad happens, you still have to steer and have a clear vision. How do you do that? I think that's a part of our human skills that needs to be addressed.

Anders Arpteg:

So Leadership ARTS is basically like an educational company that helps people actually carry out the digital transformation? Yes.

Louise Callenberg:

And we have different kinds of organizations with us. Today we had Coop, you know, the grocery store, and we have been to different municipalities as well.

Anders Arpteg:

You also have a very interesting background, actually having a bachelor in theology.

Louise Callenberg:

Yes.

Anders Arpteg:

Which is, I think, very, very interesting as well. So, coming back to the idea, is there God in data?

Louise Callenberg:

Yeah, and I have a podcast called Bättre Delat, with AI.

Anders Arpteg:

So Bättre Delat, Better Shared. I think that's an interesting concept; I'll actually add that as a question very soon. But before we go into what Bättre Delat is and the report which is the main theme for this podcast, perhaps we can just hear a bit more from Peter Daniel.

Peter Daniel:

I would describe myself as an engineer, a change manager and an advisor, mainly to public entities nowadays. I've been doing this for 25 years, and slowly but steadily I've been evolving away from pure engineering and more into the human, people side of things. So I think that Louise and I are now meeting each other in the middle here somewhere. Exactly. But you have an engineering background as well? I do have an engineering degree, but it's been mostly PowerPoint engineering since then.

Anders Arpteg:

if I'm really honest, but there you go.

Peter Daniel:

Yeah, awesome.

Anders Arpteg:

And you're working as a partner at PA Consulting, right?

Peter Daniel:

That's right. Yeah, I'm one of nine partners in Sweden for a company called PA Consulting, which is a multinational company headquartered in the UK. We're about 4,000 people, everything from engineers to UX designers to management consultants, leadership advisors, et cetera.

Anders Arpteg:

But you're focusing on public sector. Is that correct?

Peter Daniel:

My main focus is public sector nowadays.

Henrik Göthberg:

Yeah, because I have an old background in Vattenfall, and I remember a friend who joined PA Consulting in the energy domain, so to speak. So I assume you have several different domains in PA Consulting, as well as technical disciplines, or different disciplines?

Peter Daniel:

That's a good description. Yeah, we're servicing five different segments and we have a number of different capabilities. So, yeah, there's some sort of matrix at play here.

Henrik Göthberg:

And which are these five domains?

Peter Daniel:

We do public services, as we already mentioned, and energy, as you brought up. Then we have financial services, we have consumer and manufacturing, and we have transport and logistics. And we are, I wouldn't say unique, but slightly different. One thing that sets us apart is that we focus quite heavily on the public sector, so about 50 percent of every hour we spend at work goes towards the public sector in some shape or form.

Henrik Göthberg:

Just to orientate ourselves a little bit. Practically now, when you talk about AI and AI strategy in the public sector, how does that work? Segment focus and then discipline focus on different topics?

Peter Daniel:

Yeah, we do. Obviously, like everybody else, we have been investing quite heavily in building our competence in AI, and that is basically everything from ax till limpa, from cradle to grave: from strategy formulation, to making concrete action plans, to actually building proofs of concept and implementing solutions. So we try to cover the whole value chain there, and we do that in a very global way, which means that we have a global center of excellence for AI.

Anders Arpteg:

Cool. And before we move into the main report, which is interesting, can you just describe it? What is Bättre Delat, Better Shared? You're the founder of that, right?

Louise Callenberg:

We actually founded it together. We're like an old couple now, aren't we? It came out of the cloud discussion. That was like a blanket over the whole debate some years ago, perhaps four years ago when we sat here. We had that kind of positive view of digitalization in the welfare system, but then came the GDPR and also a kind of scare around using cloud services, and that was something that was kind of surprising, because we were using quite a lot of cloud services in Sweden, before and after.

Louise Callenberg:

So the frustration was that we saw leaders and politicians getting scared of digitalization and of leading that change, and at that time we wanted to give some perspective and nuance.

Louise Callenberg:

But we also said that we have to focus on this question, not from the financial point of view, but what can be better in life and in quality of living, and can data and data sharing and cloud services save lives?

Louise Callenberg:

Then you have the moral question, and then you can talk to the politicians about how to take risks. Because what we saw was that, okay, there is some unclarity in how to interpret GDPR, or we have some hindrance in the law, but to deploy and lead you actually have to have the knowledge but also the vision. If you don't know what to do with it, it's very hard to say to a politician or decision maker, please be bold; they don't know what's at risk or at stake. So that's where we started to create those cases, human business cases, and we said it has to be data-driven and have a lot of facts. And that's why we collaborate: because I have a clear vision of how to talk to politicians, and you from PA could do the math.

Peter Daniel:

So yeah, we both, I think, came to the same conclusion: digitalization is just too boring to make policy out of if you just keep talking about the financial benefits of doing stuff.

Louise Callenberg:

Yeah or the infrastructure.

Peter Daniel:

So what we said was well, we need to make this interesting for politicians, we need to talk about how many people die or fall seriously ill, or what have you from us not using the technology we have available already?

Anders Arpteg:

Awesome, that's a really good way, I think.

Louise Callenberg:

And then we gathered leaders from municipalities and regions to discuss the cases in roundtables, and we took very many notes, and then we discussed what positions those leaders want us to take as a collective, as a society, and then we put out the first report. So our white papers are basically notes from discussions with practical leaders who want a change, and I think that's the key factor for our success. It's not you and me who are the great thinkers here. We are, but that's not what's in those reports. It's basically those leaders who are taking the bold steps in regions and municipalities.

Henrik Göthberg:

I'll try to summarize, if I get it. So, in a nutshell, Bättre Delat is all about this: there is frustration and friction in terms of whether we should be doing more, or less, or like this. So you collect the real people, the real leaders, and you facilitate the conversation, where you end up articulating a position on something, or different positions, a common ground and an aligned view that several can stand behind, as a way to build momentum, build consensus and mobilize.

Peter Daniel:

That's a very good summary, and one of the really encouraging things we found was that, I mean, when we set up, it was basically me and Louise being a bit ticked off with the current state of events, right, and what was so encouraging was that after only a couple of weeks into this, obviously, we used our networks to find people in the different areas of local and national governance and you realized how many people actually shared this frustration that we had. So just building that went really really quickly.

Henrik Göthberg:

And when did you do the first meetup report? Which one was the first?

Louise Callenberg:

The first one was on health, and I think it's three years ago.

Peter Daniel:

Yeah, 2021.

Henrik Göthberg:

Can you give us a brief overview of those meetings? What types of reports or positions and discussions have you ticked off so far?

Peter Daniel:

Well, first of all, one of the things is that we try to feed a number of very concrete use cases into these meetings. For example, we're saying that it is indisputable that online, connected pacemakers or insulin pumps significantly improve quality of life for the people who use them, compared to old-school, offline tools. And there were some issues with where the data is stored. It's personal data, so it should be handled with respect and care. Where is that data stored, et cetera? And there were a lot of very conservative interpretations of how you're allowed to use this technology.

Peter Daniel:

So we basically fed in a use case saying that this many people are negatively affected by not getting access to modern tech, and then we asked the network: do you agree, is this a relevant case, and if so, what can we do to alleviate this? And I think one of the biggest takeaways from the first paper was the acknowledgement that these are actually complex issues and, as in almost everything within public governance, it's a matter of resolving conflicting goals, conflicting objectives. On one side, we need to preserve personal integrity, and on the other side, we also have a legal and moral obligation to provide the best possible care.

Henrik Göthberg:

And to preserve life.

Peter Daniel:

To preserve life, if you take it to the extreme.

Peter Daniel:

And it's actually very hands-on. I mean, people are dying unnecessarily from not getting access to modern care. So this is not new to the public sector; doing these trade-offs is something we do every day. The National Road Administration, they know exactly how much money they are prepared to invest to save one life, to keep one person from getting killed on the highway, right? Everybody in the public sector is making these judgments or trade-offs. But for some reason, when it came to digitalization, it turned into a very dumbed-down and black-and-white discussion. And that is actually one of the things where I've really seen a change during the past few years: people actually acknowledge that it is a tricky question and that you need to be able to work with different levels of risk here, and that as a civil servant, you need to work hard at creating change.

Louise Callenberg:

If it's a problem with the law, or the interpretations, or how they compare with other countries, you need to work on that. There are some civil servants who have that role, and they also need to go to work and make a change, but they also need to know what kind of change is being asked for. So this was also a way to point out the barriers that we have and start to collect momentum.

Henrik Göthberg:

But I think this is very good in so many ways and there's so much research you know Kahneman et cetera that highlights how we are illogically risk averse, right?

Henrik Göthberg:

So human nature we are conditioned to take less risks. So I don't think we think about this until we get it facilitated or shaped in such a way so you understand the full risk spectrum and then all of a sudden you can have a completely different conversation. So people are not dumb, but they're illogical because they have only considered a small piece of the puzzle in one question.

Anders Arpteg:

So, moving to the theme of today, so to speak. Can you elaborate what is the report that you recently published?

Louise Callenberg:

Well, that was our third report, and the report that we put forward was Moving the Needle in Technical Development in the Nordic Welfare Systems.

Anders Arpteg:

So the Nordic system.

Louise Callenberg:

Yes, because we had a thought last year that we are having this discussion in Sweden, and we see that the needle is moving, but it doesn't really move that fast. And if you look at the data that you need to develop those kinds of big data technologies, you really need to focus on how to collaborate across borders. That's one of the things. But the other one was that we don't have enough competent people in Sweden, and they don't have it in Denmark either, and not in Norway. So we also need a way to, you know, one small country plus two small countries makes a bigger country. We can't do it ourselves.

Anders Arpteg:

But if we do collaborate. We could be good enough that actually boils down to the amounts of data too.

Peter Daniel:

That actually boils down to the amounts of data too. It turns out that our data pools in Sweden, when it comes to a number of different types of care, for example, are just too small to draw the right conclusions. So by pooling our resources we can make things so much more efficient and so much better.

Louise Callenberg:

And then you have the third point, that we are interpreting the EU legislation differently. So Finland has one kind of strategy, Estonia is doing one thing, and Sweden, Denmark and Norway something else. I must ask: who is the most conservative?

Peter Daniel:

Right now, I think Denmark. It used to be Sweden, but I would say that Denmark is giving us a run for our money right now, unfortunately.

Louise Callenberg:

Unfortunately, yes. But it was very interesting to gather the five cases in this report. We took one case from each country, so there are five, and then gathered leaders from those countries to discuss, just the same method as we always use. And we actually thought that they would disagree more, but the thing is that they found, as usual, strength in each other and really liked this kind of discussion, and also the conclusion that we need to collaborate. It was so clear.

Louise Callenberg:

So you put together some kind of meeting or workshop with leaders in the welfare systems? Yeah, researchers and people with backgrounds as heads of municipalities or regions, or acting heads of municipalities and regions.

Henrik Göthberg:

And overall. Now, how does it work? Is it a one-day think tank, two days, or how do you structure it?

Louise Callenberg:

It's a process. The first question is the cases. So you need to call up a lot of people, and that took a lot of time, because you need to find them. In Sweden we know people, but in Finland or Estonia it's harder, so you need to use your network.

Henrik Göthberg:

To sort out. Use the starting point with the cases you want to bring in.

Louise Callenberg:

That's one segment of the process.

Henrik Göthberg:

Okay, if I go the cycle now of one meeting.

Louise Callenberg:

And then you also need to choose which kinds of cases you want, because there are perhaps a lot of cases from cancer care, but we can't have five from cancer, so we need to, you know, find what kind of... Case selection is one thing, and the other one is who we are going to invite to our roundtables. And the roundtables are like two or three meetings.

Henrik Göthberg:

Two or three meetings over a period of weeks? Months?

Louise Callenberg:

A couple of months, yes, because in between we develop the cases and we also start to write the positions. So the second roundtable meeting is about the findings from the first. So you have that discussion: this is what we find that you are saying.

Henrik Göthberg:

Are you agreeing?

Louise Callenberg:

And then we….

Henrik Göthberg:

You can literally draw up a straw man and a disposition and a draft point of view and then you iterate and the like.

Peter Daniel:

Because you need to keep in mind that a lot of the participants we have they are actually parts of public entities, which means that they need to be careful on what they sign, because they're doing that from time to time in an official capacity, which means that we need to do this very thoroughly and very iteratively to make sure that everybody is still on board with what we're saying together.

Anders Arpteg:

Cool. If you were to try to summarize the challenges that all the Nordic countries are facing... You mentioned a couple already, I think: we're too small, both in terms of the competence that we have and potentially the amount of data. But how would you summarize the main challenges for the Nordic welfare systems?

Louise Callenberg:

Something I could add to that list is fragmented infrastructure. Totally fragmented. And also, we don't share data, the obvious thing. So: competence, financial issues, we are too small, we don't share data, and the fragmented infrastructure.

Peter Daniel:

And then we have the two megatrends. Obviously we have an aging population, with fewer and fewer working people needing to support more and more elderly people. And also, since digitalization is moving at pace in the private sector, people are expecting the public sector and the welfare services to behave in similar ways to the digital services they use that are delivered by the private sector. Which might, if you're not careful, end up in a situation where people are frustrated with what the public sector delivers, and that, in turn, is going to hurt democracy, to be honest, to undermine our faith in our democratic system. So these, I would say, are the two megatrends, to use a bit of an overused word.

Louise Callenberg:

That's interesting, because our working name for this was Democracy at Stake. Because we were drawing the line out there, and we saw that the welfare system doesn't have enough capacity in itself to solve its problems, so AI can help solve them. And if we don't use AI or technology to do that, we have a problem with trust in society, and with democracy in the end.

Anders Arpteg:

When you did these kind of workshops, I guess AI was AI really the main focus or was it like more general kind of what are the digitalization challenges in general, or what was the focus?

Peter Daniel:

We were slightly more specific than general digitalization. We've been using the phrases data sharing and AI, and from time to time we just sort of used the blanket phrase with new technologies. But in reality it's about finding ways to share data across organizational boundaries and then applying different types of AI-related tools to them. I would say that's the definition.

Henrik Göthberg:

And how far are we getting in these conversations with, what I assume are, leaders now in understanding the why, the what and the how? So: why we need to do it; what, on an abstract level, we need to do; but also what is blocking us, or what the fundamental capabilities are. You know, sharing data is easy to say; to do it correctly you need to start being a little bit concrete about what you're talking about. So in these conversations, what's the aim? Is it more the position paper on the why? How deep are the conversations going?

Louise Callenberg:

Well, I would say that these are leaders who know the why, and they have an idea of the how as well, and of the fundamental needs. But there are quite few leaders who have this insight.

Henrik Göthberg:

Yes, I would say so in the private sector.

Louise Callenberg:

If we talk about the leaders. But you have to remember, these are the people who organize the organization. So if you have entrepreneurs in your municipality or region who do those pilots or proofs of concept with AI, you know, in silos, then you don't have a way to scale it up. So one of the things that we talked about, or focused on, in this paper and in the discussion was how to scale. Because one thing is to be a leader who builds an organization that can change, so you can adopt AI, but eventually you also need to learn how to scale, even inside your own organization.

Anders Arpteg:

And, if I understand it correctly, I think one term that we heard other people like Patrick Eckemo and others speak about is the prototype graveyard.

Louise Callenberg:

Yes.

Henrik Göthberg:

There are so many people that you know.

Louise Callenberg:

"I want to build a proof of concept to do X, Y and Z", and it's really hard to get from that to something that is actually put in production and providing value. And if I understand it correctly, you're saying that from the start you should have an idea about how to scale it? Right, yes, and you need the top leaders to build an organization that allows you to scale. Because the frustration that Patrick is talking about, the graveyard, is actually about people who are so fired up about this use of AI, and they probably also have really good ideas and pilot results, but when it comes to scaling it up, they have a director who does not have that insight.

Louise Callenberg:

So then they say, oh, we can't take that on, we don't have more money. And the collaboration with that municipality? No, that's not in our strategy.

Henrik Göthberg:

But this is a really important, and also quite deep, topic to talk about, scaling. Because we can discuss scaling from doing a pilot: we do a research project, we get some Vinnova money in, we get a couple of PhDs from RISE, and we do something super cool, and it's a point solution built on spaghetti for one project. And this is beautiful. Not everyone is doing that, no, no.

Louise Callenberg:

Some use spaghetti, some others use penne.

Henrik Göthberg:

What I'm trying to say is: you're building something and putting it in production. First you have something on paper, then you put it in production, but on a small scale, maybe with people who are not actually part of the real organization. Then we can talk about scaling in terms of adoption within the larger organization, but still for one municipality. Then you come to the other end of scaling, where, in order to understand how you need to build things, you need to think: well, you're going to have two or three hundred of these, and how would you then go about data sharing? Because to do data sharing for one single case, I can do that in production with the IT department, but it's not going to be the same story. So this scaling topic that you were alluding to, I assume this is exactly what you meant. This is a big one.

Peter Daniel:

A friend of mine who is running a startup.

Peter Daniel:

He came back from an investor meeting and was really fired up and said, this was probably the best potential investor meeting I ever had, because she asked exactly the right questions. And one of the first questions she asked was: is this architecture scalable? Can you actually get this to grow sustainably over three or four years without having to refactor or rebuild it all from scratch again? And if that's a fairly odd question to get asked when you're trying to find seed money somewhere in the private sector, it's probably an even rarer question to be asked in the public sector, especially since there are so few people with an overarching responsibility for ensuring that things are scalable across municipalities or across regions, et cetera. But I really do think, and that's one of the things we're putting forth in this paper, that in order to not just do cool stuff but to actually effect significant change and realize significant benefits, you need to have that scalability issue addressed from day one. Because, as you're saying, the graveyard is full of cool pilot projects.

Henrik Göthberg:

So many questions popping out of that. Can a municipality scale this on their own? We should try to keep it as short as possible.

Anders Arpteg:

I think we will fail again.

Henrik Göthberg:

How's that working out for you? How's that working out for us? Not so good, too much fun. Too much fun, that's what I'm saying. I don't want to change the format.

Anders Arpteg:

It's too much fun. Anyway, we need to scale and we need to share data in different ways, as you say, and not only share data within an organization, but also across regions and even across countries. But then we have the ethical issue of that, and personal integrity, as you say. Is it still manageable, would you say, to have an ethically sound solution for sharing data? How do you solve the ethical issues with sharing data?

Louise Callenberg:

There are some politicians, I think, who need to address that kind of question, and to have a discussion across the Nordic and Baltic region on AI ethics, not only on integrity but also on what kind of society we want, you know? There were, of course, some large ethical questions when we started to electrify society, and I'm thinking about the lakes and... I don't know the English word. The dams? Is that the English word?

Henrik Göthberg:

Dammarna.

Louise Callenberg:

Dammarna, exactly. So that's one kind of ethical question that was asked at that time: what are we going to do to provide society with that kind of electricity? And it is a politician's responsibility, I think, to solve those questions. But the interesting thing in Sweden is that we are already sharing so much data, our personal data, but when it comes to the authorities and their view of the responsibility over the data they have collected, there you have an interesting point of view.

Anders Arpteg:

So can you give an example of how we share data already? I mean, I guess, all the tech companies and social networks. We share so much data there, right? Yes, is that what you're thinking about?

Louise Callenberg:

That's one of the things I'm thinking about. But also when you go to the pharmacy to collect your drugs and medicines, and someone asks you what your personal number is, and you say it out loud. That's also a way to share your personal data. And in the digital arena, of course, when you fill in forms on the internet, buying stuff.

Henrik Göthberg:

We've discussed this here. How funny it is in Sweden, where we have GDPR on the one hand, but I can Google, go to Hitta.se, and find all the people I want.

Anders Arpteg:

I can find your phone number and even salaries. You can find things that are sensitive under GDPR, but it's out there for me to buy. How does that work?

Louise Callenberg:

Yeah, and if you're driving your car through Sweden, you see these big, big billboard signs from Ratsit: do you want to know what your neighbors earn?

Peter Daniel:

It's so weird.

Louise Callenberg:

And at the same time we have this discussion on integrity. But I think that actually comes from offentlighetsprincipen, the principle of public access, and the kind of training that we have in the public sector to really be careful with the data and information that you gather from people. So in Bättre Delat we also have one of the positions that the leaders have pointed out: we need to talk about the shift from the authorities having the responsibility over your data to "this is your data".

Anders Arpteg:

Right, taking ownership of your own data. Exactly.

Henrik Göthberg:

Maybe one key question here: could you summarize? The way the report is laid out is with the key use cases and then positions on a certain number of topics. Could we briefly summarize the key positions, and then we can discuss how they were arrived at and what they mean?

Louise Callenberg:

Yeah, I can take the six points that summarize the report. The first is lead by empowering. It's all about the leadership, but also about not just talking about what AI can do, but doing it. That's basically what the point is. And number two is build a solid foundation for the use of data, the infrastructure and the sharing, the processes.

Louise Callenberg:

And data quality and stuff like that. Number three: talk about how AI and data sharing can solve the welfare challenges, and what we mean by that is to focus on the challenges in society, not on the technology. On the jobs to be done and the value created.

Peter Daniel:

And this is where we can start making policy, politik, out of things, where we actually get politicians interested and where we can get the general public interested, by talking about the concrete benefits that new technology can deliver to your ordinary family or your ordinary citizen. We feel that's not being discussed enough. Instead we go all the way into details, talking about leaks of this and that.

Henrik Göthberg:

And we have conversations, of course, with people who are close to the AI civil minister and the AI strategy and all that for Sweden. And one of the key, hardest points is: how do we make politics out of this?

Anders Arpteg:

Exactly.

Louise Callenberg:

Compared to all the other stuff we want to discuss in Almedalen, right? I have a lot of thoughts about that; he can call me.

Henrik Göthberg:

It's very simple, because it's about the jobs to be done, and then, when it's the jobs to be done, then we're back in hardcore politics, exactly.

Peter Daniel:

And we don't do that enough. Oh sorry, I'm going to hang back.

Louise Callenberg:

But we can go back to that. I can just say the other three. So four put the citizens and end users first, and what we mean by that is you have to get the end users engaged in the use of data and AI. So the doctors, the teachers, the social workers, they have to be a part of the development.

Henrik Göthberg:

The bottom line is that it's defining who you are doing this for. For whom, doing what, is the core question, the job to be done.

Louise Callenberg:

If you don't get the story here.

Henrik Göthberg:

What are you solving?

Louise Callenberg:

We have a great collaboration with Barncancerfonden in Sweden. They represent the citizens. If you collaborate with them, they have the challenges, they have the money, they have the drive. So if you do it together with them, you can really solve some big challenges. And number five: make sure it is scalable. We actually have that as a point, make sure it is scalable from the design.

Henrik Göthberg:

Think big, start small scale fast.

Louise Callenberg:

Exactly. And number six ensure interoperability across the Nordics. So here we actually point out that we need an interoperability center in Sweden. I think that's also something that the Nordic Council of Ministers just started last week.

Henrik Göthberg:

And what do we mean by sharing the investment in the ideas or the best practices or how we do things? Or, concretely, what do we mean?

Peter Daniel:

We feel that the Nordic countries, and then we're actually including the Baltics as well. We feel that we both share the same challenges and, to a very large extent, the same culture.

Henrik Göthberg:

And values.

Peter Daniel:

And values definitely, which means that we feel that we should be able to provide a unified view within the European Union, because that's where most of the legislation happens.

Henrik Göthberg:

Exactly. So instead of each of us being noise in the European Union, we can become a signal, a Nordic signal.

Peter Daniel:

Very much true.

Louise Callenberg:

And perhaps also lead by example on responsible AI.

Henrik Göthberg:

Yeah, so if we really want to get stuff done, we can collect ourselves in this story, of course.

Peter Daniel:

We believe that's true. And well, the network believes that that's definitely true.

Henrik Göthberg:

Or not maybe true, but the necessity to make ourselves heard.

Louise Callenberg:

And here you can also share standards, that's a common issue that comes up, but also foster a common ground on AI ethics, as you pointed out. And here we also talk about the Swedish model, or the Nordic model, which is quite strong in our countries: how do you work together with the unions, for example?

Henrik Göthberg:

This is drastically different to other large industrial nations. They don't understand our union cooperation at all.

Louise Callenberg:

But do we do it in Sweden today?

Henrik Göthberg:

No, what do you mean? Do we do what?

Louise Callenberg:

The collaboration on AI with the union. No, have you seen it? No, I haven't seen it.

Henrik Göthberg:

I'm uninformed, completely uninformed.

Louise Callenberg:

Because it's not a political issue. I think that's why.

Anders Arpteg:

Interesting I haven't thought about that yet. To make this a political issue, I think is an interesting question. I'd really like to get back to that shortly.

Henrik Göthberg:

Let's wrap up the core topics of the report. So here we have six topics. I mean, of course we can dig into them and try to understand where we're coming from, but I kind of leave that up to you. Out of those six, which one do you think is most interesting to dig into more and to really go behind the scenes, what was discussed and how you arrived at it? Not for all six, but for the ones that were really interesting.

Peter Daniel:

Now you're asking us to pick our favorite child, right?

Anders Arpteg:

Exactly. Before that, though: you chose six of these positions for some reason. Was this done through these discussions, or how did you end up with these six positions?

Peter Daniel:

By listening to a number of really smart people who have been thinking most of their waking hours about this for the past couple of years.

Henrik Göthberg:

It wasn't like a gross list, that sort of got clustered.

Peter Daniel:

No, no, it's more a matter of... I think it's been a very creative process. The engineers among us have been feeding in a number of cases and then opening up a discussion on that, and then Louise usually does a great job of...

Peter Daniel:

That sounded slightly negative. Louise always does a great job of facilitating a discussion among these senior leaders, and one of us is sort of frantically scribbling down keywords or good phrases, and then we start feeding that back and circulating it. So it's very creative.

Louise Callenberg:

I really pick their brains and dig deep.

Anders Arpteg:

If I were to be like a devil's advocate here and try to find some kind of you know, potential issues with the positions?

Louise Callenberg:

Sorry for doing that.

Anders Arpteg:

But in some way, I mean, I'm allergic to platitudes sometimes. I think the first one, for example, lead by empowering, is really good. And having the real users in some way step up and do the work, that's a much easier way to get some true value and something into production. But how do you do it? I mean, what does it mean to lead by empowering?

Louise Callenberg:

Yeah, I think you just pointed out the child that most of the leaders wanted to have as number one. Or actually, what they said was: this is the top position that needs to be addressed in every one of the others. So you can't have a data foundation if you don't lead by empowering. You can't have scalability if you don't lead by empowering. Et cetera, et cetera.

Anders Arpteg:

Because the way I try to avoid platitudes is to add a "not" somewhere. So what happens if you do not lead by empowering? Nothing? But what does it mean? It basically means that you are not empowering others to do stuff, but decide for them in some way. Is that the right way to understand it?

Louise Callenberg:

Yes, I can give you a picture, and this is a nidbild, a caricature, I don't know the English word for that, but just hear me out. Many leaders or civil servants today can describe a leadership that sits with a red pen. So when you come back to your boss, and it could be a director in a municipality, a region or an authority in Sweden, you have a leader who sits there with their pen saying, don't do that, do this instead.

Anders Arpteg:

Because they know, they are the experts.

Louise Callenberg:

Exactly. Because that's how we see leadership in the public sector today: you have to be an expert, you have to be the one who decides. So leading by empowering is the opposite.

Louise Callenberg:

So it's creating the preconditions, basically, for other people to come up with innovations? Exactly. Because you have to make sure that your organization can learn from mistakes but also from successes, and also have this culture of sharing stories and sharing solutions without prestige. So leading by empowering is actually building a culture that's quite different from the kind of culture we have today. And you can reflect on why we have that, but I think it's kind of interesting. People are very afraid of doing wrong.

Henrik Göthberg:

In Sweden, yes. The tricky point is, I mean, I think we are getting it. The tricky point is how we're expressing it, and whether you can understand if it's a real position or strategy: you can always test it by putting "not" in front of it.

Henrik Göthberg:

If it doesn't make any sense with "not" in front, well, no one would ever put "not" in front of empowerment. So in that sense it's a tricky one. So what you're saying now, when we're looking behind the curtain, so to speak, it's interesting to think about how you make that more pointy, I guess. Because the empowerment here is key and they are taking it up, but it's so easy to say: oh yeah, I'm empowering, I'm listening.

Peter Daniel:

If I understand correctly, it's also that people are actually not empowering people today, perhaps especially in the public sector, where they are basically... Yeah, sorry, you can say something. Yeah, no, I'm just saying that I think we should consider this a headline, and we actually fill out quite a lot of the blank space with a bit of detail for each position. But if we're talking about empowering, for example, you can dig up so many examples of everything from, like you're talking about, culture, to legislation, all the way down to how public entities are governed financially, which makes it really, really hard to try and fail, because you're really penalized financially for trying something out and failing.

Henrik Göthberg:

So there are so many concrete examples. Ask whether a leader is empowering her staff: of course she is. But what we are talking about is systemic empowerment. We are talking about things in the system, and things in how we are incentivized or not, that are actually an anti-pattern for empowerment. But if you go to the personal level, "I'm doing my best in my circumstances, within my constraints", everybody would argue that they are doing their best to empower.

Louise Callenberg:

Exactly, it's this sharpness that we want. And something else: the financial part is very, very important, because if you don't have financing and support for failure and learnings and investment, you don't have...

Henrik Göthberg:

This is not a platitude. This is a systemic problem in how things are run or steered or governed in policy; that is what you want to address.

Louise Callenberg:

Exactly, but you also have the reskilling and upskilling. That's also one of the key points in empowering: you have to upskill and reskill all your workers, and it's different if you are a teacher or a civil servant working with policies, but you have to do it.

Henrik Göthberg:

But to be concrete, empowerment means there needs to be slack in terms of investment and time in order to reskill and upskill. So we can say that you should upskill and reskill your staff, you should teach them about AI, but you have no money, you have no time, you're understaffed, under budget, and, by the way, we don't do that here.

Peter Daniel:

That is empowering. As you're saying: we give you, A, the knowledge or the skills; we give you, B, the money and the time. It's time and resources.

Henrik Göthberg:

So when we get to this point, it becomes super obvious that we are not empowering the way we should.

Anders Arpteg:

But you need to go beyond the headline to get here, of course. This is true. And let's move to the second one and try to play the devil's advocate there as well.

Henrik Göthberg:

It was fun, because when you did it now, it was so much stronger when we came to that level.

Anders Arpteg:

And the second one then: build a solid foundation. It's hard to put a "not" here either, but still, who would not want a solid foundation for the use of data? What I think you mean here, and correct me if I'm wrong, is that it's really hard to have the infrastructure to work with data in a good way. I guess the question is: is it for a single organization to simply manage the data and do the data governance properly themselves, or is it more about the collaboration? What do you mean when you say you need to have a solid foundation? Because I think everyone wants to have a solid foundation here.

Louise Callenberg:

Exactly, and a robust use of data and reuse of data. Yeah, everybody wants it, but to be able to have it you have to share. Well, I'm not a data engineer, so please help me out, but you need to have the possibility to structure your data in the same way.

Anders Arpteg:

So that enables collaboration.

Louise Callenberg:

Yes, so the data needs to work together, and some of our municipalities put forward that we need, like, an API strategy in Sweden. We need to have politics on that: every piece of data that can be shared should be shared, stuff like that. But that's just the politics. To make it concrete, you then need the standards work, and again, in electricity, which we have had for a long time, you have the standards, and we have a lot of standards work in different technical areas.

Anders Arpteg:

If we didn't have a solid foundation for the electrical grid, for example, which needs to work throughout the Nordics, it would be horrific.

Louise Callenberg:

Yes, totally.

Anders Arpteg:

That perhaps is similar, it's similar.

Louise Callenberg:

And electricity and water are a very bad combination. So if you have a ship that you want to run on batteries or electricity, and then you come to shore, you need a standard to make it safe so that nobody gets hurt, et cetera.

Anders Arpteg:

We need to have a Nordic data grid.

Louise Callenberg:

That's one of the things. Sorry, but the other thing in a solid foundation is also that it needs to be funded and have the power to be the data lake that you go to.

Henrik Göthberg:

But this is the point, right? To have a solid foundation is a no-brainer. To structure it, and to politically clarify the mandates, what is federated and what should be centrally funded versus municipality-funded, that is the core question here, in my opinion. So a solid foundation as a position headline is good, but it's quite weak if you're not taking any positions on what we mean here. And researchers at Gothenburg University wrote an article about the digital paradox in Sweden, if you have read it, with the four paradoxes, and I had a deep discussion with them. This is normal stuff, man. This is what we've been doing for 10, 15 years.

Henrik Göthberg:

It's not either-or, by the way; it's a hybrid, so you need to, you know... So I think, if we're not getting to the real meat of the conversation, it could still be so much stronger.

Anders Arpteg:

I worked with the health sector like 20 years ago or something, and in Sweden it's kind of interesting. You have at least, I think, five different data formats just for sharing patient journals in Sweden, and they're not compatible with each other. So just moving from one region to another can suddenly mean that this hospital cannot really read your data. I'm not sure about today.

Louise Callenberg:

Well, it's the same today.

Anders Arpteg:

It is yes.

Louise Callenberg:

That's one of the cases that we have run into many times, and actually it's not only between regions, it can be department to department really. So that's not a solid foundation.

Anders Arpteg:

No, it's not.

Louise Callenberg:

It's a real problem. But one of the things, because I've been thinking about this: how do you make that happen? Probably by decisions, right? So humans need to gather and decide on informatics and taxonomy and stuff like that, as you have been doing in infrastructure.

Anders Arpteg:

I did research in the Semantic Web like 20 years ago, and they tried to come up with a single standard for the whole world, and it failed completely. And it is hard. Sometimes, I think, coming up with a single standard that everyone agrees on will be really hard, but I think AI can actually circumvent that in some ways. We can have formats that may be different but translate between them, and perhaps we can have ways to share data without everyone having exactly the same standards and formats.
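As a minimal sketch of the "translate between formats instead of forcing one universal standard" idea (the two record layouts and the field mapping below are entirely hypothetical; in practice an AI model might be asked to propose or verify such a mapping from sample records, but here it is written out by hand):

```python
# Hypothetical example: two regions store patient journal entries differently.
# Instead of forcing both onto one universal standard, a translation layer maps
# one format onto the other.

region_a_record = {
    "pnr": "19800101-1234",      # fictitious identifier
    "besoksdatum": "2024-05-01",
    "diagnos": "I10",
}

# Mapping from region A's field names to region B's field names. In this sketch
# it is handwritten; an LLM could be asked to suggest it from example records.
field_mapping = {
    "pnr": "patient_id",
    "besoksdatum": "visit_date",
    "diagnos": "diagnosis_code",
}

def translate(record: dict, mapping: dict) -> dict:
    """Rename fields so a record from one system can be read by another."""
    return {mapping[key]: value for key, value in record.items() if key in mapping}

print(translate(region_a_record, field_mapping))
# -> {'patient_id': '19800101-1234', 'visit_date': '2024-05-01', 'diagnosis_code': 'I10'}
```

The design point is that the agreement then only has to cover the mapping and its principles, not a single format that every region must adopt internally.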

Louise Callenberg:

I think that's the key point here, because we need to think about it in a different way, not solve it in the ways that we have been doing it. But here's the hard part: when we talk to each other, we don't have that kind of metaphor for how to do it in the new way. But if you think of AI as the translator, then you need principles or other things, but you also have to make the decision. Of course, we in Sweden are going to make it so that you as a patient are able to collect your data and it should follow you, because that was our first position paper. It was on the issue of the value chain, or the health chain. What did you call it?

Peter Daniel:

And actually being able to own your own data. That's one of the principles that we put in the first paper.

Anders Arpteg:

Do you think that? Yeah, sorry, go ahead.

Henrik Göthberg:

No, I think, in order for it to be a position paper, because ultimately I was joking about that: oh, I want to build a solid foundation, we have all agreed upon that. But what is the architecture of a blue foundation versus a red foundation? Because there are very different ways to reach a solid foundation. Do you want to do it monolithically, in one huge monolithic data lake, centrally? You know, that sounds interesting.

Peter Daniel:

Yeah, or do you want to do?

Henrik Göthberg:

it in modern distributed architectures, with the platform and stuff like that? So here is the fundamental question: where does the power for this foundation lie, and the investment and mandates, and how do we foresee that? So it's sort of not...

Peter Daniel:

Until you get there, the technical stuff becomes politics as well. And I must say I'm slightly flattered that you're asking us to provide a detailed answer to that. No, I was curious on how far into the topic we got. Right now we're at the stage where we want to raise awareness. I want to stress the intended target audience for this paper; I would say that there are two target audiences.

Henrik Göthberg:

This is an important comment, by the way, which one?

Peter Daniel:

One is politicians: we need to phrase this in a way that makes it relevant for somebody in the political system. And the second one is high-level decision makers in the public sector, both on the technical side and not. Those on the technical side would probably react much like some of the comments that you were giving, that this is actually fairly self-evident stuff. But if you're not coming from the tech side, if you're, for example, the head of the social services department at a municipality, or you're head of neonatal care at a hospital or something, I would say that we need to start by raising the awareness that these are actually important issues for you too to think about.

Peter Daniel:

And that's the stage where we're at right now, I would say, with this paper, is that a fair conclusion?

Henrik Göthberg:

It's a position on what we need to look into then.

Peter Daniel:

Exactly. Because if we sort of took this one step further and said, well, we need to consider whether we should be using a distributed architecture or a monolithic one, you would get blank stares from a lot of the politicians.

Henrik Göthberg:

I know, and I deliberately took it down deep now. But what I'm trying to get to is that, even when you're doing it for the highest politician in Sweden, what is the choice? So when you say we need a foundation, it's not a choice.

Peter Daniel:

The choice is not to do anything, which, to some extent, if I'm being a bit mean, is what we're doing now, and we don't have that infrastructure.

Anders Arpteg:

I think it moves to the second position, or rather the third position that you have. That's a good point, which is really about how AI and data sharing can solve some of the challenges we have today. And exactly as you're saying, if we don't do anything, we will not solve these challenges. And I think, as you said before, if we demonstrate what the opportunity cost is of not doing it, they will start to understand that we need to do something, right? Just describing the opportunity cost here is super valuable, and preferably the opportunity cost not only in Swedish crowns or what have you, but also in people not getting the care that they have the right to expect.

Louise Callenberg:

And one of the strategies that we point out is to work with civil society and media to foster a public debate. That's one way to do this. And I think that when I talk to civil society organizations, they often feel sidelined on these kinds of matters, where you could talk about human rights in the use of data and AI, and healthcare for the many or the few, stuff like that.

Peter Daniel:

And, if I might be a little bit provocative here, I think that from time to time the point of view that I would guess the public has differs a bit from the point of view that some civil servants have. For example, if I asked any one of you around the table whether you would prefer to get the best possible pacemaker (hopefully not, but should you ever need one), I would give you the selection of two of these: one which stores some pretty important biometric data about you in the US, and one that doesn't store any data whatsoever. And one of them is more likely to be able to keep you from getting a fatal heart attack. And you have that choice yourself. It's always a trade-off, right? And I think that trade-off is not being put to the general public.

Anders Arpteg:

I think that's a great point, and if you can make that more visual and apparent for leaders with these kinds of positions, I think you've done a great service. It's a really good point, I think.

Peter Daniel:

Mind you, it's not black and white. So I'm not saying that it's always right to do X or Y. I'm just saying it's a complicated issue and for a long time we haven't been treating it like an issue at all. I would say.

Anders Arpteg:

Okay, so how do we talk about AI and data sharing? I mean, I think we can all agree that the need for it is great, and we need to speak about the opportunity cost and the lives that can potentially be lost if we don't do it properly. But how do you do it? One way is, of course, to publish a paper, as you have, but we need to, I guess, in some way talk about how you do it. How do you reach out and talk about these issues?

Louise Callenberg:

Well, that's a million-dollar question, because we have been doing this for three years and it's hard. It's hard to reach out with the stories, but I think it's storytelling that you need to do, and you need to address it in a way that people can follow the argument, so to speak. And I think that if you make civil servants aware of what you can do with AI and get them to use AI in their everyday work life, so they also start to get those kinds of stories of their own, then perhaps they can make the movement. It's empowering the leaders.

Louise Callenberg:

It's empowering the people, or empowering the leaders, and not seeing it as something that a politician needs to do. Sometimes in Sweden we talk about not having enough political leadership on this kind of question, but you can also say we don't have enough teachers talking about it.

Anders Arpteg:

But I think, you know, we mentioned this before: how can we make data and AI a political issue? Because it's not today, and I think it's really rather hard for a politician to even speak about this, because it doesn't really help them in any kind of election. But perhaps, if you do speak about the opportunity cost, about the potential issues if you don't do this properly, that could become a political issue.

Peter Daniel:

That's exactly the line of reasoning that we've been having now for the past years. I won't say that it's worked perfectly, but I think it's been a relevant way of addressing the issue, one of several relevant ways. But I really do think that, in order to make this a political issue, which it needs to be, we need to move away from saying we can save X billion by moving into a hyperscaler architecture or by consolidating

Peter Daniel:

Swedish data. One of the most memorable moments we had with the first paper was when one of our members stood up on stage and said that, you know, she'd been part of trying out using machine learning to identify the women and children most at risk of being killed by their husbands and dads, and that project was canceled due to privacy legislation issues, which may very well have been valid. But still, she stood up and said: well, if you could point out to me where it says that we're going to be placing integrity issues above the child convention, then I'm happy to accept what just happened. So I think that, to me, is policy-making stuff right there, because it affects me as a father and probably pretty much everybody else. And if we can continue having those sorts of discussions, it doesn't have to be about legislation or integrity, it can be about anything. Why aren't we spending more money on building this data foundation, for example?

Louise Callenberg:

We need to explain why, what it does to you in your everyday life, and also to share the stories where data and AI are being used to prevent, perhaps, children in school from not learning. And we have seen this recent discussion on 'skärm till pärm', from screens to paper, whether you use your phone or not in school, and also a fear from politicians that the school system and the children are not learning enough and it's all the computer's fault, basically. But we also need, in that area, to talk about all those kids who are helped by digital tools and all those children who, for the first time, have hope for the future. So that kind of discussion needs to be addressed as well. So where can you use data and devices to help? I think those kinds of stories need to be told. And one of the things that I've been dreaming about is, you know, that television should do a documentary

Louise Callenberg:

Thank you. A documentary about moms and dads who have children in school, who have been helped by technology, and listen to them and their stories, because it's going to move you in the gut. You know, home-sitting children who can be a part of education because they can use a computer and video.

Anders Arpteg:

Interesting. I think, you know, I mean, there's a strong opinion that any kind of digitalization is dangerous in some way, and that using the phone or the computer is bad for people. But it is a two-sided coin in some way.

Louise Callenberg:

Exactly, of course. I don't think that children should have telephones. That's my point of view. I think they should have, like, a phone that you can call on, nothing more. That's my point of view. I have four children and they all have phones. I would just say that.

Henrik Göthberg:

So I'm no better, but you know.

Louise Callenberg:

I think that could be a valid point, but I'm very angry about the discussion in the school system, that we think digital tools are not helping.

Henrik Göthberg:

But what are the low-hanging fruits out of the different political debates we could try to tie into? I mean, like to do storytelling and to move it into a political issue. Yes, so we are strategizing on this. And then what are the low-hanging fruits? You know, school maybe.

Anders Arpteg:

Learning.

Henrik Göthberg:

Where would you summarize, or where would the group summarize, the low-hanging fruits for making this a political debate?

Peter Daniel:

I would say that there are already several examples of political decisions being made, for example within the judicial system, on how to share data to fight organized crime or terrorism.

Henrik Göthberg:

Yeah, here we found traction to do something you do a lot.

Peter Daniel:

Yeah, and you could have a lot of opinions on why. It's always very easy to restrict people's freedom when it comes to terrorism rather than other things, but let's not go down that route. But that was one low-hanging fruit; that is definitely one low-hanging fruit. I would hesitate to call sharing of health data low-hanging fruit as a whole, but there are definitely discrete areas.

Henrik Göthberg:

That's a really strong story. Let's not use the word low-hanging fruit, but the politically most viable path.

Louise Callenberg:

Saving lives is always... Saving lives, yes, and school as well, but that's a tricky one. It's a tricky one, but you know, politicians like that, they like it if you have a conflict.

Henrik Göthberg:

So the school system should be... Media likes conflict, and conflict means politics, means something that people listen to. And in the politics there's a... you need to figure out, you know, where would the media want to put

Louise Callenberg:

some fuel on the fire. I would like, yeah, media to also look into those alternative costs.

Henrik Göthberg:

That would be really interesting, this whole debate. You know, what's his name, who is sort of uncovering the costs of procurement and our inefficiencies?

Peter Daniel:

Jens Nylander.

Henrik Göthberg:

Jens Nylander, I think, is making a very, very good digital political statement right now, and it's being seen right now.

Peter Daniel:

Yeah, and it's slightly uncomfortable for a lot of people, let's go with that one.

Louise Callenberg:

Climate change. Climate change, that could be one, and also... but before we leave that one.

Henrik Göthberg:

I think this is one of those avenues where, politically, it starts to get uncomfortable. And does that mean it's shutting things down, or is it something that opens things up?

Peter Daniel:

I definitely hope not. I mean, Jens is doing a great job in what I, and a number of my colleagues, have been involved in for a number of years, that is, fighting organized crime, because quite a lot of the fraud is actually organized crime. So I certainly hope that it's not being shut down. On the other hand, without taking anything away from that initiative, which I think is great, it is also slightly on the boring side of how you can use analytics. And, as I said, fighting crime or terrorism isn't boring, but it's also a very obvious route that we tend to go down a lot, and a masculine one as well.

Peter Daniel:

Elderly care, for example. Why is the care system tied so badly into the healthcare system, with the different actors there, and why are we struggling so much with sharing data? Why are we right now making judgments on personal integrity versus risk mitigation for elderly people? For example, is it okay to put cameras in people's homes to monitor whether they have fallen and broken a femur? Is that something that you would accept as a private citizen, that we could monitor you?

Henrik Göthberg:

But I have an angle on what we're talking about now, but we need to...

Peter Daniel:

Really, we're just getting going.

Henrik Göthberg:

Sometimes it feels like there is a discussion ongoing, but they are missing the ammunition. We need to give some ammunition to the media, some ammunition to the politicians.

Peter Daniel:

Yes, let's put some fuel on the fire.

Henrik Göthberg:

So it's a little bit like if we can find where they want to debate and then feed the data points into here, connecting the dots with those debates.

Louise Callenberg:

That would be great. I fully agree.

Peter Daniel:

That's what we've been trying to do to some extent yeah.

Anders Arpteg:

I think my time management policing skills are getting frustrated here, but I think it would be awesome if we just move through the six positions, and then we have some ending questions as well. But if we take number four, put citizens and end users first, let's try the devil's advocate, the platitude position, here as well. What happens if we don't put citizens and end users first? Who do you put first then?

Peter Daniel:

The legislation, the regulation, the internal cost efficiency of the public actors. That's an easy one.

Henrik Göthberg:

I think that's the one. I see how you really start from a particular perspective. We're not going to start from the cost perspective. We're not going to start from the legislation. We're just going to start from the citizen, yes. So here it is a choice.

Louise Callenberg:

It's a choice.

Henrik Göthberg:

It's clear. For me, that's clear.

Louise Callenberg:

And also, as you pointed out, when we started talking about the AI strategy. Perhaps that's the wrong way to start, exactly.

Anders Arpteg:

Right, so okay, and I'm trying to keep the time here a bit short, but if we were to try to summarize a bit: what's the problem with putting citizens and end users first? Is it that people are focusing too much on the cost and are too afraid about legislation? What is really the challenge here?

Louise Callenberg:

In some ways it's a way of working. When you talk about involving citizens, you tend to think about it as a hearing. That's not really the point. The point is to use design, design thinking or design solutions, and if you use that kind of method, then you make a design that's usable and put the end users in the front seat, so you can test, then correct, and then test again to make sure that your idea is a good one. But sometimes in the public system we are in the procurement, and we, you know, tend to set up an idea where we really already know what we want, and also those who build the solutions, they want the public sector to tell them what they are going to build, and in none of that discussion are the end users there. So you also need to have a procurement strategy that puts the end users first, and also to have learnability in those processes.

Anders Arpteg:

Perhaps you can bring in some of the private company mentality. I remember the Spotify days. You know, the top metric that you want to optimize is really the end user experience. Exactly. And then if it costs a bit more, that doesn't matter, because if you, in the end, in the long term, really improve the user experience, that's worth more than anything else. And perhaps a similar kind of mentality should be applied here.

Peter Daniel:

And as long as you get a wide enough definition of what the end user experience is. Because you can build a great web page for reporting X, Y or Z, with all sorts of user experience aspects taken into account, or you can realize that we don't actually have to ask the citizen at all, we can find this information elsewhere. And that might be the best user experience of all.

Anders Arpteg:

And I guess, I mean, I think we've spoken about this in other positions as well, but I think what you're trying to say is: really ask what the opportunity cost will be in lives, instead of crowns or dollars. By using those kinds of metrics, we are putting citizens and users first, right?

Louise Callenberg:

Yeah, and also to create that service that enables citizens to make their own choice. Right, that could also be a way forward. That's very concrete.

Anders Arpteg:

Awesome, should we move?

Henrik Göthberg:

to the next.

Anders Arpteg:

Just two left, but: make sure it's scalable. And I think we've spoken about this quite a lot as well, so perhaps we don't need to spend too much time on it. But okay, what happens if you don't make it scalable? What's the problem with not making it scalable? Why not simply build a prototype first?

Louise Callenberg:

Well, prototypes are good. I think pilots are also good. This discussion that's out there, that we're going to kill every pilot, I'm not sure I totally agree with it, because it's one method among others. I think projects also have their place. But the thing is that you need to think scalability first, in the beginning, because if you don't, you tend to, you know, just work in your own department and make something for the end users that you are working with, and I think that's also a culture thing.

Louise Callenberg:

And if you follow the money in the public sector, that's how we are designed, so it's really hard to be scalable. So you need an organization that allows you to scale. And also, to answer the question: if you put a lot of money and effort into one kind of digital service, are the others going to pay for that later? That's also a question that arises. And also, if you have a municipality with a digital service that could be used by another one, how are you going to share the costs, and whose taxpayers are going to pay for it? So scalability is also about how you organize digitalization.

Peter Daniel:

And I would say that might be, in Sweden at least, one of the most important aspects: we are, by design, an extremely silo-oriented public governance system, and finding ways around that is sorely needed.

Henrik Göthberg:

I'm also thinking: oh, this is good, no-regrets positions, but they are not really uncovering the real problems enough, like what you're describing now, which is fundamentally a position on how we should share what the state should be doing versus, you know, different departments versus the municipalities and all that. So this is what I was looking for. On the high level, these are six key areas, agreed, tick in the box. I would love to see, as the position paper, positions on those no-regrets topics.

Peter Daniel:

Well, we are going to continue working in this Nordic network, so be careful what you wish for.

Henrik Göthberg:

It was a rhetorical leading question. Are you going down into positions on these positions?

Louise Callenberg:

In the paper, you have key points under each of those positions that kind of go deeper.

Henrik Göthberg:

They're too soft. You can interpret them left or right?

Louise Callenberg:

I can agree on that.

Henrik Göthberg:

I can interpret that the way I want it to, as a central approach or a distributed approach. Red or blue.

Peter Daniel:

True, you sound exactly like I do most of the time when I'm in my client engagements, so I can't help but agree. Obviously, we would love to keep digging here, but, as you say, we need to start somewhere. But you needed to get them together as a group.

Henrik Göthberg:

So then you need to have a solid foundation. You need to lock the door on that one. We need to have the first closure now; we can't go back from here, at least. Now we can negotiate. Is that the idea?

Louise Callenberg:

That's one of the ideas, yes. Yeah, we had some discussions on the trials, and that's one of the things that we were discussing, where we could go. We had a choice to go harder on that one, or clearer.

Henrik Göthberg:

To me, it's clear as a position.

Louise Callenberg:

Yeah, perhaps, but the thing here is that we are pointing out an area, or all of those areas, that we could need to dig deeper into, and the one on medical trials and the method around them is one of those. If you're going to make something scalable in healthcare, you also need to think about the scientific method you are using and how we are designing the medical trials.

Henrik Göthberg:

Yeah, but you have already relaxed me, because now I see a great salesman in front of me who is doing partial closes.

Anders Arpteg:

Do you see what I mean?

Henrik Göthberg:

So sometimes we need to lock down what we're going to talk about. And now, when we lock that down, let's stick here, and now we can move into the harder questions, because if you open up everything on everything, you're not going to get anything done. So I kind of agree with it as an approach.

Peter Daniel:

I'm not happy with it yet, how far it needs to go. Of course, the other alternative would have been to wait a year with this one and deliver some more details. I think we'll get there. I think we, or somebody else, will get there. It doesn't really matter who; as long as somebody gets there, it's going to be fine.

Anders Arpteg:

Awesome. And the final position: ensure interoperability across the Nordics. It's hard to say why you shouldn't do that, because I think we all agree it's a very easy one that we need to have, and it's certainly not the case today, right?

Peter Daniel:

I think the operative phrase here is 'in the Nordics', because you could easily have said ensure interoperability across Europe, or across Sweden. Yes, and across Sweden is also relevant, I would say. But rather, what we're saying is that the Nordics have a number of common denominators that make it interesting to work with interoperability within that region.

Anders Arpteg:

Didn't you also, in the report, have some kind of number saying that if we actually combine all the countries in the Nordics, we come up to a rather big player in the world? Right, yes, if you count citizens. Right, okay, that's the number. Or GDP even, I think. And also this one, Nordic interoperability.

Henrik Göthberg:

This one, I also think, is a fairly clear position in some ways, because it highlights that it's not just as simple as working in Sweden, and it's not just as simple as following European legislation. It means something that we want to create a critical mass around positions, to be a political force in the European negotiation, so to speak. So I think that this one is an effort that is not the same as what you would do locally in each country. That one I can sort of pick up straight away.

Peter Daniel:

You're talking from a political context. You could also apply it to a technical context. By standardizing across all these countries, we actually become a significant enough client, or possible customer, to a number of different vendors that we can actually get more in; there's a larger chance that we can get some adaptations.

Henrik Göthberg:

I was thinking technically almost first and then standards.

Louise Callenberg:

I was almost expecting that. I was doing that, and I was also doing the values on the positions, how we think about the different legislation coming up, and here you also have the Nordic ethical point of view on AI, trustworthy AI.

Henrik Göthberg:

You can make a statement in that, and there's so much going on with the AI Act, so it can also be a way to become clear in that whole debate, to form an opinion that is more than one at a time. The most important thing on the AI Act?

Louise Callenberg:

Well, that's nothing that we need to, you know, adapt to; just do it, perhaps, in order to put effort into doing rather than talking. So here we have one of the key issues that I think a Nordic interoperability center could change, you know, so that the scenery is not like, how do we

Henrik Göthberg:

interpret? Should we really have five different interpretations? A center for, you know, supporting small businesses with complying with the AI Act, and we're going to have four or five of them?

Louise Callenberg:

I don't think we need that, no, and we don't need more pre-studies or think tanks on that. We need to show it, start doing it.

Louise Callenberg:

Yeah, together, and just putting it in front of us. Last week I was invited to the Nordic Council of Ministers. They had like a think-tank kind of meeting, and AI Sweden was there. As you know, Martin Svensson and Mikael were representing, and they actually put it to the audience that they are going to start that kind of center, or something like that, the Nordic AI Center, right?

Louise Callenberg:

And I think that was very well received, and we were discussing what this kind of center's responsibilities in the Nordics would be. And then we used this position paper that we have, because I was invited because we had put this report together, and I sent it to them and they were so happy about it. Because, of course, the Nordic Council of Ministers, they also have that question: so, what are we going to do? AI is important, but what are

Anders Arpteg:

the key issues? It's not going to be like, oh, we're just going to build a Nordic language model. I mean, that would be the wrong thing. It was good, right? So I hope, you know, with all the kinds of positions and challenges that you mention in the paper, if we can get some progress on that and really start to collaborate on how to find value, I think that would be amazing. So great, but awesome. So you will continue to... I guess this is the question then: what's the next step after this paper? How will you try to continue the work and find value from it?

Peter Daniel:

I think it's going to be, or I know that it's going to be, a two-pronged approach, maybe a three-pronged approach, depending on how you see it. We're going to keep on working with the Swedish network, and the area we want to probe there is likely lifelong learning and scaling. Don't take this as gospel, we still need to finalize the decisions here, but that is likely the area we want to do next in Sweden. And then we want to keep growing the Nordic network, and there we're actually going to start looking a little bit at tangible, measurable benefits. How can we get better at measuring, and how can we get better at actually steering towards these goals?

Louise Callenberg:

And then, Louise, we have your area as well, around leadership, right? Yeah, and we would also like to address the leadership issues, empowering by leadership, leading by empowering, and we are also talking about like a big meeting or something like that, with everything that we have been doing.

Anders Arpteg:

So perhaps next year. If you can also help, you know, find some common understanding of the AI Act and other legislation going forward, and how to really interpret it, and find the sandboxes and whatnot to really empower businesses, it would be really great if we could find a common Nordic point of view on that, rather than doing the whole GDPR thing all over again: learning the lessons from GDPR and doing it right this time. That would be amazing.

Louise Callenberg:

And we are going to continue to put the leaders in the front seat, telling their stories, and create arenas, forums and debates for them.

Henrik Göthberg:

And so, are you meaning, in relation to this position paper, the position paper of six key areas, are we talking about those six positions, and what is the next step to move those six positions into metrics, into goals, into choices? Or is that part of what we discussed now, or are we moving into other projects?

Peter Daniel:

To some extent, yes, we will keep digging into these six positions. I mean, that's our starting point, as you said, that we established. I'm not sure we're going to be digging into all six of them at the same time, no.

Henrik Göthberg:

So which one was the favorite out of the six to dig into?

Peter Daniel:

Scalability. I'm not there yet, I must admit. Well, skilling was another one, yes.

Henrik Göthberg:

How was skilling part of it? Which position was closest to

Peter Daniel:

skilling? Now we're digging into specifics here. That is actually part of the Swedish network, which is... so skilling was not part of the six positions.

Henrik Göthberg:

Yeah, it's part of empowering. Skilling was not part of the six positions.

Peter Daniel:

Yeah, it's the first one. It's part of empowering, yes. So skilling...

Henrik Göthberg:

Then if we go into a concrete topic of skilling, it's under the umbrella of empowering, I would argue. Oh, like, if I look at the position, yeah, okay. I'm thinking we should try to keep it under two hours.

Anders Arpteg:

I'm really sorry for pushing the time here a lot, but we have to train, you know it's only been four years, right?

Peter Daniel:

We've been trying for years. Oh, that's great.

Anders Arpteg:

Should we do the normal kind of philosophy kind of thing?

Henrik Göthberg:

In the end, I would like to go a little bit more into the AI divide, go philosophical. Let's have two more topics then, okay? So, on the fundamentals.

Anders Arpteg:

Okay, you want to start?

Henrik Göthberg:

Yeah, so we have had an ongoing discussion around how we can define the AI divide, from the tech giants and the leading countries down to our country. And then you can go to the micro level and look at the AI divide within organizations, and competences in different roles, and then you have the other end of the scale of the AI divide, which would be the rest of the world that is not the US, not Europe, if you take Africa or anywhere else. So there are some fundamental topics here about inequalities around the corner, or, you know, distribution of power or distribution of wealth. Have you thought about the philosophical, macro perspective or geopolitical

Peter Daniel:

perspective? Personally, yes; as part of a better day, maybe not so much, no. But this is now in the philosophical end.

Henrik Göthberg:

What is our understanding of the AI divide in the philosophical?

Peter Daniel:

end. What is our understanding of the AI divide? For me, at least, the main geopolitical and socioeconomic divide is going to be the aggregation of capital, if we're not really careful here, the disempowerment of the middle class, where you're actually industrializing quite a lot of what has made the middle class, and representational democracy, so successful in the past hundred years. I'm not saying it's going to happen, but I think that is a very real risk.

Henrik Göthberg:

Yeah.

Louise Callenberg:

Interesting.

Henrik Göthberg:

I mean, if you look at the statistics in the US right now, the distribution of wealth has gone the wrong way over the last 20 years, and I'm not sure it's exactly the last 20 years.

Peter Daniel:

I'm not sure that we can blame AI for that, but I think it might be an additional force in that direction. You mean not the AI, but the tech bubble, or the...

Henrik Göthberg:

you know tech and innovation and stuff like that. But it's interesting what happens if you truly have the conversation of the haves and have nots in AI. What will that do to that distribution? Probably polarize it further.

Anders Arpteg:

That's my simple take. But you can also speak about the moat between the tech giants and the rest of the world and the companies, and some people are arguing that it's increasing and some are saying it's decreasing. But if we look at the last year in terms of investment in infrastructure, it's a very, very clear trend. We have a few, like four or five, companies that are investing tens of billions or hundreds of billions of dollars in just infrastructure, to be able to build and to serve these kinds of models. It's going to be a very small set of companies that can do that. No Swedish one is going to do it. I would say no European one is going to be able to compete with these companies, and if they succeed, if they are going to have an AI that empowers that company that much, they will grow in wealth. Potentially, it could be the case that this will cause an even bigger divide, meaning the few super-rich companies are going to grow even richer.

Anders Arpteg:

So, what does that mean? What does?

Henrik Göthberg:

that mean, from a geopolitical standpoint, going in that direction.

Peter Daniel:

It all depends on how many beers we're into this discussion.

Louise Callenberg:

I think, well, of course, that would be very hard for many countries if that's going to happen. If that happens, you probably also need politicians asking the right questions.

Henrik Göthberg:

Yeah, and if you take this conversation out to the extreme and you go to Africa where they sort of kept up thanks to open source and Linux, if you imagine we're all paying for our chatbots or our LLMs and they have no purchasing power at all, there's going to be like two worlds. It ends up in two extremes if we're not careful. It's the dystopia.

Louise Callenberg:

Yeah, but remember this as well: there are people working in those companies and they also have values. So perhaps you should be more interested in what kind of values those companies are driven by, how they foster their company culture, and how they see themselves as a global player.

Henrik Göthberg:

So far, I think they have shown their true faces, quite often in a fairly greedy approach to be honest.

Louise Callenberg:

And then, because that's my point as well, you need to think about what kind of society and global relationships we want. So do we need some politics on that? Do we need to have a discussion on human rights issues around data? Or do we need to have some discussion on whether it is okay to use data and technology in the way that they are doing, so they can build that kind of wealth? Are there other legislations that could be addressed? I don't know. But how do you foster a culture?

Henrik Göthberg:

It's tricky to point fingers at people who are running like hell and doing well and investing, when you should start pointing fingers at people who are not doing anything, in this case, you know. So I think it shines a light on this: if you're not acting, if you're fat and happy and you don't see the world going by, then sitting and screaming from the back seat is kind of not so cool.

Louise Callenberg:

No, I think you should be getting involved instead. I think that politicians and the tech giants need to talk about it.

Henrik Göthberg:

You have to address the issues, and you also have to talk about values. Yeah, but just now, to make this AI divide a little bit more balanced: at the same time, we have ICA, who are literally pushing technology onto farmers who can't afford it, and forcing them into the cold chain in a certain way that I think is very unethical. So we are not doing anything better, from the position we are in, for the ones who are worse off.

Peter Daniel:

But I would also like to fundamentally challenge what you just said about, you know, Africa getting handed the short end of the stick here. Because for the past decades, I would argue that you have seen the poorest countries benefiting quite a lot from digitalization. I mean, I spent quite some time in Southern and Eastern Africa around the dot-com boom, and the classic example obviously is the mobile payment systems in Kenya and Tanzania that really empowered what were then herders.

Henrik Göthberg:

But that was open source. That was open source that built that. That was Linux. Yeah, it was. We had the guys who built that stuff here.

Anders Arpteg:

Moving from the more wired kind of networking straight to mobile, actually leapfrogging them into a future, skipping steps that others had to pass through. So I mean, there are examples of this, where they actually can move faster in some ways.

Henrik Göthberg:

My whole point was to sort of where's the debate here on the?

Anders Arpteg:

AI divide and how do?

Henrik Göthberg:

we see it end to end. And we are staring at the tech giants and what they are doing and what they should not be doing. And then we are thinking about the countries, what they're doing, what they're not doing. And then it's like, well, there's another part of the world that is still on 3G or 2G.

Louise Callenberg:

Yeah, and that should be addressed.

Henrik Göthberg:

So it's an interesting topic to have as part of the debate. That was the angle.

Louise Callenberg:

Anyway, that was a good one, thank you.

Anders Arpteg:

Let's move into an even more philosophical question here. No right answers, just speculation. Yeah, let's see where you go with this. So imagine, hypothetically, that we have a future with AGI. It could be coming next year, in five years, 10 years, 100 years, or never, who knows. But it could be a time where we have AI systems and machines that are, in general, more intelligent than us at any kind of task. It could lead to a more utopian future where we actually have more or less a world of abundance, where goods and services are provided for us and we don't really need to work to the same extent as we do today.

Anders Arpteg:

Nick Bostrom actually came out with a book recently called Deep Utopia, where he speaks about a potential future living in this utopia. Or it could be a more dystopian one, and then there's all the media hype about living in the Matrix and the Terminators of the world, and the machines are going to crush us and kill us all. What's your thinking there? What will happen if we do build these kinds of AI systems that are potentially significantly more intelligent, moving into a superintelligent world? Peter, do you want to start?

Peter Daniel:

What's the question? What's going to happen?

Anders Arpteg:

Do you think we will move more into a dystopian future, where we potentially have a human-extinction kind of situation where machines will kill us all? Or will it be more of a utopian future where we will live even more happily?

Henrik Göthberg:

Look at it as a spectrum from positive to dystopian, and say why.

Peter Daniel:

With my Baltic genes, my genetics from Estonia. Obviously, I'm leaning towards the dystopian side of things here.

Peter Daniel:

But then again, my whole life experience has been that there are always two sides to the coin, right? I would think that one of the things that we're not really discussing all that much yet is going to be the effect on the philosophy of humans, of realizing that, no matter how good I get at chess, I'm never going to be able to beat the machine. I guess we had a number of existential crises with grandmasters in chess back in the 90s, but they play more chess now than they ever did before.

Peter Daniel:

And that is interesting. Are we going to become a planet full of pets and hobbyists, doing stuff the hard way because it's fun? Where do we find our meaning when we know that there's always going to be somebody who's not human that does it better?

Anders Arpteg:

I think that is actually one of the most terrifying thoughts, or at least one of those thought-provoking things. If you take some examples, as Deep Utopia is saying, already today we have people that don't have to work, who perhaps are still not as intelligent or powerful as other people. It can be children, for example, it can be retired people, it can be people who have parents that are super wealthy and they don't have to do anything. So we have examples of people that do not have that kind of work-for-a-meaning-in-life situation, and they still seem rather happy, right? Or do you think we need to work to be happy in some way?

Henrik Göthberg:

I think we do. I mean, like... sorry, please answer, what do you think?

Peter Daniel:

Yes, we need to work. I think we need to do things. I mean, all research points towards the fact that we need to do things for other people in order to be fulfilled. Do we need to do it nine to five, or get paid for it? Probably not, I think, but we need to.

Henrik Göthberg:

We need to do things for other people, I think, to have purpose and to go for achieving goals and all that. And then you can tie that to, you know, we can do voluntary work: I'm a volunteer here and here. Of course it's work. It's not paid per se, in that sense, but it's work. So I think there is a definition of work here that might not be as tied to the monetary.

Peter Daniel:

And, to be a bit rude, I think that the definition you implicitly gave of work is about, what, 200 years old: that somebody else is paying you to do stuff for him or her.

Louise Callenberg:

Yeah.

Peter Daniel:

That's just a couple of hundred years old, that definition of work. Back in the 17th and 18th centuries, most people didn't work for somebody else, they worked for themselves by farming.

Anders Arpteg:

But you worked for a living, so to speak. You worked because you had to to survive in some way.

Peter Daniel:

Yeah, but I mean these things that we take for granted now, like we were talking about what's going to happen to nation states, for example. Well, they haven't been around for all that long, so very likely our concept of work is going to change too.

Louise Callenberg:

It's changed a number of times before in the history of man, but perhaps we will have to do stuff to survive, just not work in the sense that we have right now. But the interesting part is that humans always create. We are curious, and we like to solve problems, and we like to be social. And in the history of humans we have also seen this kind of discussion on dystopia and utopia. That's always there when we talk about the future: are we going to go this way or that way? We don't know.

Henrik Göthberg:

It's not a new concept. No, it's not. It's heaven or hell. It's heaven or hell.

Louise Callenberg:

It's heaven or hell. Even those are invented. It's a question that wasn't on the table.

Anders Arpteg:

What do you think then?

Louise Callenberg:

Well, we work very well together, and one part of that is that I am a positive person.

Anders Arpteg:

You complement each other.

Peter Daniel:

I do the risks, she does the opportunities. Yeah, that's how we roll.

Louise Callenberg:

No, I think, on that note, that humans are curious. I think we are going to create a new normal. So if there are instruments and machines that are more intelligent in some ways than us, we're going to work with that. I think.

Anders Arpteg:

Going to adapt to that in some way? I think so.

Louise Callenberg:

But the interesting part here is that there are some values, or ways that we humans like: democracy, bringing us together, deciding things, having rules that we share or values that we discuss. If we stop doing that, then we are going into dystopia.

Henrik Göthberg:

So I think that's the key issue, so ruling the world, you would say. But these discussions on philosophy and values, will they become less important or more important the further up towards AGI we get?

Louise Callenberg:

Hopefully more, but I think philosophy is always more interesting in times where everything is kind of shaky. We have been in an era for like 100 years where we didn't have to talk about philosophy and moral stuff.

Henrik Göthberg:

But now there are, more than ever. So this conversation, and everything's accelerating, and all these different choices we get... You know, someone said it jokingly, but also: if we have a chief AI officer, we need a chief philosophy officer.

Louise Callenberg:

Exactly.

Anders Arpteg:

To balance the dimensions here. A chief values officer, perhaps, and perhaps empathy, that's a really core human thing. Do I understand you correctly that you're more on the utopian

Louise Callenberg:

side.

Anders Arpteg:

Of course you are, yes, cool Well.

Henrik Göthberg:

I am as well.

Anders Arpteg:

I'm a positive person, I think, as well. What about you, Henrik?

Henrik Göthberg:

I tried to formulate this before. I think we will have AGI, and I think it's up to us and the choices we make along the way which way we'll end up. I don't think it has anything to do with the technology; it has everything to do with how we want to shape it. Looking at how good we are at shaping it right now, I'm not on the positive side right now. Last time I said I'm 50-50. I'm somewhere in the middle on whether this is good or bad, whether it's going to be better or worse. I think we will muddle through. I think we will not really feel it, like evolution. But right now I think we're not managing the shaping so well.

Anders Arpteg:

So that sort of means that I'm a little bit negative. I'm certainly more afraid about humans abusing AI than AI abusing humans.

Louise Callenberg:

So that's my opinion. I agree with this point. That's a good point.

Anders Arpteg:

Awesome. I think, on that note, we should actually try to end it. We still failed, we still did two hours, but it didn't go past two hours. Yeah, it didn't. Thank you so very much, Louise Callenberg and Peter Daniel, for this awesome discussion. Thank you.

Peter Daniel:

Thank you so much, so great. Thank you for having us.
