AIAW Podcast

E125 - AI: Politics and Ethics - Liza-Maria Norlin

April 11, 2024 Hyperight Season 8 Episode 12

In episode 125 of the Artificial Intelligence After Work (AIAW) Podcast, we're thrilled to host Liza-Maria Norlin, Party Secretary of the Christian Democrats in Sweden and author of "The Courage to Lead through Values." The discussion starts with her recent entry into Parliament and the key issues she aims to address. We explore her background, how AI aligns with Christian Democratic values, and the pros and cons of the EU AI Act. Liza-Maria offers her unique perspective on what it means to be a Party Secretary and on the emerging landscape of GovTech Sweden. A highlight of the conversation is how AI supports value-based leadership, drawing on insights from her book. We also tackle pressing topics like AI's role in enhancing public value, ethical considerations in AI development, Sweden's position in global AI governance, and the potential futures of Artificial General Intelligence (AGI). Join us for this enlightening journey into AI, leadership, and the intersection of technology and values.

Follow us on YouTube: https://www.youtube.com/@aiawpodcast

Henrik Göthberg:

So on a normal day like that. So how many votes, how many motions, how many bills, how many things? Is it 10 votes in a day or is it 100?

Liza-Maria Norlin:

No, we have every Wednesday afternoon, so it's almost like half an hour, and then on Thursday afternoons for half an hour we do the votes. Everyone is gathered to do the voting, and this is quite...

Henrik Göthberg:

So if I get it right, as someone who doesn't know: this is not very AI. But you're processing bills, you're discussing and all that, and then you come to a very... almost like, the final step is almost like an administrative table, very mechanical. Okay, now we have 10 topics: vote, vote, vote, vote, vote.

Liza-Maria Norlin:

You get this booklet on the table, you flip the pages and it goes really, really fast, and we have to look around the room to see, you know, if it's a good guess or not.

Henrik Göthberg:

And it's not really learning and discussing then? No, that has been done for years, potentially, or the same day, but then in the end we get to that sort of okay, voting time.

Anders Arpteg:

Cool, it's like a ritual.

Liza-Maria Norlin:

Yes, yes, yes it is.

Anders Arpteg:

And this is because another member went on parental leave.

Liza-Maria Norlin:

Yeah, that's true. He's home with his kid now, so I got this opportunity.

Anders Arpteg:

Awesome. Congratulations, by the way.

Liza-Maria Norlin:

Thank you so much.

Anders Arpteg:

And you will continue that for some time now.

Liza-Maria Norlin:

Yeah, until mid-June.

Anders Arpteg:

Mid-June? Yeah, awesome. And can you just describe briefly what kind of, uh, questions you will be working with?

Liza-Maria Norlin:

Well, I'm, uh, I'm in the social committee. You know, I've always had a passion for elderly care and also issues that have to do with youth and children, so it's a good place to be, the social committee.

Anders Arpteg:

Perfect. Well, with that, a very warm welcome. We are very honored to have you, Liza-Maria.

Liza-Maria Norlin:

Norlin.

Anders Arpteg:

Norlin. You're the Party Secretary of the Christian Democrats, now also a member of parliament and also an author, right? With a book. I'm not sure how to pronounce it in English or translate it to English, but something about, you know, leading with courage and values or something.

Liza-Maria Norlin:

Yeah, "The Courage to Lead through Values." Ah, okay.

Anders Arpteg:

Yeah, I'm looking forward to hear about that, but also like a spokesperson for digitalization and AI questions, right In some way.

Liza-Maria Norlin:

Yeah, in some way. I used to be a process manager for GovTech Sweden, but now in the party they usually ask me questions when we are discussing themes on digitalization.

Anders Arpteg:

You and Erik Slottner.

Liza-Maria Norlin:

Me and Erik Slottner, we're the team. Awesome.

Henrik Göthberg:

We need to talk a little bit about the book at some point in this pod.

Liza-Maria Norlin:

That's not me on the front page. No, no, no.

Henrik Göthberg:

Because I find the subtitle super interesting in so many ways, and I have many angles on this: how we can discuss values, what we mean by that. So I add that to the list.

Anders Arpteg:

Add that to the list. It's already on the list, perfect. But, Liza-Maria, how would you describe yourself? What is your background, your passion? Who is Liza-Maria?

Liza-Maria Norlin:

Well, I'm a very, very curious person, and I don't know if I'm a leader, but if there is no leader in place, I will become the leader. Otherwise I can also just be a follower if that's necessary, but usually I end up seeing the need for a leader. I live in Sundsvall, in Västernorrland, with my family of two teenage girls, very happy about that, of course. As to my education, I'm a language teacher in Swedish and English, and I've been active in politics since I was 18 years old, which is quite a time ago now.

Anders Arpteg:

Two years ago. Two years ago.

Liza-Maria Norlin:

But I'm really a Eurovision fan.

Henrik Göthberg:

I love music. Oh, so we could sing here.

Anders Arpteg:

Do you know the Data Innovation Summit coming up? We have a very famous person joining that you should know about. Oh, oh.

Henrik Göthberg:

Charlotte Perrelli.

Anders Arpteg:

So it's Eurovision all the way through this year. Yeah, awesome.

Liza-Maria Norlin:

Yeah, so I've done so many things throughout life and I think the curiosity part of me is the reason for that.

Anders Arpteg:

Awesome. Should we move to the next topic, perhaps? Or can you just elaborate a bit more? How did you really come into politics? And perhaps the Christian Democrats?

Liza-Maria Norlin:

I was 18, and I had some friends who were engaged in politics, and they were like: we need a chairman for our, you know, youth organization in Sundsvall. So I read through all the programs for the different youth organizations and I was like, I'm KDU. So I said yes, why not? And that's where it started. Yeah, that's where it started. I never thought it would be 25-plus years. You're somehow always into politics, not so much full-time, usually part-time.

Anders Arpteg:

And if you were to reflect on yourself, can you try to explain why you chose to go in this direction instead of moving into industry or academia or something else?

Liza-Maria Norlin:

A big heart for society and development and people and wanting to make a difference.

Anders Arpteg:

Beautiful, beautiful. We are an AI podcast, so we'd love to hear more about your thoughts on that. Perhaps, if we think about AI and Christian Democratic values: can you describe how the Christian Democrats are thinking about AI, the pros and cons, or the connection to the principles that you otherwise have in the party?

Liza-Maria Norlin:

I think this is an excellent question. So I actually brought the principles of my party, which are like our ethics, because this is quite tricky, you know. When it comes to human dignity, personalism, the principles of solidarity, political stewardship, you know, subsidiarity, human fallibility, it's like: what does this have to do with AI? But I think for Christian Democrats, the ethical issues when it comes to AI become extremely interesting, because we are a very value-based party, and ethics, whether it's healthcare or childhood or whatever, is something we're always debating. So when it comes to AI, I would say we can perhaps be a bit reluctant, because we see the ethical issues and the challenges with it, but at the same time we also see the need to keep on developing society. So it's a mix.

Henrik Göthberg:

But it's interesting, because, and I'm testing now: if you have some very fundamental values as a party, then you can take another way into looking at it. How can we use technology or innovation to strengthen these values in society? To some degree, I mean, if you look at some of your core values, they are quite complex, multifaceted, and quite complex to navigate. So you could imagine that at some point, when you go from values, how do I operationalize values, how do I go from a poster to real life, real work, real everyday stuff? Somewhere along the line, it's an interesting thought how we can use the technology to really further that. Because at some point, what do you say, the rubber hits the road: we need to do stuff, we need to do stuff differently, or we need to push these values.

Liza-Maria Norlin:

For example, if I were to explain one important principle of ours: it's about where power should be in society, and we believe that power should be where there is the best knowledge of how to use it. So, for example, family issues should be decided mostly by the family, not by politicians, because they know their environment. Environmental issues are handled very well at the EU level, because then we can help each other among the countries; it's a global issue. When it comes to AI and technology: who should make the decisions? Who should lead this? And then it comes into my data, big data and all these kinds of questions. And I think this is the question: where should the power be?

Anders Arpteg:

What do you think? What's your answer?

Henrik Göthberg:

I can answer it very strongly.

Liza-Maria Norlin:

Yeah, tell me.

Henrik Göthberg:

And I can answer it with analogies and metaphors to large organizations, enterprise, which is sort of my home turf. What we have seen is: you try things out small, in very local communities, in a functional unit in an organization, and you don't have the competences to work with it.

Henrik Göthberg:

So then the pendulum swings over, and you try to do it very, I would use the word technically, or in our industry, monolithic, centralized, big; you try to do it very centralized instead. And this doesn't work either, because all of a sudden you have the AI competence, the data competence, too far away from the real business problem. And I think it's a beautiful analogy here: family, or whatever is your business, right? So the going mindset in many of these things is that we need to strike a balance between what is standardized, the patterns that we all can use, done for the greater good centrally, and then what exactly we should use it for, which is quite distributed. So the world is getting more and more distributed in terms of technology and data and how we use AI in many different use cases. So, whether you like it or not, the technology goes very distributed on this.

Anders Arpteg:

Well, you wish, right? I mean, is it really true in the real world? Today we can see the power of the big tech giants, the hyperscalers of the world, gaining more and more power, and they are potentially even accelerating the divide between normal companies and these hyperscalers. Is that something you've been concerned with?

Liza-Maria Norlin:

Yes. And I think, we are coming closer to an EU election and still not many people are voting in Sweden. We need more people to go and vote, any party, as long as they vote. And I mean, we need the EU to set the ethical framework for these new digital solutions, for AI, because we need an ethical framework somehow, and Sweden alone is too small. When we have the West, and we have the East, and we know their strength when it comes to development of these kinds of issues, I think at the EU level politicians are starting to realize: well, we are lagging behind here. We need to step up.

Henrik Göthberg:

But it's interesting you said "you wish", because the question was what I believe in, and I believe in distributed. Is reality going that way? Is the AI divide growing or closing? The world is right now going in a slightly different way, and there are many different thinkers on this who actually point to this problem, where you have execution and people doing things without accountability for the consequences, for how they're impacting and affecting the world. So what you're talking about is a super, super big topic: how this goes hand in hand, how people can do things with technical power without having accountability for the implications of what they're doing.

Anders Arpteg:

Before we leave the topic about, you know, ai and Christian Democrats. Can you just try to elaborate perhaps a bit more? You know? Is there anything specific about the Christian Democrats and their view on AI compared to other Swedish parties?

Liza-Maria Norlin:

I would say, I think there is quite a big interest in these questions at the leadership level, because of the ethical perspective. Many are focused on, you know, how can this be used to make the public sector more effective, for example? So that's one perspective. And I think, of course, our engagement for elderly care, and for using different kinds of tools, which have developed a lot in recent years, is another way of recognizing a Christian Democrat in these kinds of issues.

Henrik Göthberg:

But that's a very interesting one, because your values push you to learn potentially more about the topic, which actually puts you in a very interesting position in terms of leading the voice on how we should think about these topics. We started this pod on the premise that we need to demystify AI to close the AI divide. In general, anything that closes the AI divide, anything that gets attention on AI outside the industry, is a very, very good thing. So maybe this is a good position, because this topic is only going to get bigger, and being competent around these topics as a party is a competitive advantage, even in this political landscape.

Liza-Maria Norlin:

I think one difficulty when it comes to politics and AI, and previously we talked a lot about digitalization and so on, is that there's a lot of insecurity in politics. What are the political issues in this? What should politics actually do when it comes to this? This is very difficult; this is for the private sector, or for the people who have the knowledge about this. And they haven't really seen the importance of leadership in this. You don't have to understand all the details, but you have to understand the transformation we are going through very soon.

Anders Arpteg:

Okay, I'm trying to summarize the Christian Democratic view of AI in some way: for one, of course, using AI for the interests that you have in the party, and I guess the social dimensions of it are something that you're extra interested in. If I try to push you a bit on how positive or negative you are toward technological advancements, do you see anything specific for the Christian Democrats there?

Liza-Maria Norlin:

Yeah, I think there is a reluctance due to the fear of where this is heading, and that's again the ethical issues. Also when it comes to integrity, subsidiarity, inclusion, these kinds of questions: of course, a lot of fears regarding this development. But at the same time, how can we take responsibility for the next generations, which also is very close to our values? We have to do something about this and use it in a good way to make things better.

Henrik Göthberg:

There is an interesting, or challenging, stance you kind of need to take. And I think it boils down to: do we think this can be slowed down, or even stopped? Or do we think this is a natural force, the genie is out of the bottle, so we need to steer it? It's a fundamental difference: do we need to steer it versus do we need to stop it?

Anders Arpteg:

Yeah, that's an interesting one. Perhaps it could be a good segue into another topic: we need to have responsible use of AI, if I interpret you correctly, and I'm sure we all believe in that in some way, and there is a big new AI regulation coming up in the EU. What are your thoughts about the AI Act being enacted now?

Liza-Maria Norlin:

Well, in general, I think it's very positive. This needs to be done at the EU level. The thing that we in our party have been trying to guard is that it does not stop innovation. There needs to be a room where you can innovate without the regulations being too harsh, but as soon as you put this onto the market, or whatever, the regulations need to be there. So I think that's the question we have been dwelling on, and it has been important to us: we need to keep innovation going but at the same time have a common framework that protects, again, the values.

Henrik Göthberg:

And what are your ideas, or the party's ideas, on how to make that happen?

Liza-Maria Norlin:

Oh, I don't know. I don't know exactly, I would say, but I know Erik Slottner. When he has been representing Sweden in this issue, this has been one of his main principles in the negotiations.

Henrik Göthberg:

Yeah, I want to test an idea on this topic, because I think the challenge is how to go from strategy to execution, or how to go from regulation and policy to something that is worthwhile and not stopping innovation.

Henrik Göthberg:

And one way of looking at it is that if you have a pie chart, the actual policy piece ("oh, we have done the legislation, we're done now") is less than 10% of the work. And I propose that you could probably make it a competitive advantage for Sweden: let's make our country the smoothest and most efficient country to be in for any investor, any startup, any company that wants to do AI. What I mean by that is, basically, we don't want lawyers and consultants to make money on guiding through something that is very complicated. We want to make it very streamlined to follow the policy; we want to make doing the right thing, doing policy right, the easy choice. So that then brings up: okay, what is that all about, to make it super smooth, to be clear about what we need to follow? What do you think about that kind of vision?

Liza-Maria Norlin:

I think it's a good vision, but this is the big struggle: how do we get there? And at the same time, I mean, when I was working with GovTech, perhaps this is another topic later on, the question was really: how do we make the public and private sector work together? How can we use the innovative startups in collaboration, instead of just using the bigger private actors for the different services in the public sector? There are so many fears in the system, because we don't know how to do the collaboration, how to do this innovation. But perhaps there could be some help from the regulations and so on to make this seamless. I don't know.

Henrik Göthberg:

But I think we asked the core question before: what is the politicians' or the state's responsibility or role to play here? And if I then argue: well, you want us to follow regulation, so you need to put heavy investment into making following regulations smooth. How do you digitize that process, how do you digitize that work, so you can do a lot of things which, in my opinion, become no-regrets? We are simply smooth in working with this, because we have invested in really not stopping at the first piece of the pie, but going the whole way, the full circle, and we invested in that and therefore we are great at that. So I think, to go from high level to execution, that's a lot of work. Someone needs to pick that bill up, and that requires, I think, political will. And to do that could be interesting.

Anders Arpteg:

Yeah, so just to close the topic about the AI Act a bit. I guess you can say there are pros and cons with this, or risks and opportunities, so to speak, with a regulation like this. Are you positive or negative in terms of whether it will actually be able to foster innovation, or do you think it will be a hindrance as it is formulated right now?

Liza-Maria Norlin:

I think it's positive, in general, because we need something to hold on to. Yes, and that's a start, and then we can learn and adjust. But I think it's a good beginning.

Anders Arpteg:

Yeah, and I guess it boils down to how we really implement it. It's one thing to have the legislation, and then we have to basically make the rules in Sweden as well, connected to this. So that will be something I guess you'll be working with in the coming time. Should we move to GovTech? You brought that up, so perhaps we should.

Henrik Göthberg:

But before we move to GovTech, I want to learn a little bit more about what it means to be a party secretary, if we can explain that role in non-political terms, for someone with no political background like myself.

Liza-Maria Norlin:

I have one boss, and my boss is the party leader, Ebba Busch, so that's simple. And then we have an organization with employed people. We are sitting in the parliament together, so I'm running an office with a lot of colleagues, but we also have almost 25,000 members around the country, and most people in politics are not employed; they are there in their free time. So it's very interesting to lead an organization like that. On an everyday basis it's very, very busy, and the days never really end. Phone calls, a lot of phone calls. You know, I was so surprised, because before I got this job I hardly ever spoke on the phone. I was doing Teams or Zoom or whatever, digital meetings; SMS texting is my thing. Suddenly I spend hours on the phone every day.

Anders Arpteg:

This is very, very interesting. So calls from different members of the party? Yeah, from everyone in the party, colleagues or members.

Henrik Göthberg:

But is it fair to say that being party secretary, especially when your party leader is also a minister, means that you're kind of running the daily operations of an organization, but an organization with a lot of, what is the English word, voluntary workers, so to speak, compared to paid workers? I don't know if the English word is correct. So it's the daily operations of the party.

Liza-Maria Norlin:

Yeah, it's the daily operations of the party and, of course, a very close dialogue with the party leader on a daily basis. So we work together. You know, we are aligned. That's very important, of course.

Henrik Göthberg:

And just the last bit here, to understand it. If I think about this like any normal organization, private or public sector: so you have a party, there's like a management team, and then there's communication with the press, and you have the websites and the campaigning material, and then the organization, how they organize, and activities, education.

Liza-Maria Norlin:

And then the political development team, which supports the members of parliament but also develops new kinds of political issues for the party.

Henrik Göthberg:

Writing stuff.

Liza-Maria Norlin:

Writing stuff, looking into stuff and being innovative.

Henrik Göthberg:

And coming up with where the party stands on these topics.

Anders Arpteg:

I guess. Do you find any use for ChatGPT in your work?

Liza-Maria Norlin:

Yes, I have. I did my Christmas card that I sent out to people with it. So I do, but I think the manager for communication, he's even better at this.

Anders Arpteg:

So yeah, and for our live viewers, potentially, if you were to give some tips on how to use ChatGPT: do you have any personal ways of using ChatGPT?

Liza-Maria Norlin:

Just, you know, whatever you're going to do, think about: could this be used here? Is there value here? Could it just help you to...

Henrik Göthberg:

get a new idea, brainstorm or whatever.

Liza-Maria Norlin:

See it as a colleague. Usually everyone is so busy these days, but somehow ChatGPT, or your AI friend, is not that busy. It's always there.

Henrik Göthberg:

I like that simple advice: see it as a colleague, someone you can brainstorm with or hone your idea of what you want to write, or something like this. Are you surprised by the performance?

Anders Arpteg:

I mean, I still am, even though I've done research on this for 25 years. I'm still sometimes surprised when I use it and just ask it some, you know, simple question and see how well it formulates the answers.

Liza-Maria Norlin:

Yeah, yeah. He, or she, or whatever it is, it was not very good at formulating my summer speech; I was better myself. But I mean, I'm blown away. And you know, I'm also trying to figure out in my brain: how does this work? I want to see this, I want to picture this, I want to touch it, you know.

Henrik Göthberg:

So it's a challenge. One thing I've done lately, and it's been partially fun, but also partially a learning curve for how it works, is to use, um, ChatGPT to develop arguments. So basically, in consulting, where I work with data and AI readiness and change management, I've used ChatGPT so it becomes like a dialogue. So I haven't started with "oh, I need to do an essay now". Instead: I have this idea that this book, written by this person who thinks like this, links to this and this and these topics, and I have a thesis about this. What do you think? "Interesting, Henrik." Yeah, yeah.

Liza-Maria Norlin:

You know: "interesting what you're saying". Very polite, always. Very, very polite.

Henrik Göthberg:

Yes, a very, very nice colleague who flatters me: oh, you're so smart. But then, in the end, it basically supports me in finding the logical links and arguments. Maybe I have a sense that two different concepts link together: what does that link look like? How would you argue this link? Or, is there anyone else who thinks like this, anyone else who has written something similar on this? You know, and it's interesting, it works. It really works.

Anders Arpteg:

And thinking about, you know, how you have to spend a lot of time making calls: Klarna made a big bet on ChatGPT and made their customer support much more productive by making this big bet. Do you foresee potentially having an AI chatbot answering calls at the Christian Democrats?

Liza-Maria Norlin:

That'd be good, especially when there's a crisis, you know. I could just use it and go to sleep: can you solve this crisis for me? You know, that's also something: people who have my role, whatever the party, they know we do the stuff that no one really wants to do, and we have to handle really tough situations as well.

Anders Arpteg:

So you could train your own ChatGPT. You know, you can build your custom one. You just upload the data to it, and it will answer in the way that you do.

Liza-Maria Norlin:

Good, good, good.

Henrik Göthberg:

So a custom ChatGPT. Or you could just ask ChatGPT: how should we handle this crisis? Give me a three-step process.

Liza-Maria Norlin:

A three-step process here, and I would use it. That would actually be quite nice.

Anders Arpteg:

Goran has trained his own one, and you know, you can do it as well. You don't need to program anything.

Goran Cvetanovski:

It's, um, it's extremely simple. So, I'm not like these guys.

Goran Cvetanovski:

These guys know everything. But they have made it extremely, extremely simple to use. Of course, you also need to know its limitations, because, as I understand it, it works basically like a human would: it will only answer to the capabilities of the knowledge that it has, right? So if it doesn't have much knowledge, it will just basically give you what it thinks is right with the specific knowledge it has.

Liza-Maria Norlin:

I mean, this is so interesting, because, you know, there is so much knowledge in an organization. We have so many Excel files, and we gather all this, blah blah blah, and all the programs we had for different conferences. What if we could actually use this to help us think and be innovative?

Henrik Göthberg:

I mean, I see so much possibility here. This is truly where it's going. We have different techniques for uploading the knowledge, and the token space, how much you can upload, grows and grows right now. So now you can upload a lot of pages, a lot of stuff, that basically frames the model's thinking in relation to exactly this. So this is completely doable today.

Anders Arpteg:

I mean, this is Copilot. I'm not sure if you use Microsoft products or not, but they already have a lot of support for this. If you use the latest Office with the Copilot support, it can basically do that today.

Henrik Göthberg:

So it could be something to try out. So, instead of starting with the prompting, you start with uploading 50 documents.
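The idea Henrik describes here, uploading documents so they frame the model's answers, is often called context stuffing or grounding. A minimal sketch of the mechanics, assuming no particular vendor: the documents, titles, and budget below are all illustrative, and a real setup would pass the resulting text to a chat API (such as ChatGPT or Copilot) rather than just printing it.

```python
# Minimal sketch of "context stuffing": prepending organisational
# documents to a prompt so the model's answers are framed by them.
# Document names and the character budget are illustrative only.

def build_grounded_prompt(documents, question, max_chars=12000):
    """Join (title, body) pairs into one context block, truncated to a
    rough character budget standing in for the model's token limit,
    then wrap it with an instruction and the user's question."""
    context = "\n\n".join(
        f"--- {title} ---\n{body}" for title, body in documents
    )[:max_chars]
    return (
        "Answer using only the documents below.\n\n"
        f"{context}\n\n"
        f"Question: {question}"
    )

# Hypothetical party documents; in practice these would be the
# policies, conference programs, and files mentioned in the episode.
docs = [
    ("Governance policy", "Spending above 10 kSEK needs board approval."),
    ("Conference 2023 notes", "Members asked for more digital training."),
]
prompt = build_grounded_prompt(docs, "What did members ask for in 2023?")
print(prompt)
```

The design point matches the conversation: instead of starting from a blank prompt, the documents go in first, and the growing context windows Henrik mentions are what make larger `max_chars` budgets feasible.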

Liza-Maria Norlin:

But do you know of leaders of organizations, it could be private organizations, who are very good at using this to help them become better leaders for their organizations? Do you have good examples of this?

Henrik Göthberg:

I have one example from Tetra Pak, and I wouldn't say it's a leadership example, but I think it could be applied. And here we have a very simple topic: Tetra Pak builds huge manufacturing plants that put out milk or whatever, and of course they are made up of thousands of small modules, and those modules are made up of smaller modules that break. So there's a vast number... when you want to find the problem and then find the right spare part, that's something you call customer service about, and you talk and try to identify the problem and then the spare part. All this information, they have already done a pilot on uploading for their main service lines, and basically you get fairly good results from this.

Henrik Göthberg:

So if I take that into leadership, the core question becomes: what should you upload into the leadership document? I mean, if I go to Vattenfall, where I used to work, you would upload your corporate governance, your policies, and some material that basically frames the answers to these questions: these are the fundamental governance policies; if I follow the governance policy rules, what are my options then?

Anders Arpteg:

Quite easy, you should just upload your book, then but before we move into this topic, before we move into that book, it would be fun to just upload your book then. Yeah, upload your book. But before we move into this topic, you can do that Before we move into that book. It would be fun to just hear your thoughts about it's actually that simple. Yes, Lead by this. We mentioned GovTech Sweden.

Liza-Maria Norlin:

Can you just briefly at least describe what it was and how you got connected to it? Well, I used to work at Brun Innovation, which is a digital innovation hub in in western orland and we have been for many, many years working with, because we have a lot of government working with digitalization and it used to be it and also in the in the private sector. So they started brun to to collaborate and work together and I started working there and they asked me to find, you know, first of all, some projects related to using data to make good in society. And we realized that what are the what's hindering us from this to actually happen? And we realized it has to do a lot with the sort of soft infrastructure, the way we work, then the knowledge issues, et cetera, et cetera, and I mean we had good meeting places for the ones working in private and public sector, but it didn't really happen. You know, this true collaboration didn't really happen.

Liza-Maria Norlin:

So GovTech was a word that we found, and we tried to analyze what it is, and we realized it belonged to the more international arena, where you could take startups and their ideas and connect them with government development, and this is actually quite rare in Sweden. So we wanted to join people in Sweden who wanted to work with us, but there wasn't such a community. So we started GovTech Sweden as a community and ecosystem, and we got a lot of great relations with organizations abroad, and we said, let's do this. This is who I am, you know. If it's not there, well, let's build it. And it's up and running today.

Henrik Göthbrg:

So it's really nice. And what was it? Was it a conference?

Liza-Maria Norlin:

We started off with a conference, GovTech Day, and the thought behind it was, you know, we gathered people and asked: do we need this kind of network? That was the main topic for the first conference, do we need this? And I gathered, I think it was a group of five: one from a national authority, one from a startup, one from an SME, you know, a small business, one from a municipality. I put them around the table and said: let's create the vision, let's create the values, what is needed here. And we created sort of the framework for GovTech Sweden. Well, that's where the journey started, and today we are the Swedish part of a big European project on this topic, which is really, really nice, and I hope it's going to be successful.

Anders Arpteg:

So conferences, can it be seen like an accelerator, trying to help startups get connected?

Liza-Maria Norlin:

Yeah, I mean, we had an accelerator next door to Brun, so we could just put people there, and we made sure that accelerator had a program for GovTech. There was also one here in the Stockholm area. But now we're doing pilots in this new project on the EU level, so we collaborate and learn between different countries in the EU. How can you actually make a startup work with, say, Örnsköldsvik municipality in Västernorrland on, for example, energy issues, and develop something? And how can other countries also use this technology if they want to, at the next level? Because, when it comes to beliefs, I believe the best things happen when you develop them together with the people who are going to work with them.

Liza-Maria Norlin:

You cannot have a finished solution and just put it there.

Henrik Göthbrg:

This collaboration makes the perfect designs. And that question, that belief, is so deep, because it goes into AI, data and everything. It's about co-creation between the tech skills and whatever business problem you're having. In fact, this is, in my opinion, one of the biggest problems: when we are not co-creating, we say collaboration, but what we're doing is ping-ponging. It's different. Do you see what I mean? So you could summarize that with GovTech, you're driving an arena for open innovation, where the open innovation happens by cross-pollinating or co-creating between different competencies, in this case maybe the public sector's domain problems and needs, and different other types of actors who can come with a diverse set of ideas to cross-pollinate.

Anders Arpteg:

Henrik, you were part of this, right? Or connected to it very early, I mean?

Henrik Göthbrg:

I think we met I can't remember so like 2016?

Henrik Göthbrg:

Something, no, no, no, 18, somewhere there, when you were at Brun Innovation and you were on the journey to basically start looking at this.

Henrik Göthbrg:

This was before GovTech, and I think it started with looking at how do we build a community in Sundsvall.

Henrik Göthbrg:

And, if I can reiterate on this, I think the observation that you made, or that we could make, is that there is a growing competence around data and AI, very specifically, very niched, and it has some sort of center quite close to the large cities like Stockholm and the employers there, like Spotify or whatever. How do you create this climate and this environment in Sundsvall? On the one hand, the public sector there is quite large, with the public "verk", I don't know what you call it in English, the big public sector agencies or departments. And on the other hand, you have an IT community, and maybe the IT community there has not been exposed to the data and AI topic in the same way. So how do you start building both the consultant community and the buyer or user community, so to speak? And that was quite early, when this was sort of: how do you build education around this? How do you educate? Yeah, that's how I remember it.

Liza-Maria Norlin:

And I think we talked a lot about middle management when it comes to who we need to educate. Yes, I remember, and that's what we spotted: if we're going to educate someone, it should be at the middle management level. Because, as you're saying, we have a lot of consultants in Sundsvall. How can they also be producing products or services in the long run that we could sell and export to other countries?

Anders Arpteg:

Please. And yeah, I was planning to close the topic a bit, but go ahead.

Liza-Maria Norlin:

The vision for GovTech Sweden, the way we formulated it, was to co-create to accelerate public value. So public value was a keyword, and there's a book, I think from the 80s or 90s, talking about this, which is very interesting. So how do we co-create to accelerate public value in the digital transformation?

Henrik Göthbrg:

And I think this is one of the strong things. When we met the first time, it was this: the core value here is co-creation. It's actually not collaboration, no, it is co-creation.

Liza-Maria Norlin:

Exactly, not collaboration, it is co-creation.

Henrik Göthbrg:

That's different, and it's different, right? And, you know, why is that different? Because I think this is at the core of where people get it wrong. They think they are doing something well, but they are not really co-creating. They are listening to each other, they are collaborating, but they are really ping-ponging. So what is the difference?

Anders Arpteg:

Between collaboration and co-creation.

Liza-Maria Norlin:

I think you need much more trust, transparency and openness in co-creation than in collaboration, and you also have to be ready for someone else to perhaps be the one who gains more value from the collaboration. I mean, somehow we do this for something bigger, to make it as good as possible, rather than collaborating because this would be good for you and this would be good for me. There's somehow a bigger cause, and you actually develop together.

Henrik Göthbrg:

Yeah, and if I can put it in my lingo now, we are at the heart of what Dairdags is doing and advocating for. In a normal company, in the old analog organization, you have the business side, the people doing business operations who are not data or AI savvy, and then you have the data and AI people in another department. And then: now we need to do a project, now we need to work, and all of a sudden you have collaboration. It's when these guys meet those guys once a month, once a week, and they talk and they do requirements, and then they ping-pong the requirements between each other. Whilst co-creation is when you lock them in and say: you are the same team now, you have the same mission, and we have a cross-disciplinary problem to solve together. So for me, this is the difference between departments working well and collaborating, talking to each other but working to the left and working to the right, versus there being one work.

Liza-Maria Norlin:

Yeah, this is my way of defining it too. Like the GovTech hack before, in some way, we had different teams working together and creating things.

Henrik Göthbrg:

That is, you know, a very small version of it, but there you co-create together, and then there is nothing else. It's not the we and you, it's not your competence and my competence, it's our team, and this team now.

Liza-Maria Norlin:

And I believe this will be win-win in the end for everyone. Yeah.

Anders Arpteg:

Sounds awesome. Should we move to the book? Yes, because, you know, leadership is a topic that is very dear to my heart as well. I think it's a very important question that you can never learn enough about. So please, Liza-Maria, can you speak about how and why you started writing this book?

Liza-Maria Norlin:

Well, I was at a point in my life, I think this was 2017 or something, where I was a bit frustrated and I wanted to learn more. I was like, should I start studying again? You know, go to university and learn. And I felt like, nah, not really. And I felt leadership is such a challenge. There was a big debate about schools: is it the principal's fault that the schools in Sweden are not as good as they should be? And a teacher got burned out at work, and I was like, what is good leadership really? What is this? And a belief I had, I don't really know where it comes from, but I was quite sure that there is something about values and leadership together that's really, really important.

Liza-Maria Norlin:

I quit my job and decided to write a book, because I had listened to the principal of a school in Sundsvall which, according to a lot of statistics, is one of the best schools in Sweden, and Sundsvall has quite bad schools when it comes to statistics. So I found this interesting, and I realized when I listened to him at a seminar that he's working with values. This is the thing, you know. So I went up to him and said, hi, can I write a book about your journey as a principal? He'd been there for eight years and he started the school. And I said, because I think you use values. He's like, no way. No, no, this is not the kind of leadership I represent.

Liza-Maria Norlin:

I'm like, okay, but can I write the book anyway? So that's where it started. And during this journey, which is the crazy thing about journeys, I had to read research. I didn't find good enough research, or books, in Sweden. But I found one page from the 90s in the US, and it connected me to a book, and one of the authors was Simon L. Dolan. I read this book and I was like, wow, he has put my thoughts into a book. So I wanted to use a lot of his research and his methods in my book, and I said, maybe I have to ask him, I cannot just steal this.

Liza-Maria Norlin:

So I tried to find his address. Somehow I found him, and we had a Skype meeting, and since then we are very good friends. He's a wonderful professor; he's been working with values and research for 50 years now.

Anders Arpteg:

A professor at what university?

Liza-Maria Norlin:

Now he's in Barcelona, but he's been in the US and in Canada. He originally comes from Israel, so he's been around the world.

Henrik Göthbrg:

But the title here is The Courage to Lead Through Values, and the subtitle is How Management by Values Supports Transformational Leadership, Culture and Success. So there's a keyword here, which is values, and you keep circling back to it. So could we dissect: what do we mean by values in this context?

Liza-Maria Norlin:

Well, values are so fundamental for humanity. They are somehow the rules of the game, the rules of life. That could be on the personal level, of course, but also in an organization. You could be aware of them or not, but they do exist. And there is a very good picture in the book which shows a sequence that helps us understand what values are.

Liza-Maria Norlin:

Because before values, you have beliefs, and the beliefs that you have as a human are put into you very, very early, probably in your mom's tummy, and you cannot change a person's beliefs; it's very, very difficult. Then, early in life, you create your values. If you do something against the values that you have, you will feel it yourself. Values are not external things, they're in you, and it's the same with organizations. So in the sequence, after beliefs you have values, and then you have norms. Norms are created from the different values in a society, and you hold the norms together. If you break the norms, people will react to what you're doing.

Henrik Göthbrg:

You create friction, you create friction.

Liza-Maria Norlin:

You create friction. And then you have attitudes, you have behaviors, and then you have results from all this. And the whole point is, as a leader, you can totally focus on the results that you want: I'm going to lead because I want to be the biggest party in Sweden, or the sales numbers will be this by the end of the month. Or you can start from the other direction, working on beliefs and values, and that will take you to great results. That's my belief.

Anders Arpteg:

I like this. I remember when I went to Spotify and heard Daniel Ek speak about the way he tried to lead the company. One value, at least, that he tried to lead by was of course not commercial success, but really the user experience in some way, finding the true value for users, but also for creators. So I think it at least partly connects to this: if you're guided by some kind of core value, it will drive the results in the right direction. But more generally, what's the advantage of being led by values compared to results?

Liza-Maria Norlin:

I mean, some people are very goal-driven, of course. I've been a teacher; you can pay your students and they will get a good grade. But I don't know if you've read the book Drive, which is also a very good book. If you want change in the short run, you can, as I told you, tell your kids: you get a hundred bucks if you do well on the test.

Liza-Maria Norlin:

That will work now, but it will not create inner motivation in the long run. So the question is, how do we create this inner motivation? That's what values do for you, and values will help you create the perfect team in that sense too, because you will connect people who are aligned with those values. There are so many things to say about this, but it is a huge difference. It doesn't mean you should not have goals, but they need to be aligned with your values, and then they will be very strong goals to work with as well. So, management by values, as you said, is a contrast to the beginning of the 20th century, when you had management by instructions: as a leader, you just give instructions and the workers do what you tell them. Then we have management by objectives, which is still quite common today. But in this complex world, where things are going faster and faster, the only way to run things is by having values as a base.

Henrik Göthbrg:

But there are so many layers to this, also to make it very practical. Now I go back to the research and what we've been working on at Dairdags, where we basically ask: why do we fail, or why don't we move faster, in becoming data and AI driven? Then we can understand the symptoms of why it's not working and try to fix those, or we can understand the roots of those symptoms, the root of the problem. And when you draw that line all the way from beliefs to behavior, the problem I find is that if you're working too far down that curve, you're really only working in symptom space, not in root-problem space. And my hypothesis is that succeeding with AI, succeeding with technology innovation in a good, ethical way, has something to do with this.

Henrik Göthbrg:

There's something in our norms or values that has shifted. And the way I look at this, we use the word heuristics. Have you heard the word heuristics? Everybody lives by rules of thumb in order to make sense of the world. You form norms. So there are norms formed in a company.

Henrik Göthbrg:

This is the way we do things, and there are some very, very deeply rooted norms in how we organize. For example, we have our functional division of labor, and we have done that for many, many years. All of a sudden, that norm is an anti-pattern. It doesn't work like that anymore; we need co-creation, we need to work cross-disciplinary. So now we need to break a norm and say that the new norm is cross-functional. The old norm stems from a different value: the value drove a norm that drove us into division of labor, into something that made us successful in the 1900s. To fix that problem now, you need to go all the way back and understand that the norms and the values have shifted. And the tricky part is that we're not talking about values from a fluffy or godlike point of view anymore. It's like Daniel Ek's question: what are the core values and principles?

Anders Arpteg:

I believe that if a business is driven too much by numbers, in terms of quarterly results, then you may optimize for the short term, which can be a suboptimal way of driving it in the long term. Values are more stable in some way, so if you drive by values, it will potentially be more long-term. Fully correct.

Liza-Maria Norlin:

And I usually say that values are your identity, as an organization or personally, but they're also your map when you have to make decisions: you should go back and think, what is guiding me here? And they could also be your anchor when things get hard. This can sound very soft, you know: I believe in ethics, I believe in good, blah blah blah. Equality, whatever.

Henrik Göthbrg:

Equality. But I believe we need to think about these topics on a very, very daily, operational basis. So I take another very pragmatic core value: I believe in cross-functional teams, I believe in diversity, not because of a higher goal, but because I want to co-create between different perspectives. So for me that becomes a value, and it sits very deep down on an operational level. If that's our first principle, everything we do is cross-functional in thinking, and that then permeates how we organize, what our processes look like, et cetera. So for me, the important thing, not to misunderstand this, is to take values into something which is sometimes very, very tangible. And that takes courage.

Liza-Maria Norlin:

That's why the title talks about courage. Because usually, when I say we need to add someone more to the team, that I need to ask one more person, people say to me: it will take more time, Liza.

Goran Cveetanovski:

Yeah.

Liza-Maria Norlin:

You know, don't ask one more, just do it. But I'm like, then this will not be as good as if I get another perspective.

Henrik Göthbrg:

So if you now want cross-functional teams, you're breaking the norm. No, no, no: that's not your budget, that's not your mandate, the data guy is over there, don't you worry about this, you should only worry about requirements. And when you say, I don't want to work like that, you become friction, you become a problem.

Liza-Maria Norlin:

And we're not able to fix it.

Henrik Göthbrg:

So I think this is so profound, but it should not be misunderstood as high-level fluff. Really true, very true. It's about how we can make this very operational.

Liza-Maria Norlin:

And I think that's why I used a case. I have the research and the methodology and everything, but I also have a principal who has been doing this in practice, in everyday work, in an organization that has been very, very successful. And I think that's the key to why it works: the founder of this school also had the beliefs and the values.

Anders Arpteg:

Yes? Oh, please. Do you have some idea for that? Or is it simply that, if we want to improve or have a value, AI can have a big potential? How should you think about AI connected to the leadership thinking in the book?

Liza-Maria Norlin:

Well, then again, you know, why am I as a leader trying to use AI in my organization, when we gather a lot of data and ideas from all our 25,000 members and so on? Why am I interested in this? Well, I guess curiosity is a personal value of mine, but it's also that we want to make a difference for humans.

Liza-Maria Norlin:

I mean, that's why we're in politics. And how can we make sure that we get value out of all the ideas out there? There are so many ideas; people have so many ideas; I cannot call everyone to get them. So how can I use technology to gather ideas, and also to analyze all of them, so we can have even better political suggestions on the table? So from the beginning there is a value in why to use this. And for some people at least, it could motivate them more if I explain this step by talking values, rather than: if we do this, I get 20,000 more ideas on the table. That could be nice to hear. But what do you think about AI, leadership and values?

Anders Arpteg:

I'm thinking AI will be such a transformation for so many companies, and so many companies have a big transformational change that they have to make. If they want to make a big transformation, they need very strong leadership, and with the proper foundations for leadership, like the book is suggesting, you have the potential to succeed in taking advantage of a new technology like AI.

Henrik Göthbrg:

That's my thinking. Okay, I'll add something. I think you can make it even more to the point. Data and AI are clearly moving super fast, innovation keeps coming, and the technology is out there for you to use.

Henrik Göthbrg:

We have a value system that we have had for 20, 30, 100 years, which has been about efficiency, doing what you already know more and more efficiently, and we talk about economies of scale. So the core value, the core dogma, is: how do you do economies of scale? We've been indoctrinated into this core. This is what business and organization is all about.

Henrik Göthbrg:

And now, all of a sudden, we say: actually, that is not the right value to strive for. The right value is to strive for adaptability, flexibility, economies of learning. We are not here to cut costs, we're not here to increase efficiency in the old way; we are here to reinvent our processes constantly. So your core value has now shifted from cost-cutting and efficiency obsession to adaptability obsession. And that is what you now need to instill in the whole enterprise: there's only one game, and it's adaptability. No, no, it's cost-cutting, it's efficiency? No, that was the last 100 years. So for me, the question becomes: what are the core values, what are the core norms, that fit with VUCA? We now need to go down into our belief systems and unlearn stuff that fit a much, much slower-moving economy.

Liza-Maria Norlin:

Have you listened to Katarina Gidlund from Mid Sweden University? She does a lot of research on digitalization, and she usually talks about exactly what you're saying, Henrik: the values of today are from the Industrial Revolution, and they fit the Industrial Revolution. There is a new era now. What are the new values in this system that will make us successful?

Henrik Göthbrg:

I can make an Elon Musk comment on this. We always joke that he's impressive in many ways, a crazy guy, but still impressive. He uses the words first principles. It's not the same as values, but what he's trying to understand is: how do I get down to the very fundamental mechanics of making this work? For him, we need to question everything until we get to physical laws, more or less, and in understanding that, reinvent stuff. So there is a belief system, a value system, that is very closely connected to going to first principles. And what we're talking about here, those norms, those values, is a change of first principles in how you organize business or how you organize the government sector.

Anders Arpteg:

Yeah, I think that's how it really is. And simply, to phrase it very simply: every leader or boss of a company wants to say they have to use AI somehow. And unfortunately, I think, so many companies think: okay, how can I make money, how can I reduce the costs I have in my company, by using AI? If that is your core driver for using AI, you may not succeed as well as you could if you instead thought: I have these values in my company, how can I use AI to strengthen them?

Liza-Maria Norlin:

Right, this is beautiful. You mentioned the private sector, but I think it's so much the same in the public sector. Because, for example, in elderly care or the hospital system, we will not have enough workers tomorrow, and so: how can we use technology? Because we need to. We have this big challenge, or we don't have enough money, so we need technology. That driving force, I think, creates a lot of stress: will I get value for every coin invested? Instead of thinking: how do we build a really great healthcare system using the technology? Well said, I mean.

Anders Arpteg:

I love the concrete examples as well. Unfortunately, you hear people say: I can reduce the number of employees I have in elderly care, or something. But instead, think: how can I improve the value for the patients that we do have? And argue from that; that is the better set of principles, so to speak, to reason from.

Liza-Maria Norlin:

Yeah, and I think that will be effective in the long run, and you will probably solve a lot of the problems along the way. But it's the mindset you have initially that I think is important for whether you're going to succeed or not.

Henrik Göthbrg:

I think there's another angle on how values relate to AI. Because if you start thinking about AI, doing AI for good, and now we even start talking about intelligent, autonomous artificial agents, then the core topic becomes: what are the norms and principles, what is the AI's utility function, when we are programming this? You cannot just waffle through the values part, the fundamental part, and get to the coding. You need to be quite sharp on how this will fit and deliver on the values we want. So I would argue a value-led approach becomes super important when we start building artificial agents, because if you have the wrong values or the wrong utility function, that is what the system will pursue.

Liza-Maria Norlin:

Is it too late?

Henrik Göthbrg:

No, no, it's not too late. But the conversation on how you do this needs to move away from a very fluffy discussion on ethical stuff; you need to be able to bring it from ethics down to a real, concrete level.

Anders Arpteg:

And perhaps this is a good segue to speaking about public values, which I know is a dear question of yours as well. Oh, right, right.

Goran Cveetanovski:

So yeah, go for it, corian. It's time for AI News brought to you by AIW Podcast.

Anders Arpteg:

Yes, so we have a section in the middle of the podcast, and we will get back to the question we started speaking about, AI for public values. But before that we have a small break, not always that small, but we try to make it small, where we bring up some news items, something that happened in recent weeks. Each one of us, if we want to, can speak about some news article. Do you have something, Liza-Maria, that you would like to speak about, or should one of us take it?

Liza-Maria Norlin:

No, but I noticed, I think it was on the news last night, this issue about AI: is it a threat to the workers, and will it have to be your teacher next year, because AI will be there instead? So I think it's a good thing that it's getting onto the normal news, so to speak, some AI news every now and then.

Anders Arpteg:

Can you just elaborate: what was on the news? Was it about teachers getting replaced, or something?

Liza-Maria Norlin:

No, it was more in general: the future workers, will AI replace them? So this kind of issue was the topic. I think they will come back to it more later as well, but it's interesting, I think, when AI topics make the headline news.

Anders Arpteg:

And it is, right. It's getting more and more coverage in the general news.

Liza-Maria Norlin:

Yeah, like general news.

Anders Arpteg:

And what's your thought about replacing people? For me at least, my big bet and belief is really that AI will augment existing workers, and simply, we will have more stuff being produced: more and better services, more products. There is not really a lack of need for products and services, especially in areas like elderly care; there is so much more you would like to do, but we don't have the people or the efficiency for it. So in my view, it's kind of strange to think that it will replace people. It will simply make us more productive. What do you think about that?

Liza-Maria Norlin:

Yeah. I mean, if you look back in history, we've been through developments many times before, and it's not like we are unemployed today just because we are not all in agriculture anymore, or in industry. So I think it will solve itself.

Anders Arpteg:

But AI co-workers are a good thing. Perhaps we won't have to work 40 hours a week at some point in the future. That could be a nice future. Henrik, do you want to take something?

Henrik Göthbrg:

Or should I? I have one which I saw flashing through the media. There was news coming out of MIT, it's been circulating a little bit, and the headline is "A faster, better way to prevent an AI chatbot from giving toxic responses". I just found it quite interesting, you know, how different techniques are evolving around this. It's quite small news, but I found it interesting also because today we have AI in elections and all that kind of stuff, so it's not only about toxic responses; there are so many angles here. Should I take it? Yeah, I can take it quickly.

Henrik Göthbrg:

So this is about how we handle safety issues and what we can do to avoid toxic responses. The background is a quite technical view of the techniques used when building these systems. What they are highlighting is a couple of different techniques or features. One is what they call automated red teaming: working with a large language model so that it can find the responses that are not the right ones. I don't know the details well enough, I realize this.

Henrik Göthbrg:

I should highlight another key topic here. When you're building the model and the training data, you have reward systems saying: this is good, this is bad, in order for the model to learn. And they are also highlighting that the approach rewards curiosity. What I understand by that is that it rewards the red-team model for not going narrow-minded after only the toxic prompts it has already found. I don't know, but I just wanted to highlight it; I haven't gone deep enough into it to really dissect it. Any thoughts on it?
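For readers curious what a curiosity-style reward could look like in code, here is a minimal toy sketch, not the MIT method itself: the red-team model is rewarded both for eliciting toxic responses and for trying prompts unlike those it has already found. The toxicity score and the embedding vectors are hypothetical inputs.

```python
# Toy sketch of a curiosity-style reward for automated red teaming.
# Idea: reward the red-team model for eliciting toxicity AND for novelty,
# so it doesn't collapse onto one narrow family of attack prompts.
import math

def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def curiosity_reward(prompt_vec, toxicity_score, seen_vecs, novelty_weight=0.5):
    """Combine a toxicity score with a novelty bonus.

    novelty = distance to the closest previously tried prompt embedding,
    so repeating a known attack earns no bonus.
    """
    if not seen_vecs:
        novelty = 1.0
    else:
        novelty = min(cosine_distance(prompt_vec, v) for v in seen_vecs)
    return toxicity_score + novelty_weight * novelty

# A repeated prompt (identical embedding) earns less than a novel one.
seen = [[1.0, 0.0]]
repeat = curiosity_reward([1.0, 0.0], 0.8, seen)
novel = curiosity_reward([0.0, 1.0], 0.8, seen)
print(repeat < novel)  # True
```

The novelty term is what the hosts describe as "rewarding curiosity": an attack the model has already found contributes nothing beyond its toxicity score.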

Anders Arpteg:

I mean, I think we need to do a lot of research into making sure we can have guardrails and safe models out there. So any way we can improve the process to validate that and make models safer is super valuable research. I heard something about Meta: if you counted the number of AI research articles they publish, basically half were on safety aspects and the other half on functional aspects. So it seems like this is a very important topic, and it's great to see that so many companies are thinking about the safety aspects of AI these days.

Henrik Göthbrg:

Yeah.

Anders Arpteg:

We can leave that. Yeah, I can take one about one of my least favorite companies, Apple. Do you have an Apple or iPhone?

Liza-Maria Norlin:

Apple computer. You're an Apple fan.

Anders Arpteg:

I'm an Apple fan too.

Henrik Göthbrg:

I'm an Apple fan.

Anders Arpteg:

I'm not, but I have an iPhone anyway, I can tell you. Anyway, Apple has actually not been at the forefront in terms of AI, and they have not been building a big language model; they are starting to. But now they actually released a research paper, and a lot of media got it completely wrong. They said Apple now has a model that is beating GPT-4, ChatGPT, et cetera. It is not; it's built for a very specific use case, which I think is interesting.

Anders Arpteg:

So the paper was called ReALM: Reference Resolution As Language Modeling. It basically means that, given a piece of text or what's happening on the screen of an iPhone, they want to work out what each reference is referring to. So if you just say "he" or "she" or something, what is that reference? And also to see what the connections are between entities on the screen, in a piece of text, or in background activities happening on the phone. It seems they want an AI model small enough to actually put on the phone, so you don't have to call the cloud to get an answer. So it's a really small one: just a billion parameters, instead of the trillion, or 1.5 trillion as some claim, that GPT-4 has.
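To make the idea concrete, here is a toy sketch of what "reference resolution as language modeling" means: on-screen entities are flattened into numbered text so that a small on-device language model can be asked which entity a phrase like "that number" refers to. The entity list, the type tags, and the rule-based stand-in resolver below are all hypothetical; in the actual approach, a fine-tuned language model makes the decision.

```python
# Toy illustration of the reference-resolution-as-language-modeling idea:
# flatten on-screen entities into numbered text, so a small on-device LM
# can be asked "which entity does the user mean?"

def encode_screen(entities):
    """Turn screen entities into the numbered textual context an LM would see."""
    return "\n".join(f"{i+1}. [{e['type']}] {e['text']}"
                     for i, e in enumerate(entities))

def resolve_reference(query, entities):
    """Stand-in resolver: map a referring expression to entity numbers by type.
    In the real system this decision is made by the fine-tuned language model."""
    type_hints = {"number": "phone_number", "address": "address"}
    for word, etype in type_hints.items():
        if word in query.lower():
            return [i + 1 for i, e in enumerate(entities)
                    if e["type"] == etype]
    return []

screen = [
    {"type": "business", "text": "Pharmacy on Main St"},
    {"type": "phone_number", "text": "+1 555 0100"},
]
print(encode_screen(screen))
print(resolve_reference("call that number", screen))  # [2]
```

The point of the textual encoding is that reference resolution, normally a bespoke pipeline, becomes an ordinary language-modeling task small enough to run on the phone.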

Anders Arpteg:

So they really want to be able to put it on the next iPhone, and what they also seem to want is to automate tasks on it. So you can basically tell the iPhone something like: please write an email and check my calendar for this and that, and it automates the task for you. It sounds like RPA, yes, but the model understands what's happening on the phone, and I haven't seen that much work in this sector before. It's nice that Apple is trying to solve this very specific problem, doing reference resolution on the phone and on the screen and building a model for it. That will actually be really useful, I think, and I'm sure Google and others will do the same. But I'm glad to see that, for one, Apple is releasing a paper publicly about it, and that they are starting to catch up a bit.

Henrik Göthbrg:

Yeah, but I think that is the underlying news here: Apple released a paper. Because, you know, I love my Apple products, but think about the way they have been running a very closed shop all this time: not contributing to the open source community but leeching, using the stuff without contributing themselves. From a values perspective...

Henrik Göthbrg:

A co-creation perspective, for the greater good. If you want to vote with your wallet, don't buy Apple, because it doesn't really stand for those values. And here now, interestingly enough, we see papers from Apple, which we haven't seen many of before.

Liza-Maria Norlin:

Why are they doing this?

Henrik Göthbrg:

Are they shifting in mindset or are they, you know? Do we know? That's a good question, I can guess, but I don't know.

Anders Arpteg:

Well, if they want to have talent working at the company: talent sometimes wants to publish, because publishing is a way, at least for researchers, to build a career. So if they don't allow that to happen, they won't have that talent working there.

Liza-Maria Norlin:

But this is so interesting because so many times the workforce is pushing the value system for the companies.

Goran Cveetanovski:

Yes, very well said. Companies do not have value, but people do.

Anders Arpteg:

Do you think so? What is a company?

Goran Cveetanovski:

It's an organization of people, so the people make the value right.

Anders Arpteg:

But Google's "don't do harm" kind of value, which they changed, by the way: still, a company can have one.

Goran Cveetanovski:

Okay, so you have forced values. Like: you need to be carbon neutral; so, yes, we're going to be carbon neutral. That is a forced value, but value actually comes from the people. I mean, you have been hiring people and building teams for many years. Do they have your values, or do they have their own values, or do you build values collectively?

Anders Arpteg:

But very often, though, in an organization try to define the organizational values, saying these are the core values of our company.

Goran Cveetanovski:

Yeah, and it's important work, I think, for any kind of leader. But when you start a company, you don't have values. You just go: let's do a product.

Henrik Göthbrg:

Tell me a startup that actually has values. But there are two different things here, because it's one thing to put values on a poster and have those as corporate values, and I think what Goran is trying to highlight is what it takes to actually live the values. As Hofstede said, culture is nothing more than the collective programming of the mind. So culture, the real values, is collective, not what's on the poster. You can have stated corporate values. But the beautiful thing is when the corporate values trickle down into how we behave.

Anders Arpteg:

I think you can define it on an organizational level just as well. Yes.

Henrik Göthbrg:

But you can define them, sure. Is "defined" the same as true, though? Just saying: these are the values.

Anders Arpteg:

You can, as a person, define values as well, but is it really the way you are?

Henrik Göthbrg:

It's exactly the same.

Liza-Maria Norlin:

Why are you, as a startup, starting to make a new product? I mean, there must be values behind that initiative. The value could be to make money; of course that could be the driving force. And I think these kinds of values are different for...

Henrik Göthbrg:

But I liked what you did when you took it down to the individual or personal level. So I can have my self-image, the values I project onto myself (even if I'm a psychopath, I'm always the hero, right?). Then you have the behavioral values, what you're actually doing. And lastly, the perceived values from the environment. So I think there is a truth here: even on an individual or corporate level, you can have values that you state, but are you living those values?

Anders Arpteg:

As a mother, you can state certain values that you want your children to have.

Goran Cveetanovski:

Okay, so now we're defining different kinds of values: commercial values, cultural values, organizational values.

Henrik Göthbrg:

I mean, the difference between stated and real values, for me, is ultimately: are you walking the talk, so to speak? One of my stated values is to exercise a lot. Yeah, that's my value. And I haven't fucking gone to the gym for a month.

Liza-Maria Norlin:

But we can also, on a societal level, have common societal values. And today we see conflicts.

Anders Arpteg:

Yeah, on a country level, and on a European level.

Liza-Maria Norlin:

It's based on values. Those values come from politicians and they are people.

Goran Cveetanovski:

But they represent people but it's the same.

Henrik Göthbrg:

It's like going from policy, or written text, to living by it, acting on it, behaving accordingly. And it's exactly the same on an individual level. It's a very philosophical theme. Is this news, by the way? This is not news. Good, let's go back to Apple.

Goran Cveetanovski:

I think that, to give my ChatGPT answer with my limited, what is it called, episodic knowledge: they need to step up their game, they have lost ground. They have started making deals now with Google, et cetera, to implement Gemini, and they are way, way behind. And this is actually from 14 hours ago, so Anders, you should confirm this, because I couldn't confirm it on any other sites. It seems Nick Clegg from Meta came to a conference and said: hey, we are just weeks away from publishing our new Llama. Then, one hour after, you have Google basically releasing Gemini Pro 1.5, which is now their latest model. A bit later, OpenAI released their own frontier-level model, the final version of GPT-4 Turbo. Both of them are multimodal, right. And then you have Mistral, one hour after, saying the same. All within 24 hours.

Anders Arpteg:

The race is on.

Goran Cveetanovski:

Yes, the race is on, and they have understood that right now you cannot wait. You need to be first on the market, because that actually makes a difference. It's ridiculous; we're talking hours now. There is quite a lot of debate, and I had great lunches this week speaking about different things. I've got something in my...

Anders Arpteg:

This is going to be bad but in any case, moving to the next thing.

Goran Cveetanovski:

This was very interesting. So, we have been talking about artificial general intelligence, and Yann LeCun, like always, like a critic, says there is no artificial general intelligence, because humans do not have general intelligence. So the only thing we can hope for is basically human-level AI, and then we can hope for more. But then he explains it a bit better, because he says a four-year-old has actually taken in 50 times more information than the world's biggest LLM. And this is because of the interaction a human has with the environment, and this is something we have discussed before as well, because it doesn't make you human just because you have a whole library in your head, right? You also need to know how to turn that knowledge into intelligence. So he's saying, and this is probably the sell part, because every single person that goes out on stage sells something: if we want to get to human-level intelligence or beyond, we need alternative architectures for how we build the models. And then, of course, he has a solution for that, which he calls objective-driven AI. So that was the news.

Goran Cveetanovski:

And just one last thing, which was very interesting from all these discussions I had this week. First, there is a report, I believe it was from McKinsey, but maybe I'm wrong about this, saying that only two thirds, or even fewer, of the leaders today running publicly traded companies, some of the biggest, are ready to lead in the new digital economy with AI as a power, because, first of all, they don't understand the technology; they are not literate in it. So we need a different type of leadership, or at least the old leaders need to understand that the paradigm has shifted quite a lot. You started talking about middle management, and Henrik and I have had a lot of discussions about this and actually aligned on many items. But I truly believe that right now, Sweden, like many countries but Sweden especially, has an enormous number of people who are capable of leading these companies to be more data-driven and AI-driven and to lead the transformation forward. But they are not the leaders.

Goran Cveetanovski:

They are in middle management or lower, all over, right? Not in terms of expertise, but in terms of calling the shots on how things are done. And with all these conferences we have been doing, when I go on stage I usually say: you are the future leaders of your companies, of how we build society, and of how my children will be using this technology in the future, because these are the people who are actually building it, so they understand how it is going to work. So we need a different type of leadership. These are very uncertain times and things are changing quite a lot. We need to change a lot of things from a societal point of view. We need to change laws. I mean, I was speaking with a friend of mine who moved to Sweden five years ago.

Goran Cveetanovski:

He moved from California. He's an extremely good software engineer and data engineer. Right, we should not mention him, but that is fine. So he's been in the country for five years, and still he hasn't even gotten residency. He stepped into a loophole, and now he's thinking: this is never going to work. So he's going back to the US for some time now, right?

Henrik Göthbrg:

And now we're talking like top level data engineering in Sweden.

Goran Cveetanovski:

No, but he's thinking so. It's not that he's leaving, et cetera, but he's getting a little bit fed up that the system doesn't work. And I believe that AI is about people and if we want to win the race, we need to change the narrative. We need to change basically the politics of society and everything else and we need to start being attractive to get these people in, because right now we are exporting talent. We are not importing talent.

Goran Cveetanovski:

In the 70s, 80s and 90s we were importing talent, because we had a different school system. Everybody was basically able to get educated here, and in return they worked five years at Scania or Ericsson or wherever, or they built a company. Right now we are not doing that. Canada is doing it, France is doing it, Silicon Valley for sure is doing it, China is doing it, Dubai is doing it. But we are exporting talent. We have the best universities, we are not doing anything about it, and we are not even getting paid for exporting talent. So we are teaching them and then basically losing them.

Anders Arpteg:

We have to make them come here, do the education and then get the family so they can't leave. Yes, exactly, that's the lock-in.

Goran Cveetanovski:

All right, this was not news, this was a rant. Yeah, it was a rant. An AI rant, like usual.

Anders Arpteg:

Okay, thank you. So let's get back to what we started speaking about. I know a big, important question for you, especially now with the parliament work you're in, is public value, or the social good we can create. And if we think about AI, do you see any specific areas, like elderly care or something else, where you believe AI can create especially good value?

Liza-Maria Norlin:

I mean, if we look at the big challenges of today: the EU is really onto the twin transition when it comes to digital and green, et cetera, so I feel that politics is already there. But one of our big issues in Sweden, and internationally, is the health of young people. We don't really know why their psychological health is declining at quite a rapid speed. We are still not helping enough of the dropouts from school. We know what the alarm signs are.

Liza-Maria Norlin:

I think there are about 10 factors or something that you can look at: if you see these in a child's youth, you can be quite sure he or she will leave school too early. And we have the social system, we have the school system, sometimes the police are involved, whatever; they cannot share data as they should.

Liza-Maria Norlin:

I know the politics are onto this right now. But my vision, and I think I have a picture with me, is a project that I ran, and this is politics also, but I did it in Broon, because sometimes you use other places: data and digitalization for young people. It is about what kind of data we can collect about young people to prevent them from becoming dropouts, or from mental issues or whatever, on a big societal level.

Liza-Maria Norlin:

So we can see: okay, this is an alarm signal; we can see already that 10-year-olds are having these problems, or whatever, but also get very close to the individual child. And then, of course, that kind of data would go to the social and healthcare team, or whoever in the school, et cetera. But as soon as you start talking about this, people are very reluctant and think you cannot use data about children, because they are so valuable to us, and the integrity and everything. Yeah, but they are so valuable; how come we're not using all the tools we have to make sure they will have great opportunities in the future? We are missing out because we don't have the data. Teachers are raising alarms, but no one is listening, because it's not data.

Anders Arpteg:

It's a teacher saying this kid is not feeling well. Okay, and I love the ambition to use the possibilities of technology, data and AI to actually find early signs, so you can proactively prevent a potential bad future. And it's the same with healthcare and Donny Ficks: the data points are there, but they are siloed and not shared.

Henrik Göthbrg:

And what happens is that when something goes wrong, afterwards you lay the puzzle, and all the signs were there, all the opportunities to act were there. But today we only lay the puzzle afterwards. From a technical, data, or algorithmic point of view, the goal is to create a signal system that allows preventive and proactive engagement.
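A minimal sketch of the kind of proactive signal system described here, assuming hypothetical factor names and thresholds: warning indicators that today sit in separate silos (school attendance, teacher concern forms, grades) are combined into one early flag, instead of laying the puzzle after something has gone wrong.

```python
# Toy early-warning sketch: combine indicators from different silos into
# one proactive flag. Factor names and thresholds are hypothetical, not
# the roughly 10 real factors mentioned in the conversation.

def risk_signal(indicators, threshold=2):
    """Count which warning factors are active and flag when too many align."""
    rules = {
        "high_absence": lambda d: d.get("absence_rate", 0) > 0.15,
        "teacher_concern": lambda d: d.get("concern_forms", 0) >= 1,
        "grade_drop": lambda d: d.get("grade_change", 0) < -1.0,
    }
    active = [name for name, rule in rules.items() if rule(indicators)]
    return {"flag": len(active) >= threshold, "active_factors": active}

# One signal alone does not flag; several aligned signals do.
print(risk_signal({"absence_rate": 0.20}))
print(risk_signal({"absence_rate": 0.20, "concern_forms": 2}))
```

The design point matches the discussion: no single data source is alarming on its own, which is exactly why siloed systems only see the pattern in hindsight.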

Anders Arpteg:

this is what you're talking about, and then, of course, do that in a safe way, but still being able to take advantage of data from different parts.

Liza-Maria Norlin:

But we hardly look into these things, because we find them too scary, and I think we're missing out. So one thing I feel strongly about: the big challenges we have in society, this is where we should begin.

Henrik Göthbrg:

But I find this example extremely good, because here we have the Christian Democratic political values we started with, here we have the core concerns about AI ethics, and now we have a very real situation where, clearly, so far we've been more scared of the technology. Our mindset has been that we need to guard ourselves from the AI, rather than guarding ourselves from maybe a bigger problem. And I think this is where the ethical dilemmas and the ethical navigation come in: at what point do we balance this? Do you agree? The ethics here, it's not that simple to say let's not do it. If you say don't do it, then at what price?

Liza-Maria Norlin:

And still, the fear sometimes. Because I looked into where we should start collecting data. One source was absence from school. This is already collected by the schools, usually in digital systems as well, so we're already gathering this kind of information. Another kind of information, which is less digitalized today, is when you have worries about a child and fill out a form about it. A lot of schools still use paper and put it in a folder in a room. How come that is considered so safe? And the thing is, someone loses this. Every life is a life; that's also very important to me. I mean, this is one kid. How can we miss out on using the information we have to help? But you know, this is me personally, this is not my party. I think we should look at the challenges that we have, use the technology for them, and use co-creation with people who want to team up. In this project that I was running, I had...

Henrik Göthbrg:

But this is of course not a technical problem. This is purely policy.

Liza-Maria Norlin:

This is policy, but also technical, in the sense of: how do we connect different sources of data locked in different kinds of systems?

Henrik Göthbrg:

Once again, this is not a technical problem.

Anders Arpteg:

There are some technical aspects to it. It's not without technical issues, but I think the main thing, as you say, is more from a legal point of view. I mean, today we're not allowed to.

Henrik Göthbrg:

I don't agree. I think this example of connecting the data really clearly boils down to the 10-20-70 rule that we have used; BCG used it, I've stolen it. It's 10% about the algorithm, 20% about the technicalities around data and having the right data, and 70% about change management, people and process. So this becomes a process problem, a data-sharing legality problem, a policy problem, an organization problem: we don't work together, we don't have this competence. So if you take all the competence questions aside, if you take all of this away, then yes, there is still a technical issue left that you have to fix as well.

Anders Arpteg:

So I agree that the main part is really from a policy and organizational point of view, but it's not without technical issues.

Henrik Göthbrg:

No, it's not. It's about, I give you that, talent and competence, having the right resources that can do it; it's about the processes and organization; and it's about the legalities. Because some of the stuff you want to do now, as you said, this piece of paper is not digitized yet, so you can't get that data in. So you actually need to start collecting the data in the correct way first.

Liza-Maria Norlin:

But one thing I really envisioned about this: when I looked into the data we have around young people, on the national authorities' web pages you can find things, but the data is usually five years old, perhaps eight years old. I'm like: how are we going to help the kids of today? I mean, real time. That's the cool thing with digital: it's real time. We could have a real-time database in Sweden gathering this kind of information and use AI to help us analyze it, without getting into integrity and personal-data issues: where do we need to put our efforts? Is it the school system, or the kindergarten, or the parents?

Henrik Göthbrg:

A question here, and a very, very simple, very direct answer: with all these questions about the ethics, the mandates, the legalities and all that, the only way to start is to do a small experiment and start working on it. As long as you're only talking about it, you have not started.

Liza-Maria Norlin:

That's why we wanted to do this.

Henrik Göthbrg:

So start with the experiment, because then, one after another, the real problems will pop up, and you solve them as you go along.

Anders Arpteg:

This is the only way of doing it.

Liza-Maria Norlin:

That's the only way.

Anders Arpteg:

But you need to have the right preconditions to even start this. So I guess, some people may not know that there are legal restrictions on how to share data between organizations and authorities.

Liza-Maria Norlin:

You need to have the best in the room to do this. But there are movements trying to relax this, right? Yes, and regulators are looking into it. However, it's more about how people can communicate amongst these organizations. I'm not sure they're going to use the data, and actually use AI, to improve the work; it's more that, you know, the teacher can talk with a social services person.

Henrik Göthbrg:

Even here, from a technique perspective, we now have fancy words like federated machine learning. How can we share data without sharing data? How can we have privacy-preserving models? It's doable.
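A toy sketch of the federated idea Henrik mentions, under the simplifying assumption that each local "model update" is just a mean: each organization trains on its own data and shares only model parameters with a coordinator, while the raw records never leave the silo.

```python
# Toy sketch of federated learning: each silo (school, social services)
# computes an update on its PRIVATE data; the coordinator combines updates
# without ever seeing the underlying records. Real systems (e.g. federated
# averaging) exchange neural-network weight updates instead of plain means.

def local_update(records):
    """Each silo computes a model update (here: just a mean) on private data."""
    return sum(records) / len(records), len(records)

def federated_average(updates):
    """Combine updates weighted by local data size; raw records stay local."""
    total = sum(n for _, n in updates)
    return sum(mean * n for mean, n in updates) / total

school = [0.10, 0.20, 0.30]   # hypothetical absence rates held by the school
social = [0.40, 0.50]         # hypothetical risk scores held by social services
updates = [local_update(school), local_update(social)]
print(round(federated_average(updates), 2))  # 0.3
```

This is the sense in which you can "share data without sharing data": what crosses the organizational boundary is an aggregate, not a child's record.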

Anders Arpteg:

I think there is huge value currently locked in different silos, and if we could unlock it, the potential would be super good for a super important problem in our society today. So I wish you the very best with this, and that you can actually get some movement.

Liza-Maria Norlin:

Movement in small steps forward.

Henrik Göthbrg:

But Anders, you used a very important word: preconditions. Creating the arena, or whatever you want to call it. And maybe, connecting it back to the question of what the role of politicians is, what stance we should take: maybe it's not about solving the issue, but about creating the preconditions, creating the arena, creating the atmosphere, creating the legalities. That is the precondition.

Anders Arpteg:

That's why we need politicians like Liza-Maria.

Anders Arpteg:

And you need to co-create with the politicians so we make the right decisions. Time is also flying by here, so I was thinking about going even more philosophical, potentially, so we can speculate without having any truth connected to it. If we flip the question a bit: we can certainly see there is value locked into data, value we can extract, but doing so can also have ethical implications. What's your thinking about potential ethical concerns, especially with AI using large amounts of data in different ways? You presented a very balanced view, that you also need to think about the other side of the coin. So, from the other side of the coin: do you have any concerns about using AI and large amounts of data?

Liza-Maria Norlin:

When I thought about this, I remembered I was in Singapore, not last year but the year before, related to GovTech. I hadn't been there before, and we had a long program. So when I wanted to take a night walk, I asked: can I walk alone at like one o'clock at night by the seafront? Is it okay? Will it be safe as a lone woman? Oh yes, the police will be there within five minutes if something happens to you. And I realized: all the cameras and everything.

Liza-Maria Norlin:

And we have, on the EU level, discussions about AI in cameras for big sport events, et cetera, to prevent terror from happening. I mean, we're moving towards a control society, and AI can help us be even better at controlling, et cetera. So these are difficult issues. But I think this is not only about AI or new technology, this control part of legislation; it's in every area. Sometimes you make it sound like it only has to do with the new technique and those kinds of possibilities, but that's not really the truth. These are ethical issues in everything we are doing in society right now, due to threats, due to the war in Europe, and due to developments in China as well.

Anders Arpteg:

So I'm trying to.

Liza-Maria Norlin:

That was a fluffy answer.

Henrik Göthbrg:

We basically need to have a balanced view on this, yeah. And I think the most important reflection you made is that sometimes we go in and ask: what are the AI-ethical considerations? And as you pointed out, this is really a broad societal ethical consideration, where AI is one technique amongst many. The core question is big: balancing the internal workings of the country against the geopolitical topics, even with war; control of your citizens versus control and spying geopolitically.

Anders Arpteg:

So I guess we can lay out a spectrum here. At one extreme you have the China social-scoring kind of system, where you use all the data to basically hinder or prevent some rights for humans, and I don't think any one of us wants to go there, of course. But then you can think about the complete opposite as well, where you have no cameras anywhere and you can't even have a safe subway system or whatnot, and that's not good either.

Liza-Maria Norlin:

So there is a balance you need to find, and I guess that threshold, where we want to be, is the hard question, right? And that's where you need the ethics, and you need the continuous conversations about this, and you need to have politicians and professions with different perspectives in the same room, so we're actually not building things we will not be happy about in the future. But also, of course, the ethics will be applied differently depending on the challenges in a society as well.

Henrik Göthbrg:

But let me test another one, linking it back to this topic.

Henrik Göthbrg:

Now we're going to collect data on youngsters, or whatever, and link it back to what we always say: don't regulate the technique, regulate the application, regulate what you're doing with it. And I would argue that here, maybe, is where we need to look differently at the regulation. It's not the collection of data for a very specific purpose that is the problem; that is clearly good. It's when we collect that data and then apply it and use it for other, integrity-breaching purposes. So maybe the regulation needs to be oriented much more towards the application, and then you need safety measures so that the data doesn't leak and is not misused. I mean, like GDPR, very specific: you're allowed to collect data with consent, but then you need to be specific about what you're going to use it for.

Henrik Göthbrg:

So isn't that the same kind of logic and argument: don't regulate the technique. Data is open, but it's open in relation to an application that we consent to, and this is maybe a slightly different way of legislating or looking at these topics. So either everything is locked down, but there's an easy way to get an exemption; or everything is open, but you need to be very, very specific about what you're going to use it for. It's a different mechanic here.

Anders Arpteg:

What do you think about that? I mean, we also don't want to have a society where politicians are abusing the system like that, so that somebody would go and, like, prevent, you know, fair voting systems or whatnot, which I'm sure you could. But you see what I mean. It can be a slippery slope sometimes. If you start to use data in some way, it can easily move to the next thing, and it builds up these kinds of feelings of being safe, and sometimes it goes too far without anyone even noticing. It can be like boiling the frog, you know: suddenly the frog is dead without even noticing it.

Anders Arpteg:

So we really need to be, I think, very mindful here in finding the proper guardrails: how do we make sure that it doesn't become a slippery slope, and that we have some clear values saying this is too far?

Liza-Maria Norlin:

Yeah, and somehow, you know, humans are able to do both good and bad, so we need to have, you know, systems that somehow protect us or regulate us, so we will at least not go too bad.

Henrik Göthbrg:

Okay, let's try another angle then. What if this is only partly about policy and regulation on a high level, and then it's about implementation, implementation, implementation? So what we need to get to is a very, very strong process. And I'm actually working on my Data Innovation Summit keynote, and one of the storylines I'm thinking about goes like this: stop governing data, stop managing data. How can you say that? Start governing data work, start managing data work.

Henrik Göthbrg:

So when we say AI ethics or data governance or AI governance, we are talking in the abstract about how we govern data. What we really need to govern is the people working with data and how we're supposed to work with data. So we're going from a very fluffy idea to: what is the actual process, how do we have guardrails, how do we have structure, how do we have smooth processes, how do I get support guiding me in what I can and cannot do with data? So there's a distinction: do we want to govern data, or do we want to govern data work? Do you see? There's a mindset shift here, maybe. Can you give an example?

Henrik Göthbrg:

Yeah. So when we say data governance, all right, we say we need to have quality data. Now go to a large organization, where we use similar or the same data for many different purposes. So we take finance data and we use it to close the books. Then the data needs to be governed in a certain way to balance the books, and we can only use it in this way. And this is for the accounting. So we need to have a policy around data work in relation to accounting.

Henrik Göthbrg:

Then I go down and work with sales and pricing, and I'm trying to find the right price point on a piece of equipment, and I need to understand my cost. But this is a very different problem, right? We're doing data work where the cost dimension is just one part, and we're trying to build an algorithm or something else. So now we're doing something completely different, and we need to govern that data work. And what is data quality then? There, the books need to match 100%; that's the objective of matching up the books. Here, it's about finding a price point, maybe very fluffy, right, with hundreds of parameters. So what is data quality then? Can I have one policy and one process for both? "We have a data quality policy, and it goes like this." That doesn't work. You need to govern the data work in relation to the problem you're solving. So are you governing data, or are you governing data work? That's one way of looking at it.

Anders Arpteg:

Perhaps before we move to the final, kind of futuristic question, I'd just like to hear quickly your thinking about Sweden compared to other countries in the world when it comes to the development of AI. What do you think about the current situation we have in Sweden? I think Erik Slottner has been clear that he's very concerned about Sweden lagging behind a bit. Are you concerned about that as well, or what's your thinking about the current situation?

Liza-Maria Norlin:

Yes, I'm concerned, and I have been for quite many years. I mean, I think Sweden as a nation had great potential here, because our community, our people, use technology quite a lot. We were very early adopters of IT systems. I know this is also a problem for us, as we cannot be as quick-footed as we sometimes want to be, but we have, you know, good schools, engineers and everything. So we had an opportunity here, but somehow we weren't really in that race.

Liza-Maria Norlin:

And when I have compared us to other countries, even like Denmark and Norway, they invest more, also from the public perspective. They put a lot of public money into this. It's not going to happen without investments, and this is also where Sweden has been lagging behind for many years. So if we want to be one of the front runners, not only in the ethical discussion in the European Parliament, we can be that, of course, but really leading this. And because, you know, in Sweden we think we have great values here and everything, but we also have good trust from many countries in the world. So I think, especially when it comes to technology in the public sector, we could be quite successful here in exporting ideas as well. So I'm worried, absolutely. But I think we will catch up in the race.

Henrik Göthbrg:

But you mentioned investment as a key topic here. Elaborate a little bit more on that.

Liza-Maria Norlin:

But it's investment, of course, in knowledge, though I think there is more. This is not my profession, but there is an infrastructure needed to make this available. And if we just look at the political side of Sweden, we are very decentralized as a nation, which has meant, you know, 290 municipalities, and they all own their own sort of structures and investments. Things are going too slowly, from what I understand. When you look at countries such as Ukraine, they have invested in a national infrastructure that enables innovation and the use of technology in a different way, which Sweden hasn't done.

Anders Arpteg:

I'm very glad to hear you say that, because think about the AI divide that we're speaking about all the time, and the latest kind of investments that Microsoft, for example, is making. We had news last week that they're investing $100 billion in new compute infrastructure for Microsoft, and Sam Altman has spoken before about $7 trillion, like three times the value of the whole company. It's an insane amount of money. We know Google has an infrastructure that is far beyond anything else, and these kinds of companies are just running away with infrastructure and compute far beyond everyone else, and a single company or a startup or a public authority has no way to come anywhere close. But if we start to collaborate, as you say, to build up some kind of common infrastructure that we can share in Sweden, and perhaps in Europe in some way, then perhaps we can actually start to catch up a bit.

Liza-Maria Norlin:

Do you think that's something that we want to, or should we just join the giants and they will build it for us?

Anders Arpteg:

It's a very good question Because they're obviously best at it. Yes.

Liza-Maria Norlin:

Yeah, I mean, I don't know how long the Swedish authorities have been trying to build a video meeting platform that they can communicate on. That's why we're still using Skype.

Anders Arpteg:

You know, it's a very good question. I love it, by the way. I think actually, short term, we have to join. We should join. Long term, no, but short term there is no way we can quickly catch up.

Henrik Göthbrg:

But it also depends on what you mean by joining, because using the cloud infrastructure of one of the big boys in the short run is the only way to keep up, I would argue. But then you get to the long term of digital sovereignty. I mean, we had Eric Hugo here, who's South African, and we talked about what happens to the cultural values and ethical values of a country when everything is in one language. And I'll take an example: English is a very commercial language, English is a very consumer-oriented language. So simply by doing everything in English, or even doing it in Swedish but feeding it into a large language model that is actually English and then spits out Swedish, it actually shifts the cultural values of whoever is using it, right? So there is a real point about digital sovereignty and keeping, you know, those cultural values. But I think there is a deep question here.

Henrik Göthbrg:

When we say we need to have a common infrastructure, and this is a conversation Anders and I have had for years, even sitting in a bar using salt and pepper shakers to discuss it: when do you start out in the local domain, when do you go central, and what are the mechanics of central investment versus distributed use, right?

Henrik Göthbrg:

So what should not happen is a pendulum that swings all the way back. I mean, the real problem we have is that we have 290 municipalities, and if everybody starts doing data and AI by themselves, and we have talked about this in an enterprise setting, you're simply not going to get anywhere beyond Excel and Power BI, because it's too hard and too big an investment. So at some point you need to scale up platform perspectives. Now, the tricky point is the old way of organizing: people then think it should all be centralized, that the power should shift to the center, and this is also wrong, because that's too far away from the business problem, as we said. So how do you do federation? How do you do orchestration between a platform providing infrastructure as self-service and the distributed organizations?

Anders Arpteg:

And it's also about the type of infrastructure you need. I mean, for the low level, you know, building hard drives or machines, you don't need to have a Swedish version of that. But the higher up you go, making sure that we can keep the values that we believe in in Sweden or in Europe, that is higher up in the infrastructure tech stack, and that is where we should invest. That is where we, in the short term, can actually have a common infrastructure that meets the proper legal demands we need to have for Swedish authorities, for example, and that is something that can be built on top of the existing low-level infrastructure. And I think that would be a super fun, or fast, way to actually go about doing this. But Liza-Maria.

Henrik Göthbrg:

Now we're getting into hardcore, nerdy, techie stuff, which is actually where the conversation needs to be in politics. So what Anders said now is profound: the tech stack starts from GPUs. Do we need to invent our own GPUs? Probably not. Can we go a bit higher? And then you get to a point where we move from generic technology to something that matters to our values, something that makes us unique, or makes us independent, or makes us own our own data.

Henrik Göthbrg:

And clearly, if you think about it, the whole world is using Google or, you know, AWS or Azure. That's still fairly low in the stack, right, so everybody can use the same technology. Vattenfall is using the same technology as Google, Scania is using the same technology as Google, but it doesn't make them Google, because the next level is: what are the practices, what is the stuff that has been built on that cloud infrastructure, that sets Google apart? So you need to have a fair amount of understanding of this before you say, as an investor, a public investor of the state: we will use cloud in this way, here we will invest, this is what we mean by Swedish infrastructure.

Liza-Maria Norlin:

But can I ask you? Because from a political perspective it's always been very easy to say: let's invest in broadband, in Wi-Fi, in fiber or whatever. That is super, super low, you know, it's just super low, but it's like an infrastructure for things.

Henrik Göthbrg:

It's easy to understand.

Liza-Maria Norlin:

It's easy to understand. But this kind of investment that we need on a higher level, which I understand from a user perspective some countries have been successful with, and Ukraine states this has been important even for the war, because people can innovate and deliver services to people using their digital infrastructure. What kind of infrastructure is that? From a political point of view, what should we invest in?

Henrik Göthbrg:

But it's very simple.

Anders Arpteg:

Can we take one more hour to?

Liza-Maria Norlin:

speak about that, please? You can send me a paper.

Goran Cvetanovski:

Can I add something?

Goran Cvetanovski:

I think that what you mentioned is really important in some sense, because I don't think that the nerdy stuff should be discussed in politics. And you mentioned digitalization as well.

Goran Cvetanovski:

Digitalization is about infrastructure, about building a road that people can actually drive on, and then there will be entrepreneurs that will build, I don't know, a bensinmack or whatever it's called, a petrol station, and then a coffee shop or whatever it is. And when we are talking about how Sweden, in my belief, can catch up, it's one: creating public-private partnerships, because there are a lot of industries that would basically love to keep up with the market right now, but they need incentives so they can build. So if the politicians tell them: hey, we are going to go hard on AI, here is investment, a public-private partnership, give us the best thing that you can build. Make a fund for startup companies so they can build the next Google in Sweden instead of in the United States. That is how we're going to catch up. The politicians don't need to do the infrastructure.

Henrik Göthbrg:

The private sector will do it, and then we will use it. That is how we did it. But I want to go back, sorry, I really want to go back to Liza-Maria's question, because I think it's a really, really important question: what infrastructure are we talking about? And this is where you need some knowledge to differentiate between the apples and the pears. It's not just fruit; we're talking about fruit, yeah, but which fruit, right? And when I say it's super simple, I can give you a pointer: you don't need to do anything else than go to Scania and ask them how they are looking at this problem. They have been working on this problem for the last 10 years, and they have an idea of how to solve it. And it's the same problem.

Henrik Göthbrg:

In a nutshell, if you can imagine, one of the big problems is that you're not going to have super data engineering talent everywhere. You're going to have super data engineering talent somewhere, and you're going to have less experienced people who need to kind of be engineers but can't be expected to be superstars. They're not unicorns. So we need to create ideas on how we create infrastructure. We buy infrastructure, from Google for instance, but what they are doing as an enterprise is deciding: what are the standards, the practices, the processes, what are the experiences that we want people to adhere to? So then they build what is called infrastructure as code. They build an experience on top of this that is then used by many different departments.

Anders Arpteg:

And we could speak much more about this, and we actually should, but I think a good starting point: there is a government initiative called Offentlig AI. They have actually started working a bit in this direction, and if we could have more investment in that type of initiative, I think we would be off to a great start. So that, I think, would be a great way to start, at least.

Henrik Göthbrg:

So there are projects, and there are actually answers to this. But what you did now, you asked the right question. So Erik Slottner needs to ask the right question. You can't be broad-brush on this and say, oh, we need infrastructure, and then the knee-jerk reaction is: infrastructure means broadband. Right, that's what it means, you know. So now we have moved up in the stack, so to speak. But we need to know, when we invest, to aim carefully, not go too low and not too high. But I think there are answers here.

Anders Arpteg:

Hard to stop speaking about this.

Henrik Göthbrg:

I think this is where I would like to have Erik and everybody for five hours, not on the pod, two hours on the pod and then at least five hours afterwards.

Anders Arpteg:

There may be a future where we have AI that is super intelligent, that is not only human-level intelligence but far beyond that. A single AI system may be more intelligent and knowledgeable than all humans combined. For one: do you believe that will happen at some point?

Liza-Maria Norlin:

Close, at least. And that's just, you know, when I look at my grandma, what kind of society she lived in and what it's like now.

Anders Arpteg:

Well, it's possible. Let's assume, just for a hypothetical reasoning here, that it will come to a point. Some people believe very shortly, in a couple of years, others say decades, and other people never. But let's assume it will. What do you think will happen then? We can think of two extremes here.

Anders Arpteg:

One is the Terminator, the Matrix, a dystopian future where machines will try to kill all humans. Or it could be the other extreme, where we have AI that will provide more or less free energy, that has solved a lot of the medical issues we have and cured cancer. We will have basically free, or more or less free, products and services provided to us. We live in a world of abundance where we're free to pursue our passions and creativity as we choose; we don't have to work 40 hours a week and so forth. That's the other extreme. Where do you think it can potentially end up? Will it be more of the utopian kind of future, or the dystopian one?

Liza-Maria Norlin:

Well, I have to be positive when it comes to the future, because then again, it's us who are going to run this and create the future we will see. But to get there we need to be value-driven; otherwise we will lose track.

Henrik Göthbrg:

Love that answer.

Anders Arpteg:

Awesome. With that, thank you very much, Liza-Maria Norlin, for coming here, and great questions. I loved the discussions we had, and thank you very much for coming.

Liza-Maria Norlin:

Thank you so much for having me. Thanks. Thank you.

Parliamentary Procedures and Personal Background
Christian Democratic Perspectives on AI
Opinion Sharing