AIAW Podcast

E137 - Testing and evaluating AI - Petra Dalunde

Hyperight Season 9 Episode 6

In Episode 137 of the AIAW Podcast, Petra Dalunde, Swedish Coordinator for CitCom.ai at the AI TEF SCC, delves into the complex challenges of data quality and governance amidst the evolving requirements of the EU AI Act. She explores the heightened responsibilities imposed by Article 10, which demands rigorous data quality for AI systems, especially those classified as high-risk. This episode sheds light on the often subjective nature of data quality assessments in critical sectors like healthcare and underscores the need for professionalizing data quality roles to meet these stringent expectations.

Dalunde also emphasizes the importance of cross-disciplinary collaboration between policymakers and engineers to tackle these challenges effectively. Beyond policy, she introduces listeners to cutting-edge initiatives such as the Smart Twin project, which utilizes EU funding to empower SMEs in the realm of digital innovation. These initiatives, supported by Test and Experimentation Facilities (TEFs), provide essential resources to fuel AI-driven projects with applications ranging from traffic prediction to educational tools.

The discussion wraps up with a focus on the AI Act’s legislative framework and the subsidy opportunities it presents for Swedish SMEs, ensuring they can access critical support without violating state aid regulations. This episode offers a comprehensive look at the role of AI in smart cities and the strategic funding opportunities available for SMEs to flourish within the EU’s digital landscape.

Follow us on YouTube: https://www.youtube.com/@aiawpodcast

Henrik Göthberg:

Petra, why don't you let us in on this story, what you're thinking and what the background is here?

Petra Dalunde:

Yeah, so I read a report on data quality today, actually, going deeply into data quality in the framework of the AI Act, and I realized that the demands in Article 10 are so high and specific. My previous understanding was that any good AI system could suddenly end up in a high-risk category: we developed it for something else, but in this new context it would be a high-risk AI system, so we need to certify it. But I realize that you have to back down and start all over again and fix all the compliance work on data.

Anders Arpteg:

So your epiphany, so to speak, was that it is potentially more work than people think to be compliant with the high-risk requirements.

Petra Dalunde:

Yes, and it may not even be possible: too expensive, or simply not feasible, to use that particular system, even though it seems to be the right one, because you cannot be compliant.

Henrik Göthberg:

You're highlighting that you went down a path thinking, ah, this is not Article 10. So we go down this path, and if you then want to transition into another area which now falls under Article 10, it's not that easy. You have made technology choices, or data governance choices, or data quality choices, that don't hold up anymore. So the design pattern, what we have built, is not really an Article 10 type system.

Petra Dalunde:

Exactly so. For me, this turned my understanding around: in the best of worlds, everyone would follow the Article 10 specification on data quality, and then any AI system could become a high-risk system, if it is suitable.

Henrik Göthberg:

That is my way of solving it. Basically, if you're very, very professional on computational data governance, on governing data in code, and that is your standard, it's not really a problem, is it? But if you're trying to do this in policy and bureaucracy, then it's not so easy to switch. Is that fair?

Petra Dalunde:

Yeah, I agree.

Anders Arpteg:

But how do you determine what data quality is then? Can you really automate that? Can you say that if you have this type of data, for example, in a data set, then it is high quality? Or how do you really do that in an objective way?

Petra Dalunde:

Well, there are standards for this, and don't get me started on that, because there are so many categories and subcategories and standards and methods that you need to use. So I think you can, and that is...
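[Editor's note: as an illustration of the kind of rule-based checks such standards describe, here is a minimal sketch in Python. The columns, rules, and thresholds are invented for the example and are not taken from any particular standard mentioned in the episode.]

```python
# A minimal sketch of automated data-quality checks on a tabular dataset.
# Assumes pandas; every rule and threshold below is an invented example.
import pandas as pd

def run_quality_checks(df: pd.DataFrame) -> dict:
    """Run a handful of typical rule-based checks and report pass/fail."""
    return {
        # Completeness: at most 1% missing values in any column.
        "completeness": bool(df.isna().mean().max() <= 0.01),
        # Uniqueness: no fully duplicated rows.
        "uniqueness": not df.duplicated().any(),
        # Validity: ages fall within a plausible range (a domain-specific rule).
        "validity_age": bool(df["age"].between(0, 120).all()),
        # Consistency: no timestamps in the future.
        "consistency_time": bool((df["recorded_at"] <= pd.Timestamp.now()).all()),
    }

df = pd.DataFrame({
    "age": [34, 51, 7],
    "recorded_at": pd.to_datetime(["2024-01-02", "2024-03-05", "2024-06-30"]),
})
print(run_quality_checks(df))  # {'completeness': True, 'uniqueness': True, ...}
```

[Checks like these automate easily; which rules and thresholds are the right ones for a given use case is exactly what the discussion turns to next.]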

Anders Arpteg:

I think it's very subjective actually.

Henrik Göthberg:

But let me offer you a proposal. So what is data quality? Isn't it really use-case defined? You can have fuzzy data here, and fuzzy logic, and a fuzzy insight that is valuable; but over here we are matching accounts in finance, and it needs to be perfect. So in that sense I'm fully with you. But you can standardize and professionalize data quality work, can't you?

Anders Arpteg:

I would love to see it. I haven't seen it yet, though.

Petra Dalunde:

I will give you the report. It is written by my colleague, Nishat Mowglad.

Henrik Göthberg:

Okay, I'm humble, maybe you're right. But my argument here is basically: aren't there fundamental practices, professional practices, that you can follow and that then guide you to do the right thing?

Anders Arpteg:

Okay, so how do you remove biases from data? I think it's such a stupid thing to write into an article. I mean, bias is one of the most important things to have in a data set for it to be useful; if you remove all biases, it is more or less completely useless. Okay, so what type of bias should you have then? These are questions that are super hard to define, and that is not easy to standardize or automate.

Henrik Göthberg:

I would say no, and the argument is: it's not easy to standardize and automate what data quality is, but you can raise the level of professionalism and practices.

Anders Arpteg:

So you can actually... Sure, we can be better at it, but it's super subjective today, I would say, and I haven't seen any good, you know, best practice for it.

Petra Dalunde:

Yet. Ah, you're right, because we haven't all started to contribute to this.

Anders Arpteg:

But I hope that you will fix that.

Henrik Göthberg:

But this is the real challenge, right? The people charged with making the legislation as policy, and the people charged with writing about data quality on these topics, are perfectly fine and very useful. But then there is reality, and now we are talking about hardcore engineering reality, and this is the tricky point. We need to co-create between these dimensions, in my opinion. So are we talking about data quality, or about the AI Act, or about the fundamental, real engineering problem? Is it an engineering problem?

Anders Arpteg:

I'm not 100% sure.

Henrik Göthberg:

No, but somewhere it becomes an engineering problem.

Anders Arpteg:

Then it's easier, I would say. I think the hard thing is really before you can automate it in some way.

Henrik Göthberg:

Elaborate. You're on the right track.

Anders Arpteg:

Take the bias thing; it can be many different things besides bias. The data should be relevant to what you're training for, and representative, and so forth. But if you take the bias thing: what should we take, some kind of medical application? Let's take X-ray scans of brain tumors, or whatnot. It could probably be classified as high risk as well; it's an important application of AI. Then, okay, should the race of a person, for example, be considered a bias or not? Maybe, if it's really imbalanced in terms of the race of the people used to train the model, that could be an issue; that could be a bias that is inherent in the data set and that should be fixed. Potentially, that's rather easy to see. But then if you go further and say that perhaps people of a certain race have a certain tendency to get brain tumors, what happens if you remove that?

Anders Arpteg:

Then we actually remove important information from the prediction that we wanted to make. So it's not as easy as saying we want 100% equality of races in the data set; no, then you could actually be removing very useful information.
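[Editor's note: a small synthetic sketch of the point being made here, with invented group names and rates. Equalizing group sizes in a data set is one thing; resampling away a genuine outcome difference would erase the very signal the model needs.]

```python
# Synthetic illustration: group B is under-represented AND, by construction,
# has a higher base rate of the condition. All numbers are invented.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 10_000
group = rng.choice(["A", "B"], size=n, p=[0.8, 0.2])
base_rate = np.where(group == "B", 0.30, 0.10)
df = pd.DataFrame({"group": group, "tumor": rng.random(n) < base_rate})

print(df.groupby("group")["tumor"].mean())  # A ~0.10, B ~0.30: a real signal

# Fixing the representation imbalance: downsample A so group sizes match.
# This is the relatively harmless intervention.
n_b = int((df["group"] == "B").sum())
balanced = pd.concat([
    df[df["group"] == "A"].sample(n_b, random_state=0),
    df[df["group"] == "B"],
])
print(balanced.groupby("group")["tumor"].mean())  # rates unchanged: ~0.10, ~0.30

# What we deliberately did NOT do: resample until the positive rate is equal
# across groups. That would "remove the bias" in the naive sense, and with it
# the real base-rate difference the prediction depends on.
```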

Henrik Göthberg:

I love what you did now, and I realize I need to rephrase the way I say this is an engineering problem. You are absolutely right, but I'm also right. It's an engineering problem in the sense that once someone has decided what is right or wrong, it's quite an easy engineering fix. But what I meant, and I think you highlighted it beautifully, you made my point, by the way: you need engineering competence on how AI models work in order to understand at all what a useful policy is. That's what I meant. You need engineering skill of real depth to understand how the fuck you should interpret bias in race. No one can do that without you, Anders. I can't do it, Anders.

Anders Arpteg:

No, I can't, I can't either.

Petra Dalunde:

But also, if you remove all biases, that becomes a bias.

Anders Arpteg:

Yes, yes, exactly. Oh, I love that, because it's not representative anymore. Did everybody get that joke?

Henrik Göthberg:

That was a deep joke.

Anders Arpteg:

That's a very good one, that's a good joke. Oh, that is so... That's a t-shirt quote.

Henrik Göthberg:

That's a t-shirt: if you remove all bias, that's bias, because it isn't representative anymore.

Anders Arpteg:

Oh, I love it. Okay, this will be an interesting topic to discuss more, I think. But before we do that: very welcome here, Petra Dalunde. You're coordinator of testing AI at RISE, right? Please describe your background. Who is Petra Dalunde?

Petra Dalunde:

Okay, so my title at RISE is coordinator of test and evaluation of AI, and I am also coordinating one of the four TEFs at EU level, in Sweden. And TEF is Testing and Experimentation Facilities.

Henrik Göthberg:

Okay, there's so much to unpack here. What's the difference between test and evaluation?

Petra Dalunde:

Okay, so I added evaluation, because to test AI can mean both to try it out and to test it against standards, compliance methods and such. And I have also realized that talking about testing AI, or test AI, could mean developing AI to test AI. So I added the evaluation part to make it clear that it's about testing and evaluation of AI models.

Henrik Göthberg:

I like that explanation so much.

Anders Arpteg:

Thank you. And TEFs, is that also part of an EU-funded operation, or is it more RISE-specific?

Petra Dalunde:

No, it's 2.2 billion euros spent on four TEFs over five years.

Henrik Göthberg:

In Sweden or in Europe? In Europe. And what is a TEF then?

Petra Dalunde:

A TEF is the EU Commission's way of helping SMEs with all the legislation that is coming: they can come to these experimentation facilities and get help with compliance and with testing in real test beds, to make sure that the AI that is developed and put on the market is safe, compliant and trustworthy.

Henrik Göthberg:

And maybe we should spell it out: what does TEF stand for?

Petra Dalunde:

Testing and Experimentation Facilities.

Henrik Göthberg:

Testing and Experimentation Facilities. You heard it here first. If you haven't heard of it before: this is an important topic for small and medium-sized businesses in Sweden. Don't go home and do this alone.

Anders Arpteg:

No, is this related to the sandboxes that people speak about in the AI Act as well, or is it a different thing?

Petra Dalunde:

The sandboxes will be connected to the TEFs. Okay, and since it's so much money out of the Digital Europe Programme's complete budget for 2022 to 2025, we are connected to almost everything concerning AI. So the TEFs are, yeah, they're hoping for a lot from us. And still, what we are going to try to do is to create a bilprovning for AI systems.

Henrik Göthberg:

I love when someone makes an example, a way to communicate, that everybody can understand. Bilprovning, okay, and what is the English word...

Anders Arpteg:

for bilprovning?

Henrik Göthberg:

Yeah.

Anders Arpteg:

Car test.

Henrik Göthberg:

Car authority certification; what you do with your car, you take it somewhere to have it tested, that it's roadworthy. I'll have to ask Google about that. Bilprovning in Swedish.

Petra Dalunde:

Yes.

Henrik Göthberg:

I love it.

Petra Dalunde:

Great. So that's what we are trying to build, and also giving access to real test environments. And there are four TEFs. One for smart and sustainable cities and societies; that's where I work. Then there is one for health, which is AI within medical diagnosis, medtech and such. And then there is AI Matters, which is production, manufacturing, industry.

Henrik Göthberg:

Is there a reason, a thinking, and of course there is, for having the TEFs more domain-oriented?

Petra Dalunde:

I'll come back to that, I'll just say the last one. And the last one is the agri-food TEF, which is food production and agriculture.

Henrik Göthberg:

So the four main TEFs. Sorry for interrupting you, so we do it properly.

Petra Dalunde:

Smart cities, health, industry and manufacturing, and then agri-food.

Henrik Göthberg:

Agri-food.

Petra Dalunde:

And yes, I think there is a connection to how they created the data spaces. I think they regarded all the TEFs as being domain-related to the data spaces.

Henrik Göthberg:

But I think it goes back to the opening here: you need to build up precedent, you need to build up a case understanding of how to deal with bias or whatever. And you cannot have domain competence in all different areas; how could you understand bias in all different areas? You can't. So it makes total sense to me to have a more domain-oriented understanding of this in order to get further.

Petra Dalunde:

And just one thing more, because they planned all this before ChatGPT came.

Anders Arpteg:

So everything has changed since then, just saying. That's a good point, and yeah, let's see if we can get back to that shortly, but let's finish your introduction as well. So you have been working at RISE now for how long?

Petra Dalunde:

Almost two years. Two years. And before that I worked at AI Sweden for three years; that's where I learned the AI domain. I worked a lot with understanding the transformation that organizations need to go through to be able to use AI. And before that I've done a lot of things, but the one epic fail I did was building a smart city test bed from within. I never understood why the customers never came.

Anders Arpteg:

Let's get back to the smart cities; I think we need to dig deep into that shortly as well. But just before we do that, can you elaborate a bit more on the current role you have at RISE?

Petra Dalunde:

And what is RISE?

Anders Arpteg:

To begin with.

Petra Dalunde:

RISE is Research Institutes of Sweden. It is several research institutes that were merged into one, and we have five divisions, 3,500 employees scattered all over Sweden, headquarters in Borås.

Henrik Göthberg:

Borås, Borås! I grew up in Borås and I have an understanding of the institutes, Statens Provningsanstalt and so on; that is still a big part, of course, if you look at it. For someone who doesn't know RISE, I think more people actually know about Statens Provningsanstalt. It's such a massive institute.

Petra Dalunde:

Yeah, so it's Acreo, SICS, Statens Provningsanstalt, the Viktoria Institute, a lot of research institutes.

Henrik Göthberg:

And then many, many small ones, exactly. It was 10, 20, you know, when they started to pull this together.

Petra Dalunde:

Exactly. So now we are five divisions, and I work in the digital systems division, at the prototyping society department and the connected society unit. But that is kind of irrelevant, because I have colleagues in my team that come from all over RISE.

Henrik Göthberg:

And maybe what is more important, then, is to understand how it's funded and how it's brought together. So, typically, if we take the stuff that you are working on now, what is the gel that pulls this together as one cross-functional team, if you like?

Petra Dalunde:

Okay, so we are funded 50% by EU and the rest is funded by DIG, the Swedish authority, through Vinnova.

Henrik Göthberg:

Yeah, and then you're working towards some key objectives that sort of gel you all together. How should we understand it? As you explained, you have colleagues that come from different parts of RISE, but what makes the gel? What is the team, so to speak: the KPIs for the Smart Twin project, experts and also project managers? If I take a sort of enterprise corporate view, can I understand this as a program? Are you collecting resources under the umbrella of a program, or a project, or how would you define it for someone who's not used to working in this way?

Petra Dalunde:

So we started January 1st 2023, last year, and now we have realized what we want to do. Well, we realized that quite quickly, because we talked to a lot of SMEs: what do you need in relation to the AI Act? Do you need a physical test bed, or do you need legal support and guidance?

Henrik Göthberg:

Or everything in between?

Petra Dalunde:

All of the above, yeah.

Henrik Göthberg:

Fantastic. And your role more specifically, if you were just to elaborate on that?

Petra Dalunde:

Yes, okay, so I'm coordinating, I'm leading the team, I think we are 12 people now. And I'm also responsible for work package five in this fantastic project, which is marketing and communication. So I'm also head of communication in the large project at EU level, so I'm in the management group as well.

Anders Arpteg:

And I guess we should move into the TEF as well. Just explain what that is. Perhaps that's a good point to do so.

Petra Dalunde:

This is the TEF we have been talking about now, the smart cities one, I was thinking. Yes.

Anders Arpteg:

Okay, so please elaborate: what is the smart cities and societies one, or what is it called?

Petra Dalunde:

Smart and sustainable cities and societies. When the EU designed it, they were talking to two networks. One global: OASC, Open and Agile Smart Cities.

Anders Arpteg:

Open and agile smart cities.

Petra Dalunde:

It's a global network that focuses a lot on MIMs, Minimal Interoperability Mechanisms, where AI, data formats, APIs and such are included. And the other network was Living-in.EU, and that's many EU cities that have created a network to work together, and their aim is to make digitalization add value for citizens. So the EU Commission asked them: given what is coming in legislation, the AI Act and so on, what do the cities, the smart cities, need? What help do the SMEs that deliver to smart cities need?

Anders Arpteg:

Can you give a concrete example of a typical smart city kind of actor, or application, or something?

Petra Dalunde:

A municipal administration is a common customer of an SME that has developed an AI system that could add value, and it could be within school education, or social services, or traffic prediction models.

Henrik Göthberg:

So, to understand who the TEFs are for, who we are helping: we are thinking about small and medium enterprises, SMEs, who are developing products and services in order to accelerate the path towards smart cities.

Petra Dalunde:

Exactly so.

Henrik Göthberg:

They are the ones we can subsidize and help, and they are the ones we offer the services we have developed, services we have understood they are asking for. And this can be incumbents, companies that have been around for 20, 30, 40 years, and it can be startups, but the common denominator is that they're planning to innovate, so to speak, and it has some degree of data and AI in it.

Petra Dalunde:

Exactly, and also we can receive paying customers, and that could be anyone.

Henrik Göthberg:

Okay, so someone else can ask.

Petra Dalunde:

Yes.

Henrik Göthberg:

But could we go a little bit into the subsidy? Because I think this is very important to understand: like the target audience, the SMEs, there's actually money on the table for the smart people who go and get help. So what frames the subsidy, and how do you qualify for it?

Petra Dalunde:

To qualify you have to be an SME registered in Sweden, and you must not already have used the 300,000 euros you can receive in state aid, like during the pandemic, over three years. There are other things you can apply for, exactly, and if you've used that already, then you can't really double dip. No, you cannot. So you will have to fill in the de minimis declaration, as it's called, and there you also sign that you are an SME according to EU legislation, and within the smart city domain.

Anders Arpteg:

And the state aid, I guess you get that through Vinnova, or is it some other way you get the state aid?

Petra Dalunde:

Exactly, so through this.

Henrik Göthberg:

Is there then a different call that you apply for, or something like that?

Petra Dalunde:

No, we have services online now, a service catalog, and if you find one that you think is interesting, you contact us.

Henrik Göthberg:

And then you help the process.

Petra Dalunde:

And then we get things going. That's better.

Henrik Göthberg:

Now, just to be sharp: when we talk about a TEF like this, is this for Sweden only? Because we have listeners elsewhere, of course. When the EU asks you to set up a TEF, how broad is your EU scope here?

Petra Dalunde:

That is a very good question. In this TEF we are 11 member states, divided into different nodes, but so far we haven't been able to figure out how a Swedish company could go to the Danish TEF and get subsidized. We haven't figured out the agreements and everything on that, so for now you have to go to your home country.

Henrik Göthberg:

So the simple advice: always go through the single point of contact. The SPOC should be your local one, your Swedish one.

Petra Dalunde:

Exactly.

Henrik Göthberg:

But then, when you look at the TEF's total capability, it's broader than Sweden; it's actually 11 nodes in this TEF. So it could be that from your SPOC in Sweden you're then guided through the maze, so to speak.

Petra Dalunde:

Exactly. That is a very good explanation or description.

Henrik Göthberg:

To start local and then from the local you get access to what the TEF really can do.

Petra Dalunde:

Exactly. And also, I have to say, RISE has three TEFs; we also have the agri-food TEF at RISE and the health TEF at RISE.

Henrik Göthberg:

Okay, and the fourth one.

Petra Dalunde:

Manufacturing, not in Sweden.

Henrik Göthberg:

Interesting. And if I'm in a manufacturing company, what do I do then?

Petra Dalunde:

You can contact me. If you are listening to this, you can contact me and I will guide you, or you can look it up online.

Henrik Göthberg:

Because I can see how you only have nodes in Sweden for three of them, but there needs to be an access point for the fourth TEF, then, I argue. That is super simple. Is that fair?

Petra Dalunde:

Yeah, fair. But I don't think any of us can receive a customer from abroad yet.

Henrik Göthberg:

No, so that's to be figured out.

Petra Dalunde:

Yeah, definitely.

Henrik Göthberg:

Yeah, it's all new. It's all new.

Petra Dalunde:

Yes, and also that we can actually charge money for services in an EU project; that is also new. Interesting.

Anders Arpteg:

So you will be a for-profit then, more or less.

Petra Dalunde:

Yes, eventually. And yeah, we are figuring that out as well.

Henrik Göthberg:

And I think that's a good thing. These topics, this is hard stuff, and this is a service. And I said it before, I actually made this proposition without knowing all the details that I'm learning now, and it excites me. I said to someone who is in parliament: you know what, we should be world class, or the European best in class, at a smooth EU Act. If we can be smooth on the EU Act, and smooth on financing and VC, we get people here. So from that perspective, there's a no-regrets investment going into this. And damn, if the fucking consultants and the lawyers get this bill when RISE should have it, or someone else should have it; we should make money on this. So I think it's a good idea that this is a for-profit, eventually, thinking that this is mature.

Petra Dalunde:

I agree that this is mature.

Henrik Göthberg:

And you know what? Do you agree?

Petra Dalunde:

I agree, and I want us in Sweden to start figuring out together what it will be when the TEF project has ended. Who will own the bilprovning?

Henrik Göthberg:

Exactly.

Petra Dalunde:

Who will be able to sell services there? Yes, exactly, we need to figure this out. I cannot figure it out on my own, so I need you guys.

Henrik Göthberg:

Because it needs to go, as we have said so many times, beyond the PoC, beyond the project, and become operations, become product, become something that is sustainable. I fully agree.

Anders Arpteg:

And just to get a bit more concrete: if you are an SME and you potentially want to be connected to the TEF, what can you get help with? If you were to give some concrete thoughts on that.

Petra Dalunde:

Okay, so one service that we actually have delivered on is policy evaluation guidance, as it's called. So we take an AI system from an SME and see if it ends up at the high-risk level.

Anders Arpteg:

Can you help assess that as well?

Petra Dalunde:

Yes.

Henrik Göthberg:

Ah, you have legal experts. That is a huge value.

Anders Arpteg:

Just that in itself. I think a lot of people or companies would pay for it.

Henrik Göthberg:

Yeah, pay for it.

Petra Dalunde:

Yes, so that is something we do, first together with our legal experts, and then we go through the whole thing: the data quality, the process, the usage, the applications and everything, and we see what the company needs to do in order to be fully compliant.

Henrik Göthberg:

I'd rather see RISE as a governmental cash cow on these topics than consultants and lawyers. To be honest, I really do. Why not? It's awesome. And okay, so this is already there now.

Petra Dalunde:

So already now, you can go and get subsidized money to get consultant support from RISE, rather than going to...

Henrik Göthberg:

Just want to say, we call it policy evaluation guidance, because we are not consultants. We cannot advise.

Henrik Göthberg:

Who can? Fuck that. Sorry, my lingo. Management consultants, fuck them. They cannot do that either, by the way.

Petra Dalunde:

Okay, but we are spelling it out clearly.

Henrik Göthberg:

You're too modest. I'm getting upset because you're too modest, because you're doing exactly what everybody else is doing, and you're probably doing it better, because you're closer to the real regulation and closer to the real channels. What you are doing is way better than what most management consultants and lawyers in town are doing, and charging a shitload for. So I'm just giving you a boost here: you are on the right track and you should not diminish that. Sorry, I'm getting excited, because you are so on the money.

Petra Dalunde:

Thank you, I hope so. And the beautiful thing with working at RISE is that we have research projects at all times that follow what's happening, because this is going to keep moving.

Anders Arpteg:

Yes, these are moving targets. Okay, so that by itself, I think, is super useful; so many companies would love to have help both with the risk assessment itself and with some guidance on how to become compliant. But how do you then do the guidance? Are there some test beds and such that you help work with, or is it more from a theoretical point of view, so to speak?

Petra Dalunde:

It is theoretical, but we use the customer's data and AI model as the example, so to say. We don't test it on any platforms or anything, not yet, not in this service. That would be a next service.

Petra Dalunde:

So now we are looking into whether we are actually going to test the same AI systems: what do we need to build in order to say, yes, you are compliant? We are not there yet, because standards are lacking and so on; we don't have enough experience, but eventually we want to get there. But what we can do is let companies come and collect data, and support them in developing AI systems from the data, teach them how to get the data, if you want to develop something for social services or schools or something. So we have a fair data lab for that.

Henrik Göthberg:

That's fantastic.

Petra Dalunde:

We also work a lot with cybersecurity, so we have cyber range services as well. And we also have access to...

Anders Arpteg:

Cyber range services. I'm thinking about gun range here, but I guess in this case it's like a cyber attack range.

Petra Dalunde:

Yeah, cyber security services.

Anders Arpteg:

Yeah, I love the name.

Henrik Göthberg:

Cyber range, because what's beautiful with that name is that there's a range of attack surfaces, of course, where you can have attacks or problems with cybersecurity. So cyber range is good.

Anders Arpteg:

I was thinking more: it's cool to have a gun range, it's cool to have a cyber range.

Petra Dalunde:

I want to have an AI range. Yes, awesome.

Henrik Göthberg:

But I get this. You put a picture in my head with Bilprovningen, and I think: no wonder, we are starting a bilprovning from scratch and we don't even know what facilities should be included in a bilprovning. So if you take that metaphor, that analogy, to the max: first, we can theoretically look at the car and write policy on what you should look at. Secondly, okay, how do we do the brake test? How do we do the engine test? How do we do the CO2 test of the car? All this needs to be figured out. And in the end, we can package this, in my head, as bilprovningen. So we are, of course, starting with what we should test, then how we should test it, then the guidelines for how to test it, then how to productize it, and then, ultimately, how to build the toolbox: what should the bilprovning facility look like?

Petra Dalunde:

Exactly.

Anders Arpteg:

But I guess we don't know exactly how to do it yet, right? No. But, and it would be fun to hear what you think about this, it's rather strange to pass a law that is enforced before we know how to be compliant with it. Isn't that strange?

Henrik Göthberg:

Isn't that normal? Maybe it's strange, yes, but...

Anders Arpteg:

Have we done it in any other way in the past?

Henrik Göthberg:

I mean, if you shoot someone, you know; it's easy to say that. But honestly, when you pass a law as policy, it is before you have case law, or ways to apply it, or toolboxes for it.

Anders Arpteg:

Is there any difference?

Henrik Göthberg:

I'm not sure. I'm too uneducated.

Petra Dalunde:

I have had the opportunity to co-develop a law, EU legislation, and that was on drones, the drone legislation in the EU. I was participating, and the EU Commission was inviting the airports, the business, the airlines, those responsible for the airports, the aviation authorities.

Anders Arpteg:

Luftfartsverket, the air administration.

Henrik Göthberg:

Luftfartsverket in Swedish.

Petra Dalunde:

Exactly, and also the drone business and the cities. So I was there representing Stockholm, and we met every six months or something, agreed on wordings, talked and learned from each other and looked at all the perspectives. And when the legislation was done after two years, everyone was on board.

Henrik Göthberg:

But was it different, in that sense, if you compare it to how you interpret how the AI Act came about? Did you know then how to act on the law, or was that also a work in progress, or is that different?

Petra Dalunde:

I think it's different. And I never thought I would say this, but drones are only three-dimensional; AI has so many more dimensions. So I think it may not be possible.

Henrik Göthberg:

So the point was: when you were drafting that law, you were able to be more concrete, because it was only three dimensions, if we take that example, and you actually knew where it was leading. Whereas when we look at the AI Act, it's on such a high level compared to your hardcore engineering example.

Petra Dalunde:

Exactly. And who would you invite from the authority level in the member countries?

Henrik Göthberg:

No, they don't know it. Exactly, they don't know it.

Petra Dalunde:

So there's a huge lack of understanding. There's a difference, right?

Henrik Göthberg:

So I guess you're right; then it's a spectrum. I think it's a good point.

Anders Arpteg:

It's a spectrum. Some are more concrete than others.

Anders Arpteg:

But there should be at least some threshold, I think, to the legal uncertainty that is acceptable for any law that's passed. I think you're right, Henrik, that we need case law to really know where the limit is. But if there's too much uncertainty, you have no idea what to do or what the consequences of the demands are. It's very easy to set up a set of demands: it has to be high quality, it has to be non-biased, it has to be representative. If you just say that without really trying it out in some real use case to know if it's even possible, that's strange to me.

Henrik Göthberg:

Yeah, but this is a really, really, really good conversation to have, because I can put people into camps now. There are people in camps that I listen to, really smart, really respectable people, who argue: oh, it's great that we now have the guidelines at the top level, so now we actually have a framework for how to work. And the argument is that it should even be easier to innovate when we have the frame guidelines, and I believe that is completely, 100% true if I take the drone example, where we are more concrete. But I'm not so sure what happens when you make the same argument and there are so many layers of uncertainty underneath. You did the sign language as I was talking. So the whole argument, oh, this makes it easier, or this makes it harder: I don't know. For this case it's not that simple. I think that's a big, big insight. Do you agree? Are you sitting here nodding?

Petra Dalunde:

I agree, and it's not simple, but what would be the alternative?

Anders Arpteg:

Exactly. I think we can all agree that we need regulation in place. We can't live in the wild west; some regulation is necessary.

Henrik Göthberg:

I think we all agree on that.

Anders Arpteg:

But then the question is really: what level of concreteness do you need in the legislation for it not to cause what GDPR actually did? In my view, the level of uncertainty caused companies to not even go there, to remove the data they had and the services the data could provide. And if that's the case, it's really dangerous. But on the other hand, if we have services like the TEFs and what you're doing at RISE, and we are better at that in Sweden than perhaps other countries, it would actually be a benefit for Sweden. We could potentially make this an opportunity. That's my argument: Sweden, actually Swedish companies, have a big upside.

Henrik Göthberg:

Sorry for jumping back and forth and jumping in, and you need to steal the table back, Petra, because I'm so excited. What we are saying now is, to me, profound. My takeaway is that, yes, we need regulation, but we need extreme humbleness about when it's concrete and certain. We need to be very humble about this spectrum, and maybe we need to treat it differently along the way from the concrete end to the high-uncertainty end. And maybe that's the problem right now: we have treated regulation, and gone into it, as if it was concrete, when it's nowhere near concrete.

Henrik Göthberg:

So it's not if or what, it's how we do it in relation to the VUCA, the volatility, uncertainty, complexity and ambiguity, of what we are dealing with. It needs a process, and, as you said, maybe you need to stay at an even higher level and not try to treat it as if it's clear, because it's not.

Petra Dalunde:

Well, no, it definitely isn't, and I think we need each other. We need different use cases, and we need to learn together. I think that is crucial; otherwise no one will be able to do this alone. And we are talking about an emerging technology; it's changing so fast.

Henrik Göthberg:

Emerging technology on top of that. But, Petra, you said: okay, but do we have any other option? In principle, no other option. In practicality, yes: huge options in how we tackle it. So how would you elaborate on, let's not dig into the past, but moving forward, what is important? Because I think there is a lack of humility about how uncertain this is, an emerging technology and all, compared to other situations where we make law. So: do we have another option, me and Anders? 100% no, we don't.

Anders Arpteg:

I mean, we can compare to other legislations out there. The UK has their version, which is different from the European one; America has a different one; China has a different one. They all take different approaches, and we could compare them and see what is potentially a better one. I don't have the answer to that, but there are different ways to approach this, for sure.

Petra Dalunde:

And if someone wants to take the lead on that, I will definitely have large ears, but I don't think it's in my job description.

Anders Arpteg:

No, no, it's not; we are speculating, yeah. Okay, should we take another topic, potentially? The sandbox has been spoken about a lot, and we mentioned it briefly before. It's partly connected to the TEFs, and I guess there will be a Swedish official sandbox as well, right? Can you elaborate?

Petra Dalunde:

So there are sandboxes and there are sandboxes; again, we have terminology. The EU AI regulatory sandbox, that is the one named in the AI Act. Every member country will get financing to establish one, and in Sweden the government has commissioned an investigation to see who and how.

Anders Arpteg:

Is it decided who will get it?

Petra Dalunde:

I don't know, it's an ongoing investigation, and we are terribly late on this, by the way. The investigation will be done in like a year, or before summer next year, and then we only have one year to get things going, because that's when the AI Act enters into force on the certification demands and high risk.

Henrik Göthberg:

Is there a reason why we're late on it?

Petra Dalunde:

If there is one, which there probably is, I don't know it.

Henrik Göthberg:

Because I asked the question since, if you understand the reason why we're late on it, then you can fix that reason and maybe start earlier. The tricky point is that you have a very, very short time to execute on operationalizing a sandbox.

Petra Dalunde:

Yes, but there are others; IMY is on the ball, they have already established one. The thing with the EU AI regulatory sandbox is that it has to meet certain criteria. You have to have the authority involved, and it's for when you use GDPR data: it has to be secure, and there you can deviate from the legislation when training the model, and then you have to prove that the model will not reveal sensitive data. So that is what it is. But then you can have a regulatory sandbox where you play around and learn. It may have to be safe and secure, because you're working with sensitive data or GDPR data, but you don't have to have the authority involved in that; you just need to learn. And then you have what some even call experimenting together.

Anders Arpteg:

Is that the sandbox? Can you set up your own sandbox, you think, in some way?

Petra Dalunde:

Yes, but it won't be the EU AI regulatory one.

Anders Arpteg:

I heard someone saying it will be more or less one official kind of sandbox authority in some way, but then startups or other companies can apply to be certified sandboxes in some way. Is that true or do you know?

Henrik Göthberg:

I don't know enough about that, but let's take one step back here. What is the objective and the goal of having sandboxes; what problem are they there to solve? Because now we're looking at legal sandboxes, and technical ones, and experimentation. I mean, with GDPR, the whole idea was that I got no comments until I did something, and then I could get comments. That's the point, right? No one would comment on whether something was good or bad until I had built it. Lived there, been there, done that. So I guess it's in that vein: how can we build something super fast and simple that we can get comments on? It's that simple.

Petra Dalunde:

Yes, and you don't need to get comments if there is no GDPR data or sensitive data in it. So that's the case, and the EU Commission wanted the TEFs to be able to provide these services, so now we are looking into that as well. It has to be safe and secure, and you have to have experts there to support the task, so to say. But also, from the beginning, the TEFs weren't supposed to test general purpose AI or large language models either, so that has also become something handed to us to solve. So there are a lot of things happening as we go along, and I don't think it will stop.

Henrik Göthberg:

No, it won't stop.

Anders Arpteg:

I think the whole work on general purpose AI. Can we take that perhaps later? I think it's such a big topic. I would love to go a bit deeper there.

Petra Dalunde:

Me too.

Anders Arpteg:

But okay, going back to the sandbox: we don't know yet who will operate that in Sweden, but at some point, hopefully sooner rather than later, we will have some more clarity around that. And then if you have company X, they want to become compliant, but they want to experiment here in the sandbox.

Henrik Göthberg:

Or even: they want to understand if this is worth pursuing. I think even earlier it's a valid point to go to a sandbox to understand the implications of what you're trying to do, to understand if it's legal. Is this an Article 10 type problem or not?

Anders Arpteg:

I guess you could think about different stages here. If you have a really new idea, nothing has been done yet: you have no proof of value, or proof of concept, or prototype, certainly not a product. Could it be more like a checklist kind of thing, where you ask, can I test if this idea would work, and then very quickly get a no, and potentially you can just scrap the idea? Is that potentially what the sandbox could do, or is it later in the process?

Petra Dalunde:

Definitely yes, and also give you input, as in Henrik's example: you can get a response.

Henrik Göthberg:

Yeah, but the way I really see and understand the sandbox: if you really want to get this right, you need to go back to a use case lifecycle. So what is the sandbox equivalent of ideation? It's like a funnel of crazy ideas that we want to narrow down to good ideas and AI for good; maybe some need to be weeded out early at no cost, and then you take it one step further, and then there's full-on testing before deployment. I don't know; I fully see this as a funnel.

Anders Arpteg:

Sounds like we should have an AI chatbot that you can just operate with and ask is this a good use case or not?

Henrik Göthberg:

As the first sandbox from the first topic, why not? But then you get deeper into it. Really, I think it's a good idea. My example, and this is Vattenfall, right: getting a straight answer out of the lawyers without showing something is impossible, without putting a gun to their head. No, no, no.

Henrik Göthberg:

It has nothing to do with that. It has to do with how the structure is set up: they need to comment on something they can view. So if you can't define exactly what the system does and who it's for, how can I assess GDPR? I think that's fair, but it then demands an effort. I was like, okay, can I use a sandbox or something, so I can show you the app I'm thinking about, and you can then see what I was thinking about, and then you can say what's wrong with it, and then I can fix it? I think it's that simple, you know. We need to be able to do something that feels real with minimum effort, in order for the law to work. I think it's that simple.

Petra Dalunde:

Yes, it's not rocket science, no.

Henrik Göthberg:

But I remember in Vattenfall how some people said: oh, let's not talk to these guys, or let's talk to them last. No, let's start with these guys.

Anders Arpteg:

Flip it, use it as our innovation engine. I think, if we make a comparison between AI regulatory compliance and cybersecurity: in cybersecurity you have this concept called shifting left. Normally, when you want to apply security to some kind of software system that you have built, you traditionally do that last. You build the system, and then you think: oh, perhaps we should do some testing in terms of security as well. And then they do that once the system is built, and that's not a super good idea. The more you can shift it earlier, towards the ideation stage, shifting it left, so to speak, the easier and better it will be in terms of security. And shouldn't we apply the same when it comes to compliance in terms of AI regulation?

Henrik Göthberg:

This is how I do it on a daily basis. I use the term data governance shift left.

Anders Arpteg:

Shift left, we say data governance shift left.

Henrik Göthberg:

Computational governance shift left. We are using that terminology, which is software terminology, on a daily basis to mean data governance by design, or compliance by design. So shift left is the right way, Anders; you shone the light on exactly that. And that means you need to get it into the software development lifecycle. It's that simple.
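[Editor's note: a minimal sketch of what "compliance shift left" can look like in practice: a check that runs in CI on every change, long before deployment. The file layout, required fields, and risk categories are invented for illustration; nothing here is prescribed by the AI Act or by any tool discussed in the episode.]

```python
"""Fail the build early if an AI use case is missing its compliance metadata."""
import json
import sys
from pathlib import Path

REQUIRED_FIELDS = {        # hypothetical metadata every use case must declare
    "intended_purpose",    # what the system is for (drives risk classification)
    "risk_category",       # e.g. "minimal", "limited", "high"
    "data_sources",        # provenance of the training data
    "contains_personal_data",
}

def check_use_case(card_path: Path) -> list:
    """Return a list of problems with one use-case card, empty if it passes."""
    card = json.loads(card_path.read_text())
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS - card.keys()]
    if card.get("risk_category") == "high" and not card.get("conformity_plan"):
        problems.append("high-risk use case declared without a conformity_plan")
    return problems

if __name__ == "__main__":
    failures = {}
    for path in Path("use_cases").glob("*.json"):
        problems = check_use_case(path)
        if problems:
            failures[str(path)] = problems
    if failures:
        print(json.dumps(failures, indent=2))
        sys.exit(1)  # fail the CI pipeline: the question is asked up front
    print("all use-case cards pass the shift-left checks")
```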

Petra Dalunde:

It's the next t-shirt: shift left, shift left. Yeah, it's an old t-shirt, by the way, an old t-shirt in software. And it's the right way; it's by design, right?

Henrik Göthberg:

By design, you do it right. And if you then dream of what a sandbox is all about, and what a center like this is all about: the whole center is about shifting left. I say it like this: do it right, but ask the question when it should be asked, at the right moment in time. That's what shift left is all about. I love it.

Anders Arpteg:

Awesome. Let's hope that we get some more clarity around sandboxes, that they get a nice implementation, and that we can potentially be leading in Sweden in terms of having good sandboxes and TEFs to help AI companies.

Henrik Göthberg:

Actually, that is the main principle of what you guys are doing: shift left. It's actually a very good input, Anders, in terms of framing it.

Anders Arpteg:

Yeah, you as well, Henrik. But perhaps we should go a bit into the general purpose AI as well, and you phrased it very well, I think, Petra: when the AI Act started, I guess it started like 2020 or '21 or something.

Petra Dalunde:

Yes, and the TEFs were beginning to be designed in 2020, I think, together with the AI Act. The call was out in 2021, and everyone was applying in 2022. And at the end of 2022 you knew which consortia would go ahead, and we started in January 2023. November 2022, something happened.

Anders Arpteg:

Yes, something happened in November 2022. That was a big thing. Of course we're speaking about ChatGPT; that was released then, and I guess that made a big difference. And that's a problem with regulation: it is much slower than the speed of technological development that we're seeing. And if we were to just unpack the term general purpose AI: to some extent, of course, ChatGPT is much more general than the much more narrow AI type of application that you used to have. And then we get into these foundational models and frontier AI models, and whatever they call them these days. That changed things a bit, right? Or how would you say the whole ChatGPT moment changed things, connected to the AI Act?

Petra Dalunde:

From what I heard, six months into the project, summer 2023, the EU Commission was shocked and, yes, kind of in a panic. So they were asking the TEF coordinators at EU level: will you be able to test general purpose AI? Or, well, they called it large language models then.

Anders Arpteg:

Yes. And if we were to just think a bit more about why that is so. I think a lot of people realize that this will have a huge impact on society and will change so many things, and perhaps the whole thinking from when they designed the AI Act to start with will change a lot. I want to explain it myself, or get a bit further here. If we just take one example... I'm trying not to be too leading in my questions here, but...

Henrik Göthberg:

Maybe that's okay, Anders, because this is really tricky. So why don't you frame a stage, a context, and then we put the question in there and we all work on it. I think it's okay.

Anders Arpteg:

I think we can separate the development of AI into at least two, perhaps more, stages. One is the traditional one: you simply build an AI solution end-to-end yourself. You take the data, you train a model, and you put it in production. Bam.

Anders Arpteg:

These days, not so much. I mean, today we have other companies, like the big tech giants, primarily building these kinds of foundational models, LLMs to start with, now more multimodal foundational models, and they do not build the application, they build the foundation of it. That's why they're called foundational models, or general purpose models: they don't know, when they develop their models, what they will be used for. So then, if you have a regulation now, should it be the provider of the general purpose model that is accountable for everything it's used for? Or is it the other company, the one that starts using it through the APIs of OpenAI or whatnot, that is fully accountable for everything that happens? Or how should that accountability be divided? For me it's not obvious. That's at least one difficulty that I see with general purpose models. Would you agree with that description?

Petra Dalunde:

Yes, I agree with the description, but it's even more than that, and since RAGs, retrieval-augmented generation, are now coming on strong, we have another layer on top of that. But the way the AI Office and the EU Commission are trying to solve this is the code of practice that is being developed together with the actors on the market.

Anders Arpteg:

And that's very much happening right now, right?

Petra Dalunde:

Yes, it's happening right now, and they want the code of practice to be confirmed by the parliament in early spring, and they want it to be viable from May, because from the 2nd of May it should be used; that's when the legislation on limited risk enters into force, which covers general purpose AI, large language models and such. They have limited risk, risk category three.

Anders Arpteg:

Isn't that strange, to say that a general purpose model, without knowing what it's going to be used for, has a certain risk associated with it?

Petra Dalunde:

Yes, but then we have seen, and now, don't worry, remember that I'm not an engineer, so... Okay, my technicians are saying that you can use a large language model to sort data in a security system.

Anders Arpteg:

Sort data in a security system.

Petra Dalunde:

To classify data or prioritize data or do something. It's a large language model and it does something.

Anders Arpteg:

It categorizes, yes.

Petra Dalunde:

There you go. It's a high risk.

Anders Arpteg:

Yes.

Petra Dalunde:

Yeah, so there are examples already being talked about where we really don't know.

Henrik Göthberg:

So you gave an example where clearly the domain, the application, is all about high security, and there are many things going on. But one portion of it is that we're using a large language model to do a subtask at some point, and that subtask is like sorting, whatever. But as soon as you've done that, you have sort of dragged the whole thing into an Article 10 type system, potentially.

Anders Arpteg:

I remember the early days of the first versions of the AI Act, where they had classified chatbots as a thing, you know, as a technique.

Anders Arpteg:

I would call it that. They may have called it a use case, but I think it's a technique, and they said, I think, that it was limited risk at that point. But to say that chatbots in general are limited risk is really weird. If you, for example, take children that have some kind of mental disorder and use a chatbot as a therapist, I mean, that's as far away from limited risk as you can come.

Petra Dalunde:

It's a high risk. It's a high risk.

Anders Arpteg:

So I mean, that's what Henrik and I have been speaking about so many times. You can't take a technique and say this has a certain risk.

Petra Dalunde:

You have to always speak about the application. Yes, it's the application that tells you.

Anders Arpteg:

And I think, you know, so many people are saying that the AI Act is use case based. I don't think it is. I think it's very much technique based.

Henrik Göthberg:

Still, they try to make it more use case based, but I think it still is very much technique based, I would argue. The tricky point is that they're aiming for use cases, and when we're explaining it we talk use case by use case; but when we as engineers read it: you are not referring to the use case, my friend, you're referring to the technique. You're muddying the waters here. It's not really stringent. Even like remote biometric identification.

Anders Arpteg:

That's not an application. It's a technique that is specifically classified as a risk, independently of what it's used for. I think it's wrong.

Henrik Göthberg:

It becomes really, really hard to follow.

Petra Dalunde:

Yes, but okay. So in May we will have the code of practice, I hope, and then in August we will hopefully also have a process for the evaluation of these general purpose models, which has to be used before putting a general purpose model on the market. So perhaps that will also be use case based, case by case. I don't know how this evaluation process will look, but they are developing it.

Anders Arpteg:

It will be case by case, so I don't know how this evaluation process will look, but they are developing it. How can you develop a general purpose model without I mean it is general purpose because it's not use case based right?

Petra Dalunde:

It can be used for so many different things, so you have to evaluate it from the application.

Anders Arpteg:

But you don't know the application in advance.

Henrik Göthberg:

Yeah, but I think I follow you, Petra, here. Because you can imagine that there's a practice now that sort of articulates the process — shift left or whatever — how you're supposed to do this, and this is put in relation not to Microsoft but to someone using the large language model; then it becomes one practice. And if you go straight for the large language model sold as an API, that has to be evaluated from a completely different perspective. So one thing is how this applies to Microsoft for what they are doing, and then I think it's the other way around when someone is using the language model as part of their product or their use case. Did I get you right? Because clearly Microsoft will be subject to this at their broad level, and then we have the people using it.

Petra Dalunde:

Yes — but still, I think Anders is onto something, and here is also where the RAGs come in.

Anders Arpteg:

I mean, it's easy when it comes to the use case: the companies that use the APIs or the GPAI model, we know they have to comply with it, no question about it. But the problem is the other thing — the providers of the GPAI models. What should they do? Should they enforce in some way that you have registered the use case, so that they have control of it? And what happens then with open source and a Meta approach, where people are able to just use it without the provider having any kind of control over it? Will open source be banned for general purpose AI? That is, I think, a potential consequence here.

Petra Dalunde:

And not only there. The open source idea has been so beautiful for quite some time now — we have all strived to get away from all these closed end-to-end services or products that we cannot get data from, or co-create with, and now we are heading back there.

Anders Arpteg:

I wish we could answer it, but there is work going on. And I hope we get some more clarity on this, because there are so many open questions, and we have the AI available today — we need the answers today. We shouldn't really have to wait until May next year, but I hope we can have some more clarity as soon as possible. It's time for AI News, brought to you by the AIAW Podcast. Cool. Time for a short middle break of the podcast, where we step away from the testing and evaluation of AI models briefly — or perhaps not, we'll see — to speak a bit about some of the latest news. I think we'll all speak about the same thing, but we'll see if we have something else.

Henrik Göthberg:

What do you mean — we should not call this AI news, we should call it Nobel Prize week? Is that what you mean?

Anders Arpteg:

It is. What else can really beat that kind of news? Nothing, I guess. Anyone that wants to get started on this?

Henrik Göthberg:

Do you have an angle on this?

Petra Dalunde:

No, I don't have an angle on it, more than I think it's interesting that the committees have had to squeeze so much to get the prizes out to AI. It's a bit kind of squeezy.

Henrik Göthberg:

It's extreme. Okay, elaborate, elaborate on that.

Petra Dalunde:

So the physics one is kind of physical-ish.

Anders Arpteg:

Is it?

Petra Dalunde:

Well, I'm not an engineer so I don't really know the details, but that's my understanding. Yeah, so for me the take is: do the committees need to evolve the prizes?

Henrik Göthberg:

Yeah, exactly. Ooh, do we need other types of categories?

Petra Dalunde:

Categories I don't know.

Anders Arpteg:

Let me perhaps give you a background.

Henrik Göthberg:

Yeah, I like your frame.

Anders Arpteg:

So, to start with, two Nobel Prizes this year, 2024, have been awarded to, let's call it, pioneers of AI. The first one, in physics, was in some weird way awarded to John Hopfield and Geoffrey Hinton, primarily. The question then is how you can connect what Hinton and Hopfield did to physics. Just to give a brief background: Hopfield invented this kind of network called Hopfield networks. It's a simple kind of recurrent network — not the type of neural network we work with today. I can go more into depth if you want to know how that works.

Henrik Göthberg:

Could you please do the whole setup properly now, Anders, because I don't think many people know exactly what part of their work they were awarded for. The Nobel Prize highlights an exact piece of work, not a distinguished career. What is it that we're giving the Nobel Prize for?

Anders Arpteg:

Let me do a quick overview and then I'm happy to go more into depth on what it means. In short, Hopfield got it for his Hopfield networks. And then Geoffrey Hinton — one of the godfathers of deep learning, who worked at Google on AI for a long time and quit for various reasons — he invented something called Boltzmann machines, and that is what they got the prize for.

Henrik Göthberg:

So the Hopfield network and the Boltzmann machine are sort of at the roots of how we understood how to build neural networks.

Anders Arpteg:

It was one of the first neural networks, but it's not the base of the networks we have today. The interesting thing is that Hinton did something else which is much more profound than you'd think — he thinks so as well — because he worked on the backpropagation algorithm, which is still in use today. All of the big neural networks, even the ChatGPTs, are using backpropagation. During the Nobel Prize press conference, Hinton got a question from a journalist asking how his invention has influenced the AI of today, and he said: ah, not that much, those networks aren't used anymore. But backpropagation, which I was part of inventing, is still in use today — that's much more impactful.
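
For readers who want the algorithm rather than the history, a minimal backpropagation step in numpy — a sketch of the idea, not Hinton's original formulation. The network sizes, data and learning rate are arbitrary: forward pass, error, then gradients pushed backwards through the layers with the chain rule.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=(4, 3))    # 4 samples, 3 features (toy data)
    y = rng.normal(size=(4, 1))    # regression targets
    W1 = rng.normal(size=(3, 5))   # layer 1 weights
    W2 = rng.normal(size=(5, 1))   # layer 2 weights

    h = np.tanh(x @ W1)            # forward pass through hidden layer
    y_hat = h @ W2                 # network output
    err = y_hat - y                # dLoss/dy_hat for a 0.5 * MSE loss

    dW2 = h.T @ err                # chain rule: gradient for layer 2
    dh = err @ W2.T * (1 - h**2)   # propagate error back through tanh
    dW1 = x.T @ dh                 # chain rule: gradient for layer 1

    lr = 0.01                      # gradient descent update
    W1 -= lr * dW1
    W2 -= lr * dW2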

Henrik Göthberg:

So, actually, he would rather have gotten it for the other work.

Anders Arpteg:

I think he should have.

Henrik Göthberg:

The problem then is…

Anders Arpteg:

How can you connect the backpropagation invention to physics? He doesn't have a physics background; he's a psychologist and computer scientist. But Hopfield — he is a professor in physics. There is a clear connection to physics when it comes to John Hopfield.

Henrik Göthberg:

What is Hopfield's thingy?

Anders Arpteg:

A Hopfield network is a type of network where the neurons take only the values one and zero, and it's basically an associative memory. I can go more into depth if you want.
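
As a rough illustration of that associative-memory idea, a toy Hopfield network in numpy. The classic formulation uses +1/-1 units rather than ones and zeros, and the stored patterns here are invented:

    import numpy as np

    # Store two patterns in a Hebbian weight matrix, then recover one
    # of them from a corrupted cue — associative memory in a few lines.
    patterns = np.array([[1, -1, 1, -1, 1, -1],
                         [1, 1, 1, -1, -1, -1]])
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0)                    # no self-connections

    def recall(state, steps=10):
        s = state.copy()
        for _ in range(steps):                # synchronous updates, for brevity
            s = np.where(W @ s >= 0, 1, -1)
        return s

    noisy = np.array([1, -1, -1, -1, 1, -1])  # pattern 0 with one flipped bit
    print(recall(noisy))                      # converges back to pattern 0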

Henrik Göthberg:

Who came first — Hopfield? Hopfield is earlier than Hinton, right?

Anders Arpteg:

So Hinton built on Hopfield's work. The Boltzmann machine is like a specialization of the Hopfield network, where you have visible neurons and hidden neurons, and you can basically make it learn patterns from what it's exposed to, in an unsupervised way. But it's not using gradient descent in the same way; it's not using backpropagation, which everything is using today. It didn't have a clear error function like we have today, and things like that. So it was a different thing, and to me it's kind of similar to how Einstein got the Nobel Prize for the photoelectric effect but not for the theory of relativity.

That story is repeating itself a bit here, I think — and he even said it himself during the press conference. That hurts my stomach a bit.
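
For the curious, the Boltzmann machine idea in miniature: an energy function E(v, h) = -a·v - b·h - v·W·h over visible units v and hidden units h, where learning makes observed data low-energy — no explicit error function, no backpropagation, which is exactly the contrast drawn above. A sketch of the restricted variant with invented sizes, not Hinton's exact formulation:

    import numpy as np

    rng = np.random.default_rng(1)
    n_v, n_h = 6, 3                          # visible and hidden unit counts
    W = rng.normal(scale=0.1, size=(n_v, n_h))
    a, b = np.zeros(n_v), np.zeros(n_h)      # biases

    def energy(v, h):
        # Low energy = configurations the model considers likely.
        return -(a @ v) - (b @ h) - v @ W @ h

    def sample_hidden(v):
        # One Gibbs-sampling step: hidden units fire stochastically.
        p = 1 / (1 + np.exp(-(b + v @ W)))
        return (rng.random(n_h) < p).astype(float)

    v = np.array([1, 0, 1, 0, 1, 0], dtype=float)
    h = sample_hidden(v)
    print(energy(v, h))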

Anders Arpteg:

And the second thing, of course: yesterday we also heard that Demis Hassabis, with some other people, was awarded the Nobel Prize in chemistry for AlphaFold. Partly for the amazing thing that we're able to predict the 3D structure of proteins. Normally it takes like four or five years — like a PhD — to come up with a single protein's 3D structure. AlphaFold, and especially the AlphaFold 2 they had, could predict the 3D structure of 200 million proteins, and they did it and put them out freely available for any researchers to work with. So there are so many people now in biology and other fields building on this.

It's amazing and he truly deserves a Nobel Prize. But should it be in chemistry? Should it be biology? I don't know.

Henrik Göthberg:

So here we have it. The chemistry prize really goes to the research lab of DeepMind.

Anders Arpteg:

Yes, but DeepMind is owned by.

Henrik Göthberg:

Google. But DeepMind is owned by Google.

Anders Arpteg:

And Hinton comes from Google as well, so it's a big Google win.

Henrik Göthberg:

Yes, it's a big Google win. But okay, Hinton was never at DeepMind. Was he at Google when he did the work?

Anders Arpteg:

No, I think he was in Toronto. He was in Toronto. I mean, he is part of the AI mafia of Canada.

Henrik Göthberg:

Yeah, the real Canadian mafia.

Anders Arpteg:

With Yann LeCun and Yoshua Bengio.

Henrik Göthberg:

But the distinction: AlphaFold, that was DeepMind, period. Yes, that is purely out of DeepMind. And Demis Hassabis, you know, he was part of it as well.

Anders Arpteg:

So he is a big part of it, so he truly deserves a Nobel Prize for that. That really has impacted science and healthcare in so many ways.

Henrik Göthberg:

Yeah, and of course the ideas there also helped us with COVID, you know, to develop vaccines faster and stuff like that. Maybe not one-to-one, but we learned from it — yeah, maybe not directly. But then, of course: what was the last thing they released this year, the one that almost got missed?

Anders Arpteg:

Oh yeah, the same lab.

Henrik Göthberg:

AlphaFold 3 — or, you know, what the hell was that all about? It's huge, but it got lost in the drama of ChatGPT and large language models. It is a bit too much AI news. Because they've been moving through AlphaFold 1 and 2, and then they had a quite big release — in May, I think. They took the idea of what they were doing with proteins and applied it to… was it viruses or bacteria?

Anders Arpteg:

I think the interaction of molecules in general. It moves up to trying to understand how proteins interact with, you know, other large molecules in different ways. That way you can really simulate how these newly predicted 3D protein structures will actually interact with the molecules you're using them with. So the whole field of research and drug discovery can now use it for more than just the 3D structure of proteins, as I see it — but I'm not an expert here.

Henrik Göthberg:

But can we take it back now to Petra Dalunde's slightly pointed introduction to this? Do we need more? Does this really fit into the categories? Do we need new categories?

Petra Dalunde:

Are they outdated? Yeah?

Henrik Göthberg:

Because when you framed it, you framed it with a little bit of tongue in cheek there. Could you elaborate on that? How do you think about that?

Petra Dalunde:

They need to squeeze now. And how do the Nobel Prize categories need to be defined in order to really honor the scientists of today, with today's, well, science?

Henrik Göthberg:

What is relevant or what is most impactful today.

Anders Arpteg:

There should be a computer science or data science kind of category, because it's so impactful for society. And AI will perhaps have more impact than any other science in the future, I would argue.

Henrik Göthberg:

But if we're trying to be AI pure — what do you call it — which Nobel Prize is the best one for an AI scientist to get? Should it be math, or something else? It doesn't really belong in physics, or does it, you know?

Anders Arpteg:

I mean, you can claim that. Some people say AI is just statistics, or even that AI is just math. Obviously you can say that anything can be traced back to math originally. But to say that AI is physics, or AI is statistics — it's strange. It's not an accurate description.

Henrik Göthberg:

It actually makes more sense to me that they got it because they used AI computational approaches to do something amazing with proteins.

Anders Arpteg:

To me, the chemistry case is actually not that strange.

Henrik Göthberg:

Shouldn't it be biology more than chemistry? That one I have no comment on. Maybe, yeah, maybe. I did chemistry in school, you see, and you have physical and organic chemistry and all these proteins and all that. Where do you really do that kind of work? Do you do that in chemistry?

Anders Arpteg:

It's much closer — much easier to swallow, so to speak, the connection between chemistry and AlphaFold.

Henrik Göthberg:

Do we have a Nobel Prize in biology?

Anders Arpteg:

I don't think so.

Henrik Göthberg:

No. So then what's the closest? Chemistry? I don't think we have a Nobel Prize in biology, do we? I don't think we do.

Anders Arpteg:

There's also literature, which you were talking about before, Anders. When will ChatGPT get the Nobel Prize? Wouldn't it be super funny if the Nobel Prize in literature went to ChatGPT?

Petra Dalunde:

That would be super funny. Or if someone actually managed to get ChatGPT to write fantastic new Shakespeare plays, why not?

Anders Arpteg:

It's just a question of time, I guess.

Henrik Göthberg:

Did you see the joke? Sam Altman for the Nobel Peace Prize? That was just a joke — someone made a copy that looked really official.

Anders Arpteg:

Oh yeah? Okay, the Peace Prize.

Henrik Göthberg:

The Peace Prize as well. Yeah, but back to it. I have another angle that sort of hit me, which is the same one: oh, I would have wanted to be a fly on the wall in that discussion. How the hell did they decide? You can imagine the conversation they must have had, the arguments — it would have been so much fun.

Anders Arpteg:

I mean, the Royal Academy has amazing scientists sitting in the Nobel committee, one professor after another that are super good. But I listened to the press conference, and when they tried to describe what the Boltzmann machine was, it was so awkward — they didn't know what they were speaking about. And then some journalists tried to ask questions and they couldn't answer. You can't have a prize where the representatives don't even understand what they are speaking about.

Goran Cvetanovski:

So there are six categories in general: physics, chemistry, physiology or medicine, literature, the Nobel Peace Prize and then economics. Economics, yes — the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel, or something. So, economics.

Anders Arpteg:

So there is no biology in it. Psychology would be better, perhaps, for Hinton.

Goran Cvetanovski:

No, but isn't physics a little bit closer? No, it's not that.

Anders Arpteg:

Hinton is a psychologist by trade — an interesting education.

Henrik Göthberg:

No, but if you look at the trade and ask who is the most famous psychologist: even if Hinton is doing work on AI, he's probably the most famous psychologist there is. So you could argue.

Anders Arpteg:

You can argue. In any case, I'm super happy about this. I think it's cool that even the Nobel Committee is recognizing the progress and the impact that AI has on our society today.

Henrik Göthberg:

I guess that's the reason for them choosing AI. Which is back to the joke about wanting to be a fly on the wall in that discussion. What do you think the reason is? It's clearly so impactful in society that they cannot ignore it, and now they need to try to fit it into the categories.

Petra Dalunde:

But not only in society, but in science.

Henrik Göthberg:

In science. Of course it's impacting all sciences.

Anders Arpteg:

Yes, okay, big news, and we'll see if it will have as many interesting follow-up discussions as some other prizes — I won't name them. There have been some controversial Nobel Prizes in the past as well. We'll see if this one ends up in that category, but I don't think it's that controversial.

Henrik Göthberg:

I mean, the controversy is about squeezing it into the categories, but I don't think anyone would say it's undeserved. I'm not talking about deserving, or whether I could figure out something better. It's more about the problem of squeezing it into the categories.

Anders Arpteg:

There is so much other AI news to speak about — the Dev Days of OpenAI, et cetera — but everything is overshadowed by this.

Henrik Göthberg:

Should we just leave it there? Yeah, I think we do, because this became a Nobel Prize commentary. We can have another clickbait on that part.

Anders Arpteg:

Okay, going back a bit to Petra and what you do at RISE, and the TEFs and the sandboxes as well. Just to understand a bit more concretely: do you have some use case, some example, some project, some company that you work with that you can speak about, just to give an example of how you collaborate with different organizations?

Petra Dalunde:

Yes — I won't name any names yet. No, not today. We have an SME that wants to become a subcontractor in the TEF, and the SME wants RISE to co-create the service that is going to be offered, and they want this to be done with their customer — another SME that actually needs this service.

Anders Arpteg:

So you don't experiment with dry swimming — you do it with their real customer. And is there a connection between the two? Could that be some kind of unfair advantage they would get?

Petra Dalunde:

No. Any SME can become the customer if they want the service, and they can also be subsidized, so there is nothing unfair.

Anders Arpteg:

And anyone can be a subcontractor if they have something to offer. But what if they, in turn, have another customer that they are connected to, and have some kind of economic dependency on, or gain from?

Petra Dalunde:

I mean, if an SME comes in and becomes a subcontractor, they can bring their customers to the TEF and we can subsidize them. But we won't be able to subsidize an endless number of SMEs, so they will not get everything. They will have a couple of subsidized customers, and then they will be able to sell their service to paying customers — verified by RISE.

Anders Arpteg:

I'm just thinking. Let's imagine I create a company and become a subcontractor of the TEF, and I can sell my services to other companies, and I get payback or some kind of feedback from them saying: we just want some kind of certificate from an official subcontractor of the TEF to show our investors that we are compliant, because a subcontractor says so. That could cause some kind of bias, potentially. Do you see that kind of risk?

Petra Dalunde:

No, the verification will come from RISE. So the service can be performed by the subcontractor, but the certificate — well, we are not there yet. We don't have any certificates yet, and we don't even have verification attestations (intyg) yet, but we are working to create that, because it has been asked for.

Anders Arpteg:

Yeah, and I think that would be super useful as well.

Goran Cvetanovski:

But, I think that's a good thing, because the subcontractor then can't really do that.

Anders Arpteg:

They simply help them to later become certified.

Petra Dalunde:

Exactly.

Anders Arpteg:

Okay.

Henrik Göthberg:

But what I take out of this is that you have real cases to work on, with real substance — it's not theorizing. Even to the point where maybe you are highlighting segregation-of-duties challenges that we don't know about until we try it out. So going down this path and doing it for real, you can then take a step back: there is something fishy here; yes, they can be a service provider, but we need to have Chinese walls here and here and here, and stuff like that. But could you elaborate a little, without going into details: what is that process all about? How did it start? What are you doing, more concretely?

Petra Dalunde:

That is a good question. Okay, so we don't subsidize 100%, we only subsidize 90%.

Anders Arpteg:

That much? As much as 90%?

Petra Dalunde:

Yes, 90% this year. It will change.

Henrik Göthberg:

But subsidy in relation to what? That becomes really interesting to understand.

Petra Dalunde:

Okay, so we subsidize 90% of the price of the service that we deliver to the SME. So if we are performing it, we report the time in the project.
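
To make the arithmetic concrete with invented numbers — the actual hours, rates and prices are the TEF's, not these:

    hours, rate = 100, 1_200       # hypothetical project hours and SEK/hour
    price = hours * rate           # 120_000 SEK full price of the service
    subsidy = 0.90 * price         # 108_000 SEK covered by the TEF this year
    sme_pays = price - subsidy     # 12_000 SEK invoiced to the SME
    print(int(sme_pays))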

Anders Arpteg:

So is it mainly personnel cost? Do you add some kind of margin on top of that as well?

Petra Dalunde:

We have to, price-wise — we cannot dump prices on the market. Right, no, no.

Henrik Göthberg:

But what is the pricing model now, when it's all new? I guess it's almost like time and materials — you're framing a project together, and if we do this project, we will get to a certification one way or another with you, or we get a report.

Petra Dalunde:

Exactly. Certification will be a service in itself when that time comes.

Henrik Göthberg:

So now, literally, what is the concrete topic? They want to have a report to understand — you know, what are the deliverables?

Petra Dalunde:

Yeah, it depends on the service. Different deliverables on different services.

Henrik Göthberg:

But in this example.

Petra Dalunde:

In this example it's too technical to go into. But it will definitely be a report in the beginning, because we will need that ourselves — and eventually a competence and a fix at the SME that they will be able to use.

Henrik Göthberg:

So there will be some recommendations or something like that: yes, you're good here, here and here, but you have these fixes — you need to fix data quality, you need to do this. Something like that?

Petra Dalunde:

Exactly. And usually the beautiful thing in developing something new that isn't on the market yet is to do it with a real customer. And it's this three-point collaboration that I think is interesting. I've never experienced it before, and I think it's innovative and flexible, and I hope it goes well. And it's the first.

Anders Arpteg:

I'm still looking forward — we're going to have Anton Osika here. He built the GPT Engineer repo, which basically tries to help software engineers build whatever kind of web application or game, with AI doing a lot of the work together with you. It becomes super efficient, and easier for people that don't have the skills to build applications. I'm still longing for a GPT Tester — like GPT Engineer, but something that makes it super efficient to really validate and test your application. Wouldn't that be awesome? Yes, it would. I think whoever makes it will make a lot of money as well.

Petra Dalunde:

Yes, me too, but also the human in the loop.

Anders Arpteg:

AI testing AI — yes, human in the loop, but still easier to do, right? Definitely.

Henrik Göthberg:

Yeah, to even imagine the whole "bilprovning" — the vehicle inspection, the whole facility — being very much agent-oriented or chat-oriented. It's an appealing thought, actually. Someone who can guide you through the process.

Anders Arpteg:

Awesome. And the time is flying away here a bit, and we're trying not to do too long a podcast — we usually fail. But if we were to move into a bit more futuristic thinking: today we have a certain way of doing testing and evaluation of AI models. If you were to speculate a bit about the future, thinking three to five years ahead, what do you think the testing of AI models will look like at that time?

Petra Dalunde:

Yeah, it depends on how the actors on the market will behave, which way they choose to go, I think. Because if they go Henrik's way — we do it by design, compliance by design — I think it will be easier. But it also depends on whether we have new jumps in technical development, so it's really hard to predict.

Anders Arpteg:

I guess another question can be: right now we're seeing a bit of a divide, where the American tech giants are not releasing models in Europe because they are uncertain about the ramifications of the current regulations. If that were to continue, that would be a strange future. So one speculation here: is Europe going to become more segregated and disconnected from other areas of the world, or is it going to come closer? I'm certainly hoping that it's not going to be the case. But it could be a consequence. And if we soon get some case law that puts a big fine on Meta or someone for putting up an open source repo that is being abused — let's say Llama 3.2 is abused for building a biological weapon or a new coronavirus or whatever, and they are held responsible for that and the EU forces them to pay for it — I think that could have huge ramifications. That would be very dangerous.

Henrik Göthberg:

Let me give my two cents on that question, because you dragged me into this when you said "if they go my way" — and I think I understand what you're saying. I can see already now, even without the AI Act, how different companies are trying to deal with fundamental data governance or AI governance, and I see two very distinct camps in the industry right now, even among my colleagues and consultants. Some people — and I clearly belong to one camp here, so I'm biased, I'm subjective — think this can only be solved through computational governance, and ultimately computational governance by design: shift left, what Anders was referring to as well.

Henrik Göthberg:

It then means you need to get these topics embedded in what is referred to as the software development lifecycle, or the AI development lifecycle, from an engineering perspective, so it's simply part of the definition of what we do around here. You can't be an engineer without understanding the compliance dimensions of this. Now, what I see is that only a very small part of the market is this mature, both from a consultant point of view and from a practitioner point of view. But I have the fortune to work with consultants who know how to build this: you can enforce data governance in code. For them, for us, in this context, this is no different.

Henrik Göthberg:

It's just adding policy layers that we haven't thought about before; the fundamental mechanism we have solved. If you go to the other camp, you're still more on the pure policy side: you haven't started to understand the prerequisites for translating policy to code, or figured out how to make engineering out of policy. So if you look at what the future of an AI test bed and all that could be in five years, it should be the first camp. But if the market is not there, you will be stuck in PowerPoint land and Word document land. It's going to be very, very different. I don't know.
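
As a minimal sketch of what enforcing data governance in code could look like — executable policy checks gating a pipeline, instead of rules living in a Word document. The rules, column names and threshold are invented for illustration:

    from dataclasses import dataclass
    from typing import Callable
    import pandas as pd

    @dataclass
    class Policy:
        name: str
        check: Callable[[pd.DataFrame], bool]

    policies = [
        Policy("no missing labels",
               lambda df: df["label"].notna().all()),
        Policy("known provenance",
               lambda df: df["source"].isin({"sensor", "survey"}).all()),
        Policy("fresh data",
               lambda df: (pd.Timestamp.now() - df["collected_at"]).max()
                          <= pd.Timedelta(days=365)),
    ]

    def enforce(df: pd.DataFrame) -> None:
        failures = [p.name for p in policies if not p.check(df)]
        if failures:
            # Shift left: fail the pipeline now, don't ship and audit later.
            raise ValueError(f"Policy violations: {failures}")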

Anders Arpteg:

I think yes and I'm hoping it will go that way and perhaps we'll have a new type of job role in the future. I'm thinking like a governance engineer.

Petra Dalunde:

Yeah, that's a beautiful title.

Henrik Göthberg:

Right. No, but this is super important, Anders, because we highlighted a role a couple of years back: the data steward. If you read data governance and data quality documents, there is a data steward, and I installed one of the first data stewards at Vattenfall in 2016 or '17, Luz Regner. He was basically already then heading towards what we are going for now, in the sense that he understood the master data. He was the kind of guy who knows the row-level names in the SAP tables, so he really knew the data inside out. But he was in the business too, so he could understand policy and access and different stuff like that, and we used him as a quality assurance person right down in the engineering team. So he was very much riding the policy, but he was a unicorn in that he could work with the engineering team: have you enforced policy?

Anders Arpteg:

I'm thinking a bit different.

Petra Dalunde:

I want to elaborate on what you said about this biochemical weapon thing. There are two things to this. The AI Act doesn't apply to research; it's when you put it on the market that you can get sued over it.

Anders Arpteg:

So if you don't put it on the market in Europe… But that's kind of a sad conclusion, because obviously all bad actors ignore the regulation anyway. And if you use it for military purposes, which is really scary these days — of course that would be horrible, and the AI Act has nothing to do with that anyway. We can go into the whole bad actor situation, but it's such a sad topic to go into.

Because in some sense, bad actors — cybersecurity ransomware actors or whatnot — couldn't care less about this.

Petra Dalunde:

But it is the principle that is interesting. Here you have this general purpose AI. If someone uses it to do bad stuff, who is responsible for the consequences?

Anders Arpteg:

Not super clear. And the question is: if a company like Meta, which at least is trying to be open about it, is considered accountable for this, they will have to stop the whole thing. That would be a sad situation.

Henrik Göthberg:

But we have this joke, right? Well, not a joke. Someone is selling a hammer, someone is designing a hammer — in its simplest form, are we trying to make them accountable for how someone uses the hammer? Are you trying to make Canon accountable for child pornography if I use their camera for it? A very stupid but extreme example of the tool versus what it is used for.

Petra Dalunde:

And this could be super clear in a pure product legislation, product regulation thingy. But the AI Act is a mix of product legislation and safety legislation.

Henrik Göthberg:

Yeah, but maybe this is the problem. Maybe this is where me and Anders, and some who are real experts in this, go: oh, you're mixing up the beans.

Petra Dalunde:

They are kind of mixed in the AI Act and that is problematic.

Henrik Göthberg:

Maybe that's the problematic point: sometimes it's better to modularize it and have sort of APIs between the different parts. Because now, when it's a mixed can of beans…

Petra Dalunde:

Maybe you should volunteer to develop the AI Act.

Henrik Göthberg:

I have offered my services many times to several people, and I have tried to highlight that: put me to work and I will work on these topics. And Salla Franzén, by the way, is one of our friends — she says this too. She was chief data scientist at SEB before. You know Salla, maybe?

Petra Dalunde:

I know Salla.

Henrik Göthberg:

We need to take responsibility as real experts here, because it's not fair on the other side either. But I don't think that is working, and I don't think it's the experts' fault. To be honest — and I'm not talking about you — I'm talking about how the system is set up. The questions don't come to the right people. I'm sorry. We have the AI Commission now, and I have my objections.

Petra Dalunde:

You mean AI office?

Henrik Göthberg:

No, the AI Commission.

Petra Dalunde:

So the AI commission and the AI board.

Henrik Göthberg:

I'm talking about the commission that was set up, by the way, with Svanberg leading it. Okay.

So when I'm looking at that profile, it's not that I have any problem with who's in there; I'm just looking at the blind spots. Which angles are not in there? And I think that happens over and over again when we talk about the regulation. But of course, who am I to say, right? I just know what kind of conversations we've had here over 130 podcasts. Some of them are quite good. Some of them are quite sharp.

Petra Dalunde:

And we who try to be in this context need to be open. We have to have large ears, and also use use cases to understand implications and effects and consequences. But as we said, this is a cross-disciplinary problem. Yes, and at least two disciplines are crossing — probably more.

Henrik Göthberg:

Probably more. Okay: legislation, engineering, ethics, philosophy. Definitely.

Anders Arpteg:

Not easy. You are in a very interesting field, I think. The field of how we actually should test, regulate and evaluate AI models will be one of the hottest topics in the coming years.

Henrik Göthberg:

Yeah, you're in the right spot.

Anders Arpteg:

So not to you know, put you on the spot, but you have important work ahead.

Henrik Göthberg:

Yeah, but Petra, you are the right person. We don't know each other, but simply from discussing — the openness, and what I love when we talk: "I'm not an engineer". You're not trying to be something you're not. You're rather trying to co-create with someone, knowing what you are strong at and what others are strong at. That's the mindset we really need. So you're the right person.

Anders Arpteg:

You're in the right spot. And I'd just like to reiterate: none of us — certainly not me — is saying we shouldn't have regulation. We need regulation in place. We need testing in place. The way it's working today is not satisfactory in any sense, and we need much better ways to do it. I think one of the biggest problems we have today is the uncertainty of the regulation. The regulation actually has awesome intent, but we need to fix the uncertainty. And by putting better standards in place, by having these kinds of tests and the sandboxes to make it really concrete — I think that's the best way forward. So I hope more people invest in this.

Henrik Göthberg:

I'm really wishing you the best of luck in continuing this work. And my two cents to add: our conversation here clearly shows that this is a truly cross-disciplinary effort. We need to be humble about that, and we need to staff accordingly. It cannot be solved in any other way.

Anders Arpteg:

Petra, let me ask you a simple question. I'm not going to ask you a simple question.

I'm going to ask you a really complicated question.

Anders Arpteg:

Imagine we will have AGI at some point — one year, five years, 50 years from now, or never. But assume it will come, whatever the time frame. At that time we will have AI that is at least as good as an average co-worker — perhaps even a superintelligence, better than everyone — but at least as good as an average colleague that you have. What do you think that future will look like? You can think about two extremes; it will probably land somewhere on the spectrum. One extreme is the dystopian nightmare of the Terminator and the Matrix, machines trying to kill us all, the paperclip scenario and so many other horrible things that could happen.

Anders Arpteg:

Or it could be the other extreme, the utopian future, where AI has cured cancer and fixed fusion energy — free for everyone to use as much as they want — and we have services and goods that are more or less free. A world of abundance, as a lot of people call it, where you're free to pursue your passion and creativity as you see fit, and you may not have to work 40 hours a week at a job you may not like. Where do you see the future landing on this spectrum? More towards the dystopian or more towards the utopian?

Petra Dalunde:

I'm more towards the utopian future, but I don't know if we would be happy in a perfectly utopian society. I think we humans need some resistance and problems. We need to be needed.

Anders Arpteg:

I think we will always have stupid humans, right?

Petra Dalunde:

Yeah, let's hope so.

Anders Arpteg:

Exactly.

Henrik Göthberg:

Let's hope stupid humans continue to exist. I want to test Sverker Jansson's view on you — he was here last week and he actually made an interesting comment: we will have both. He was the first one to say it like that. Look at society today: we have poverty and we have richness, we have fairness and we have unfairness. What do you think about his comment that there's going to be a bit of both?

Petra Dalunde:

Because no one said it like that before him. It's very interesting. If it's possible, we probably will.

Anders Arpteg:

One way I usually phrase it is a bit like this: I'm rather happy if we just make it to the time we actually have AGI, but I'm really scared until we do. And with the world as we see it today, it's a bit scary if we have humans in control who have bad intentions, empowered by the super powerful AI that we have today and will have in the coming years. That's what I'm really scared about. But what do you think about that? Are you afraid of the current situation in the world?

Petra Dalunde:

Yes, who isn't? It's really scary.

Henrik Göthberg:

Because your way of framing it is not really a spectrum on AI itself — it's a reflection on how AI mirrors humanity, and how we control and steer it, and on the actors and the leadership.

Petra Dalunde:

Yes — but the consequences, the risk, if the wrong people come into power and can use this technology to do their deeds.

Henrik Göthberg:

And how should we think about that? Should we then be very scared of innovation? That sounds crazy.

Anders Arpteg:

But why don't more people speak about that specifically? I mean, thinking about the AI Act: good actors will try to be compliant, but it doesn't solve this problem at all.

Petra Dalunde:

It does not solve the problem. Yes, that is true.

Anders Arpteg:

So, yeah, I'm hoping for a super happy future, and I'm really happy that we now have recognition from the Nobel Committee of the power of AI. So I'm positive, at least. And I'm super happy to be part of promoting and marketing such an important community service.

Henrik Göthberg:

Yes, one that is being developed — and I want people to go here first. Then they can use lawyers, then they can use consultants, but go here first, that's my take. Yeah, Petra Dalunde, I'm super happy that you are doing what you're doing.

Anders Arpteg:

I'm wishing you the very best luck. I will support you the best I can. Thank you so much for coming to this AI After Work podcast.

Petra Dalunde:

Thank you, anders, thank you, thank you.
