AIAW Podcast

E171 - Banking On AI: From POC to 10 000 Users - Mattias Fras and Olof Månsson

Hyperight Season 11 Episode 12

Follow us on youtube: https://www.youtube.com/@aiawpodcast

Henrik Göthberg:

Of a new release. Let's talk about that. I felt very honored, because it means that I'm more important than all of Nordea.

Olof Månsson:

Totally, totally. I mean, I would not miss this for the world.

Henrik Göthberg:

Perfect.

Henrik Göthberg:

But tell us a little bit, what are you putting in production right now?

Olof Månsson:

So the key thing, I think, is we're making a rather big change to our internal general chatbot that everyone has access to. And I think it's going out to, what is it now, Mattias? Remind me, 5,000 users?

Mattias Fras:

Yeah. We started with 3,000 and now we're moving out to 10,000 in the next phase. So three, to five, to ten thousand.

Henrik Göthberg:

And which release is this?

Olof Månsson:

So we're trying to make the system a bit smarter and adding more Nordea knowledge to it. That's what people have been requesting since day one, basically. And they always want more of that.

Henrik Göthberg:

Oh yeah. And what is the most wished-for type of Nordea knowledge that they want the internal chatbot to have?

Olof Månsson:

I think the key one is usually where people document stuff, which in our case is on Confluence. Also our internal intranet is a popular topic. And maybe it's a hot take, but I'm actually questioning the usefulness of it, because I think there's a lot of conflicting information, especially on Confluence.

Henrik Göthberg:

And, you know, I've been part of a couple of production releases. Sometimes it's a breeze, and sometimes you're sweating. How was this one?

Olof Månsson:

We had a great plan and a great roadmap, but for various reasons we ended up in a situation where the release became an all-or-nothing situation. And that was not really a desirable spot for us. So we tried to take a step back and split it up into multiple parts, so that if we only got 80% of the way there, all of those 80% would still go to production.

Henrik Göthberg:

So you actually don't have an all-or-nothing, one-shot kind of approach. You have it like: if it fails, it will fail here, and it's still 80%.

Olof Månsson:

Yes, that's the intention. So we tried to do it a bit more iteratively.

Henrik Göthberg:

That's cool. And I need to ask: when did you start experimenting with your first internal chatbot? And how many releases is it? Is it 2.1, or is it 1.3.5? When did you start experimenting, and when did you go to production the first time?

Olof Månsson:

So the first experiment was done by me and my colleague Philip, who unfortunately isn't here, but it was roughly two and a half years ago when we did the first one. And that one quickly got the attention of upper management, and it led up to the internal AI conference, actually, which was a bit over a year ago, or two years ago.

Henrik Göthberg:

Yeah, yeah. We need these lighthouses. Can you remember which underlying model it was at that point in time? Two and a half years ago.

Olof Månsson:

I mean, the first one we used was actually Llama 2. That did not live very long for us.

Henrik Göthberg:

This is eons back in AI time.

Olof Månsson:

Yeah, feels long ago for sure.

Henrik Göthberg:

And could you say something about your fundamental setup when you go into production now? I assume AWS Bedrock. What's the core underlying model?

Olof Månsson:

Claude Sonnet 4.5, primarily. And we take it through AWS Bedrock.

Henrik Göthberg:

Oh, so much to talk about. Anyway, I really want to welcome team Nordea here from the AI Hub, and of course Mattias Fras. You've been here before, I think maybe even in the first season, first or second season. So it's a couple of years back. Yeah, you've been here twice? No, no, you haven't. I don't know, but I'm the old guy. You're the old guy, right? We are the guys with the beards and the glasses. And then we have the new kids on the block, the really cool guys. We have Olof Månsson. So we have Olof Månsson and Mattias Fras from Nordea. So very much welcome here. We're gonna talk about AI innovation in banking and how to do it properly, with the proper governance. Doing the new stuff with the proper governance, that's the thing. But before we jump into all of that, I think we need the five-to-ten-minute background about you guys. So I'll start with Olof, the youngster. What do you say, age before beauty or beauty before age? I can't remember. Brilliance before age. That's the way I'm gonna go with that one. Yeah, so tell me a little bit about your background, your role, how you ended up there, sort of thing.

Olof Månsson:

So my background is in engineering physics. I majored in scientific computing, and I guess back then it was machine learning and deep learning. Today I guess I would just say AI. From there I moved into consulting, did a bit of general software engineering, data science, quite a lot of cloud engineering. Back then I didn't really appreciate it or understand the point of it, but I learned a lot and it became incredibly useful for everything that happened after that. From there, I moved into a role as a machine learning engineer at H&M, doing large-scale demand forecasting for H&M's global assortment. I was fascinated by the scale there: small changes have big impacts. But I wanted to do something more from scratch, and I think that's where I decided to join Nordea.

Henrik Göthberg:

Um, do you remember why? What was it that made you make the move, more than this? Was it this bald guy? You're the brilliant guy.

Olof Månsson:

Well, I was definitely tempted by starting over more from scratch, compared to where I was before. And I looked into Mattias; I listened to his first episode here on this podcast, actually, as part of my research.

Henrik Göthberg:

I was gonna get to that. You killed the whole punchline now, because obviously it's because of us that you chose to go to Nordea. That was the punchline. No, but okay. But also, from retail to banking, did you think about that a bit?

Olof Månsson:

I didn't think too much about that, actually. I wanted to be in a large enterprise, because I think the scale makes things very interesting. You can have a big impact, and some problems are very simple when the scale is small but become extremely hard when the scale is large. Yeah.

Henrik Göthberg:

And how would you summarize your role and your main responsibilities and contribution in the team?

Olof Månsson:

Um, I would say I've been the lead developer for an internal Gen AI platform over the past two and a half years. On paper, I'm a data scientist. I don't think that's very descriptive of my day-to-day work. I guess I prefer, like everyone else in 2025, to call myself an AI engineer. Yeah, we shifted that, didn't we? Good job on that. But yeah, I do a bit of everything that needs to be done as part of it.

Henrik Göthberg:

So it's really your baby that is going to production today. Yes. And you're not home when your baby needs pampering.

Olof Månsson:

Well, it's not the first time it goes into production, so we've done this a few times now. It's a fairly well-known machine.

Henrik Göthberg:

You're already a pro, I hear. Let's switch to Mattias. Why don't you do the same? Your background. All the way back? Go all the way back for half an hour. No, but do the short version, and a little bit, yeah.

Mattias Fras:

The short version: I've spent all my 25 years or so in large corporates, large enterprises. But I've only switched jobs once. I've probably worked in 50 or 60 large corporates, but switched jobs once, because I was in consulting before I joined Nordea. So I was with Accenture for the first half, working with change the old-school way: shared services, lean, IT, process improvements and so on. And then I joined Nordea. I actually joined as CFO of the Swedish life company, because I wanted to try to have a real job, see what it was like, you know: own a P&L, be in the management team of a company, driving the strategy, having lots of great people to manage. So I did that for four years, but then I went back to consulting, you could say, but in the context of Nordea. And since '15, I've been driving change the new way. And how do you define change the new way? What do you mean by that? Yeah, so my first assignment back in '15 was to look into how we could move roles from the Nordics to Poland to get a wage arbitrage, which is the old way. But in the fall of '15, this thing called RPA came into the market. It had been around for a while; I think it originated in Barclays and some of the big banks, where they built this. But in '15 it started to have a product-market fit, I guess. So UiPath and...

Henrik Göthberg:

Um, yes, I remember them. Blue Prism, or...

Mattias Fras:

Blue Prism. Blue Prism, yes, came about. And then we looked into that, and we saw: well, what do you robotize? Well, actually the well-documented, rule-based, well-structured processes, and that's pretty much what you're also looking for when you want to move stuff abroad, right? Offshore. So we saw there's a big overlap here, so let's try to go that route instead. It's cheaper, it's faster, and it also makes our people grow in terms of being more digital and how to use digital tools. So that's what we did, actually, and that's when the penny dropped for me. Say, this is new technology we can use as a bank; even though we have lots of legacy and problems, we can at least use some tools to leapfrog a little bit.

Henrik Göthberg:

I mean, I know these scripts are band-aids, but at least you could make something happen. But that was the starting point. And when was the first time you even talked at the Data Innovation Summit? I mean, we've known each other for years. It was probably '16 or '17 or something like that. Yeah, and already then it was machine learning, I think. Or was it only RPA then?

Mattias Fras:

No, because the chatbots came in '16 or '17. I mean, deep learning came as a thing to the industry in general in '15, I think. So in '16 the chatbots came, everybody talked about chatbots, and we implemented that, and that was sort of the second new tech that I did. You're talking about the chatbot, the customer service one, or the chatbot on the internet, stuff like that.

Henrik Göthberg:

Amelia, Amelia, right? And so you had many... I mean, there are several different types of techniques underneath here, and not the LLM technique as we are doing it now. But then, for me, you have been Mr. AI in Nordea for quite some time. There may be many, but we take the claim. And then you moved towards... there's been a journey. We're gonna talk more about that later, but now you're in an AI hub. And that keeps evolving.

Mattias Fras:

Yeah, so we did that, and then we worked a lot on governance also, in like '18, '19, '20. And then in '20 we realized that AI and data live everywhere. You can't sort of run it from one box. So then we said, let's set up this AI hub that is a layer on top of all the silos, to attract problems, people, use cases and everything else into one place. That's why we established it in 2020.

Henrik Göthberg:

So the AI hub has been going since 2020. Yeah, or 2021.

Mattias Fras:

Early 2021. But now it is no more. No, we have just reorganized. Oh, then we take that in a second. So I will get to that. But basically, this has put us in a position where we have a very good overview. We've been able to work a lot with stakeholders. We have a good foundation now, because then November 30, 2022 happened: ChatGPT. It didn't happen immediately, but I would say in March, April '23, that's when we presented it in our highest management team for the first time, demoed the first ones. And in the fall of '23, Olof and the others put out this vision and started the prototyping. And then in '24 we did a lot of pilots on laptops, and in the fall of '24 we put the platform in production. And then early this year we went live with the first use cases. So now we are in a different era with the whole Gen AI.

Henrik Göthberg:

We will pick this apart later, but just as a summary then: what's your role now and what is your main focus? Your role since Monday.

Mattias Fras:

Oh, what a scoop: head of AI adoption in the bank. So we are now reorganizing a little bit, so we have execution, adoption, transformation, strategy.

Henrik Göthberg:

Yeah. Oh, we'll get back into that. Okay, great stuff. I'm so excited to have you guys here, and now let's do a setup of the theme, because we were ping-ponging back and forth: how should we frame this theme, and how should we take AI governance? And then Mattias came back with what I think is a brilliant title for this podcast. I'm gonna read it. We put "AI innovation in banking", and then I love the payoff: "the art of building the new and ensuring proper governance". That was almost poetic, Mattias. So why did you want that title? I love it, by the way.

Mattias Fras:

Because, since I'm the old guy, I've been around for so long, right? In the old days, I mean before Gen AI even, I would say that the tools we had, like the recommender systems, the chatbots and other tools, you could put them on top of things. You of course need to have data available for the models, and to train the models and so on, but you didn't have to integrate as much. But now, as we go into Gen AI and agentic AI, which everybody talks about, and we do as well, you need to integrate AI into the core, and then you have to change things, which means you have to rewire many different internal functions. I fully see what you mean. So doing innovation properly, and changing these things at the same time, is like an art. And you need to have smart guys who are thinker-doers, like this guy and some others.

Henrik Göthberg:

And the art here, another way of putting the word "art" in an enterprise way, is the navigation of all these different facets. It is an art to balance all these different constraints, the politics and the challenges and all this. Yeah, this is an art.

Mattias Fras:

And it's like realizing that I cannot innovate unless I embrace that stuff. Because if I don't embrace the governance stuff, then I should go work for a startup.

Henrik Göthberg:

That's true. And on another topic here, we can get into deep rants almost: I see some people trying to do AI adoption coming from HR. So they have a people perspective, and all that is great, but they don't have a deep understanding of this art, and they don't have a deep understanding of the engineering behind it. So then it becomes superficial, in my opinion. That's one angle. And then you have the other angle, where we are all engineers trying to engineer our way through this adoption technically, without any people skills, without any socio-technical approaches, and that won't work either. So this is this whole balancing act where we become hybrid people. This is what I read into the art.

Mattias Fras:

And the art requires, like we often talk about, the multi- or cross-functional teams, all this. So unfortunately, these guys have to spend... probably 80% of our platform work is governance work, and 20% tech. Or maybe I'm exaggerating, but in general, there's a lot. And when I say governance, I don't just mean protocols, but reading the policies and so on.

Henrik Göthberg:

And now, Olof, are you gonna help me pick apart proper governance? What do you mean, proper? What makes governance proper and not just governance?

Olof Månsson:

I think for us, or for me at least, we have the innovation part, which is basically: how do we come up with new ways of doing things in a smarter or more efficient way? And the governance part is: how can we scale this? Governance sounds super boring, but it can also be quite engineering-heavy, in terms of resilience. You have the proper monitoring in place, you need to prove that it's secure. You can't just say, yeah, you know, I know what I'm doing, it's safe. You need to be able to prove it and convince the right people that it is actually a good solution, a well-built solution.

Henrik Göthberg:

And there are so many ways to define proper. I would immediately go to: I want to see computational governance. I don't want to see another PowerPoint governance model, I don't want to see another Excel sheet governance model, I don't want to see bureaucrats' governance. I want to see computational governance: hardcore, guardrails, monitoring, etc. Or, if you have many agents, how do we observe them in real time? This is not paperwork governance anymore. So I read a lot into "proper", and this is why I think it's a brilliant theme. How do you tell that story? Say you're in the elevator with the CEO, or one of the business unit heads: how do you talk about these things with, how should I put it, the normal banking people, the business people? Are they fully into this or not? Or do you need to figure out a way to express it? What's the elevator pitch for this, and how do you talk about it internally?

Mattias Fras:

I think you need to start by talking to people who deeply understand the field. If you talk about IT security, or data compliance, or data privacy, you need to bring in the people who know their field deeply, because they are the ones who can best assess whether what we want to do has fit-for-purpose governance or not. Because if you can convince those guys, then they will convince the others. But it's hard to talk to the average risk person who has not been exposed to AI and start this conversation; it's hard, basically. But if you find the ones who truly understand the risks and convince them, then you get these network effects. If this guy thinks it's okay, then yeah, it should make sense. And then you follow the corporate policies and processes. We have good processes in the bank, as long as we understand how to use them. But that's what I mean by rewiring: some of these you need to tweak a little bit. You cannot validate an AI model in the same way as you validate a credit risk model, for example.

Henrik Göthberg:

Exactly. But the fundamental idea is there: you need to figure out how to rewire, to do what we're supposed to do, for our context. I love it.

Mattias Fras:

And you need to have people who know what they're talking about. So I cannot go in and talk about how to deploy cross-region inference for us with the cloud team. But Olof can.
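As an editor's aside, and only as an illustrative sketch (this is not Nordea's actual setup): in AWS Bedrock, cross-region inference works by invoking a geography-prefixed "inference profile" ID instead of a plain model ID, so requests can be routed across regions within the same geography. The region-to-geography mapping and the model ID below are assumptions for illustration:

```python
# Illustrative sketch only -- not Nordea's actual configuration.
# Bedrock cross-region inference uses an inference profile: the model ID
# prefixed with a geography code ("us.", "eu.", "apac."). The mapping
# below is a simplified assumption for illustration.

REGION_TO_GEO = {
    "eu-north-1": "eu",
    "eu-west-1": "eu",
    "us-east-1": "us",
    "ap-southeast-1": "apac",
}

def inference_profile_id(model_id: str, region: str) -> str:
    """Build a cross-region inference profile ID for a Bedrock model."""
    geo = REGION_TO_GEO.get(region)
    if geo is None:
        raise ValueError(f"No inference-profile geography known for {region}")
    return f"{geo}.{model_id}"

# A team running in the EU would then call Bedrock with something like:
#   client = boto3.client("bedrock-runtime", region_name="eu-north-1")
#   client.converse(modelId=inference_profile_id(model_id, "eu-north-1"), ...)
profile = inference_profile_id(
    "anthropic.claude-sonnet-4-5-20250929-v1:0", "eu-north-1"
)
print(profile)  # → eu.anthropic.claude-sonnet-4-5-20250929-v1:0
```

The point of the prefix is that capacity can be served from any region in that geography while data stays within it, which is exactly the kind of detail the cloud team conversation turns on.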

Henrik Göthberg:

Yeah, and this is also a little bit back to the whole thing we were talking about, doing it properly: you need a cross-disciplinary team. It becomes very problematic when we have the people talking about AI and adopting AI with an HR background over here, the people adopting AI with a complete tech background over here, and over here the real people who know banking, and all of a sudden these three things need to work together. Otherwise it becomes very superficial, and this is why it's such an art, I think. But I was gonna ask you about this theme. My take is that the importance of this theme, and the understanding of it, we need to work on even more, with agentic AI and the way the trajectory is going now. I think you said it: we need to learn much, much deeper how we get coherence, how we integrate all these different parts. So my underlying question is: this theme, is it important in '25? Is it more important, or less? Have we only scratched the surface on this topic? Because I think it's a brilliant topic, but I think we're just looking at the beginning of how much more we need to do to understand this.

Mattias Fras:

One thing that I am really proud of, and that is kind of a no-regret move we have taken, is that we have built this platform, because of the learnings we have gotten from stitching it together. I mean, we're not building everything, of course; we're stitching together components, and developing, of course, also, to make something like this come to life with actually thousands of users in the bank. The learnings we have from this, knowing that we will not build all this stuff in the future, we will buy, we will most likely kill much of the code. But the learnings that we made as a team, I think, are super valuable to cut through the vendors, if you will. Cut through the vendors, cut through the noise, cut through the mud.

Henrik Göthberg:

The noise, exactly. Because what I take out of that is: it's one thing to put tech in a pilot or a prototype and show it only to the board. It's a completely different ball game to take something out to 3,000, 5,000, 10,000 people, with whatever robustness in terms of how we define product operations, etc. This is product to me. The technology side of this is such a small piece of making it work. So the learning here is of course the meta-product: all the stuff around this that you needed to figure out, so that now it's like, ah, we have kind of a blueprint for how to think about that. Because we tested it with security, etc.

Olof Månsson:

I was gonna say, I think it's not only the learning for us, it's the learning for the whole organization. We've gone through the process with everyone: with IT security, with the cloud team, with everyone.

Henrik Göthberg:

And that is the real product work, in my opinion. The tech work is super important, and we can build the tech from scratch ourselves. But it doesn't matter if you build the tech from scratch yourself; it's just a tiny piece, and you then need to put the whole thing in a wrapper that works for the bank. So this is gonna be a great conversation, because if we have that as some sort of backing point, we can reference it, we can use it a little bit as a learning case study. So please use examples all the way. I really want to dig into this now. And I thought the way to start digging into it is: if we take this project, this, let's call it product, the internal chatbot, and the learnings of taking that from prototype to 5,000 users, and doing that innovation with governance, having gone through all the red tape, all this. My first question is: what was the context of the AI hub when you started? And what's the context you've been working in? You literally changed the reorg on Monday, so I guess it's an AI hub thing you've been doing. Could you give the listeners and me a little bit of the context of where this came about? How are you working with these things? What is the AI hub where this started, so to speak?

Mattias Fras:

Yeah, I think it's a little bit back to the story that I started telling: in the fall of '23 we presented a vision, which we then tested on a number of pilots in '24, on laptops, which we then put onto proper production infrastructure by the end of '24, and then we started to build use cases on top earlier this year. And because we did it in this way, we could also find ways to apply fit-for-purpose governance. You put governance around the platform first.

Henrik Göthberg:

So what do you mean with governance around the platform?

Mattias Fras:

Yeah, so let me start with: first you have to have an LLM, right? So we have Bedrock, so we created an application for access to models via Bedrock.

Mattias Fras:

That's one application, and then you make sure that you have proper governance around that, according to Nordea's standards.

Henrik Göthberg:

So now we're down into the technology: AWS, Bedrock, the LLM, and the first layer of that governance onion, so to speak.

Olof Månsson:

Yeah, it's basically: how do we make sure that anyone who wants to access Bedrock can do so securely, without the data going somewhere it's not supposed to go.
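The bank's actual controls are not public, but the kind of platform-level gate Olof describes could be sketched as a thin allowlist in front of the model API: every call must use an approved region, a private (VPC) endpoint, and an approved model, or it is rejected before any data leaves the network. All names and values below are illustrative assumptions, not Nordea's configuration:

```python
# Illustrative sketch of a platform-level gate in front of a model API.
# Not Nordea's actual controls; the region, endpoint, and model values
# are made-up examples of the kind of allowlisting discussed here.

ALLOWED_REGIONS = {"eu-north-1"}
ALLOWED_MODELS = {"anthropic.claude-sonnet-4-5-20250929-v1:0"}

def check_bedrock_call(region: str, endpoint_url: str, model_id: str) -> None:
    """Reject any call that would route data outside the approved setup."""
    if region not in ALLOWED_REGIONS:
        raise PermissionError(f"Region {region} is not approved")
    # Traffic must stay on the private network via a VPC interface endpoint.
    if ".vpce." not in endpoint_url:
        raise PermissionError("Bedrock must be reached through a VPC endpoint")
    if model_id not in ALLOWED_MODELS:
        raise PermissionError(f"Model {model_id} is not on the allowlist")

# An approved call passes silently; anything else raises before data is sent.
check_bedrock_call(
    region="eu-north-1",
    endpoint_url="https://vpce-0abc123.bedrock-runtime.eu-north-1.vpce.amazonaws.com",
    model_id="anthropic.claude-sonnet-4-5-20250929-v1:0",
)
```

In practice a gate like this sits alongside IAM policies and network rules rather than replacing them; the sketch just shows why "who can call what, from where" is mostly an engineering question.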

Henrik Göthberg:

So that was hardcore, at the core of how you use Bedrock: access management, data security type of stuff. A lot of networking discussions. Yeah. And who is typically involved in that kind of conversation in the bank? I mean, you need to talk to the devil and his mother, right? Or how does it work?

Olof Månsson:

I mean, we talked to the cloud team and security, and to some extent I think legal were involved as well. Well, GDPR.

Henrik Göthberg:

Of course. And just to put a point on it: it's like, duh, that's what you need to do. And if you're not ready to do the work, you're not ready to do this innovation.

Mattias Fras:

And it's fairly well documented. It's a pretty straightforward process, but there's a lot of work to be done. So that was the first piece. Then on top of that you put the infrastructure: orchestration, user interface, databases, access management and so on. That's the platform, and then you do the same there, you put proper governance on that, and you just point to the first application, the LLM: we're gonna use that for this, and then you put the proper governance around that. That was what we did, like a year ago. Then what does proper governance mean on that level? It's pretty much the same, but maybe more, because you still don't know what you're gonna use it for; the use cases come on top. But you need to put enough governance on it so that you can be safe that it will sit in a good place in production, all the pieces, so you do the testing and everything else. So what I'm getting to is the staged process. Then you do the business use cases on top of that, and they also go through proper governance. So it's a layered approach, and when you're done, you have done governance in a layered approach, which is also very modular, and it worked well for us.
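The layered approach Mattias walks through (LLM access, then platform, then use case, each layer pointing back at the previous sign-off instead of re-proving everything) can be sketched as a simple chain of approvals. The class and field names are illustrative assumptions, not an internal Nordea artifact:

```python
# Illustrative sketch of the layered ("onion") governance described above:
# each layer is approved once, and later layers reference that approval
# instead of re-proving everything. Names and scopes are assumptions.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Approval:
    scope: str                         # what this layer's sign-off covers
    builds_on: Optional["Approval"] = None

    def chain(self) -> list[str]:
        """Approvals a reviewer can point to, innermost layer first."""
        layers = [] if self.builds_on is None else self.builds_on.chain()
        return layers + [self.scope]

llm_access = Approval("Bedrock LLM access")
platform = Approval("Gen AI platform (orchestration, UI, storage)", builds_on=llm_access)
use_case = Approval("Internal chatbot use case", builds_on=platform)

# A new use case only needs its own review; the inner layers are already
# signed off and simply referenced.
print(use_case.chain())
# → ['Bedrock LLM access', 'Gen AI platform (orchestration, UI, storage)', 'Internal chatbot use case']
```

The design point is the modularity: adding a fourth use case appends one node to the chain rather than restarting the whole review, which is exactly the "all of those 80% still go to production" property from earlier in the conversation.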

Henrik Göthberg:

So please.

Olof Månsson:

Yeah, I was gonna say that maybe it's worth highlighting also that the use cases for us were a big driver in developing the platform. It's not like we just sat there and built our platform and then, oh, by the way, we should throw use cases on top of this. Rather, they were part of the development, and quite early on we were having discussions like: okay, how can we validate this and prove that this is actually built in a responsible way and behaves the way it's supposed to?

Henrik Göthberg:

And what was the first use case that you used as your catalyst? Or two or three. How would you frame the use cases at this point?

Mattias Fras:

I think, and correct me if I'm wrong here, but the use cases that we piloted on your laptop in '24, some of them were then the first use cases on the platform. So we had one use case that is a chatbot for internal guidelines, and we have one use case that supports research. And one for marketing, tone of voice.

Henrik Göthberg:

So you had your use cases, but I think there are some big learnings here today. You took a modular, almost onion approach: how do we secure and govern the core? How do we govern the orchestration layer? And now that we've done that, we can add use cases on top without, I guess, redoing everything from scratch all the time. The use cases can leverage the platform, so it's truly a platform, self-service, you know. But why did you go that route? Was it obvious to you from the beginning? Was it a lucky shot? How did you get to that architecture? Because this is architecture, to some degree.

Olof Månsson:

I mean, I think you have to slice the problem into smaller parts in order to solve the full thing. Trying to do everything at once would be hard for us, but I think it would be incredibly hard to sell to everyone else as well, to convince someone that this is safe. It's a lot easier to prove that smaller pieces are safe, and then you just always point to the previous work you've done.

Henrik Göthberg:

Okay, so there are some fundamental patterns here for how to do that. That part is super obvious. What is not always obvious is: what's the smallest way to slice? I haven't thought it through, but instinctively, hmm, it's a pretty good way of slicing it. I can come up with other ways of slicing it, but I think you have a core, because I have an onion picture at home somewhere of the AI compound system, and you're following it to some degree here. Okay, that's cool. So what was the context now? Did you do all this from within the AI hub? How did you reach out your tentacles? Could you basically drive this from the hub as it was when you started? How did that work in practice?

Mattias Fras:

I mean, the hub is quite small in terms of FTEs. So we have worked through and with others, teaming up with agile execution teams, business teams, and a bunch of different group functions. We have definitely been the orchestrator, I would say, and a driver, but we were not saying from the beginning that we're going to build a big platform and have ten use cases and 10,000 users. We were, as Olof said, taking it step by step, and each step required an agile execution team and some more people, but it was never a huge factory. But maybe you're triggering me straight into how this was organized step by step.

Henrik Göthberg:

Because now you've even acquired the adoption title here. So we have a situation where you started on a laptop, right? And then what were the increments? Because now I think you're at about 5,000 using the platform, roughly three, five?

Olof Månsson:

Yeah, so where we started was, I mean, now you're driving adoption, but we've been fighting for adoption for two and a half years now, I would say. And for us it started with: okay, where do we see potential? We went out on a roadshow. I don't know how many people in Nordea we talked to; everyone must have heard us talk at one point or another. We tried to understand the business, see what they do, and from there put together some use cases to showcase what's possible here, what could provide value.

Henrik Göthberg:

So you started that journey literally before you had your first thing in production. Oh yeah, oh yeah.

Olof Månsson:

And then with these key lighthouse use cases, we tried to convince people: we can do this, we can scale this, this can become something useful. And all the time we were iterating, improving these use cases, and proving to the organization that this is something worth betting on. I mean, like you said, we've been a very small team in terms of headcount, so we tried to tap into resources wherever they were available, sometimes shorter term, sometimes longer term. And yeah, I think that was right, and we're trying to scale from there.

Henrik Göthberg:

So if we take this, because I think there is so much learning in this process, right? People are dreaming about it and fluffing around with it, but not that many have actually done it. There are a couple of large corporates doing this, but it's one thing to just have an enterprise license of OpenAI; it's a completely different ball game to start adding services and use cases onto it. So, if you remember when you started: can you elaborate on the journey? How you got the first ideation, how you got some budget, and what the decision points or stepping stones were going from a laptop to 5,000 users. Take a milestone or roadmap view on it: how you sold it, or how you approached it.

Mattias Fras:

I can say it like this. We had an internal AI conference in October-November 2023, right around when the Nordic Data Science and Machine Learning Summit was. Olof and Philip were on stage demoing the first widely demoed version of it, anyway. And they had a vision that we should put this kind of solution in the hands of everyone in the bank.

Olof Månsson:

I think already then the idea was to build something modular.

Mattias Fras:

Yeah, so the idea was to do that, and then of course we got a lot of help from ChatGPT and everything else that people are using at home. So people were already excited before we started, which is different, and which is why gen AI is so hard and at the same time so easy. It's easy because people know what it is, they use it. But it's hard because we don't have ChatGPT; we have something else, so you have to manage expectations. But how you thought about the modular approach and everything, I don't know. You're going to have to explain how you came up with the approach, because that was you and Philip.

Olof Månsson:

Yeah, I think the key thing there was that the underlying models are general models; they can be used for anything. And the hypothesis back then was that a lot of the business logic could be handed over to the underlying models. Since they're general, that means we can build a pretty general system that can handle a wide variety of tasks with only some quite simple configuration. The next step was to prove that, which we did a few months later, actually. That's where we showcased the first version of, I think, around five different use cases.
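
[Editor's note: a minimal sketch of what "a general system with simple configuration" can mean in practice. All class, field, and source names here are hypothetical illustrations, not Nordea's actual system.]

```python
from dataclasses import dataclass, field

@dataclass
class UseCaseConfig:
    """One use case = one configuration object; no new code per use case."""
    name: str
    system_prompt: str                       # business logic handed to the model
    knowledge_sources: list = field(default_factory=list)  # e.g. wiki spaces

def build_prompt(cfg: UseCaseConfig, question: str) -> str:
    """The single shared pipeline: every use case runs through this same path."""
    sources = ", ".join(cfg.knowledge_sources) or "none"
    return (
        f"{cfg.system_prompt}\n"
        f"[grounding sources: {sources}]\n"
        f"User: {question}"
    )

# Two very different use cases, zero new code -- only configuration.
guidelines = UseCaseConfig(
    name="internal-guidelines",
    system_prompt="Answer strictly from internal guideline documents.",
    knowledge_sources=["confluence:guidelines"],
)
tone_of_voice = UseCaseConfig(
    name="marketing-tone",
    system_prompt="Rewrite the text in the company's tone of voice.",
)
```

The point of the sketch is the shape: the general model absorbs the business logic via the prompt, so adding a fifth or tenth use case is a configuration change rather than a new system.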

Henrik Göthberg:

And decision-making wise, there are budget decisions, enterprise decisions that we should have this service, with some money involved, but there are also technical decisions, architectural choices, and there are security decisions, and risk or legal decisions, I assume. So how did you navigate those decision arenas? Did they just follow logically from "we want to do this", and then you go around the circle and get all the stamps? How did that work for you guys?

Mattias Fras:

So that's where I can contribute a little bit, even though I'm the old guy, because I've been around for a long time and I know people. We were able to identify the people who were really skilled and who could also make decisions that would help us. If we could convince them this was the right thing to do, they helped us. And we got a lot of help from security architecture, from legal, from compliance, from data privacy. We found the people who also understood that this is going to be a thing: "and we will help you."

Henrik Göthberg:

So it becomes a huge co-creation effort, and it's about finding those key actors who want to co-create. Is that a fair summary? Yeah. Okay. And just to back out again: we're talking about innovation, something that is super cool, but then we're talking about the onions of doing it properly. So far we've arrived at a modular architecture. And with the modular architecture, we can take a modular approach to governance. And with the modular approach to governance, it's also about how we slice it and take the complexity down, I guess, so that we can actually talk about one thing at a time with our stakeholders. Is that a fair summary so far?

Mattias Fras:

Yeah, but I would also say one big driver was that we wanted to avoid any lock-in. We wanted to avoid locking ourselves into something we didn't know was going to last. So being modular, being agnostic, was super important to us, because who do you bet on? Who do you bet on even today? And back then, who did you bet on? We didn't know. So let's put the pieces together ourselves, and whatever we end up betting on in a few years, we can go there.

Henrik Göthberg:

That's it, you've confirmed it then, because you've built it and made the governance like this. So now, okay, if I want to switch from one underlying model to another, you can. Maybe still within Bedrock, I guess. It's going to be trickier if you want to flip provider, but that's also possible, I guess.

Mattias Fras:

Right now it's Bedrock, and I think that's all we can say.

Henrik Göthberg:

Yeah, yeah. Okay, but the underlying topic is actually the underlying LLM, and that is actually quite replaceable if you wanted to replace it.
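
[Editor's note: a hedged sketch of why a modular design makes the underlying LLM replaceable. The application codes against a tiny interface, and each concrete model is an adapter behind it. Class and method names are invented for illustration; this is a pattern sketch, not the actual system.]

```python
from typing import Protocol

class ChatModel(Protocol):
    """The only surface the application sees: one completion method."""
    def complete(self, prompt: str) -> str: ...

class ModelA:
    """Adapter for one hosted model (e.g. one Bedrock-provided model)."""
    def complete(self, prompt: str) -> str:
        return f"[model-a] {prompt}"

class ModelB:
    """Adapter for another model; same interface, different backend."""
    def complete(self, prompt: str) -> str:
        return f"[model-b] {prompt}"

def answer(model: ChatModel, question: str) -> str:
    # Application logic never names a concrete model, so swapping the
    # underlying LLM is a configuration change, not a rewrite.
    return model.complete(question)
```

Swapping provider rather than model means writing one more adapter, which is the "trickier but possible" case discussed above.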

Olof Månsson:

Yeah, absolutely. And one interesting thing I want to add there: for this process, a key thing was that as part of the development work for each phase, we kept track of what the key enablers were to initiate the next phase. I think that was something we were really good at, which made the timeline from hard demos to production reasonable.

Henrik Göthberg:

Okay, but let's back out of this a little bit, because now we have a great use case and real, properly governed innovation, but it's one type of AI innovation: the internal chatbot approach. So if I back out a little: how have you defined AI innovation? I remember how we talked about this. What is traditional AI, what is new AI? At some point it almost got easier, because everything gen AI, everything using LLMs, that was "AI", and then we could talk about machine learning over here. But when you say AI innovation in the bank today, are we talking about machine learning, or is that not part of the game anymore? How broadly do you tell this story today? I remember us having this conversation years back, right? What is AI? What is analytics? It's almost like it cleaned up a little: AI is the gen part or the agentic part, and machine learning is something else. I don't know.

Olof Månsson:

I think for sure both are still very relevant, and to me they're very top of mind. I don't think I can go into details here, but there are use cases in the pipeline that will for sure include both.

Henrik Göthberg:

I'm super happy to hear that. I have a real problem when AI gets reduced to chatbots. I'm serious; that's almost the default inference people make about AI. But for you, when you say AI innovation, how broadly do you cast the net?

Mattias Fras:

I would put it like this. We call it AI innovation, but the innovation is doing things in new ways with AI and data. So when we look at a business process today and want to apply innovation, we look at the process and try to understand what the output is and what the required data is, and then think like a startup: how would you design that process today with AI and data, or machine learning, or gen AI, or whatever? So the innovation is more about having an AI-and-data-first mindset. Think about problems in an AI-first way. That is my team's contribution to this so far. And Olof, I think you'd say you do mainly application: you're like a software developer, someone else has already invented the AI, it sits there, we can consume it and integrate it, and we can support business processes. How you do that is the innovation. So the innovation is more about how you think about the problems. I can't say that we are there yet, but that's where I want to get to.

Henrik Göthberg:

But do you agree, or do you have another angle on this, or do you want to add something to that?

Olof Månsson:

I think it's a fair point. I think of it very much as one more tool that we can use to redesign. In some areas that unlocks huge new possibilities that weren't there before. Some processes probably should have been redesigned a long time ago, and you don't need this new fancy tool to do it. But for sure it unlocks new things.

Henrik Göthberg:

Because I want to test something. I have struggled with this view, or actually not struggled, but I think I've gradually grown into asking: well, what the hell do we mean by AI innovation? Is it even smart to lead with the tech into the question of innovation? Tricky one. The way I work with this today, and this is terrible stuff I'm testing on you, with the latest views on building agentic systems and where this is going, product-centric and all that: I realize, and I'm testing you here, that I think we're going to a place where innovation is always about the workflow. That's the problem, right? And now we have a very nice toolbox of techniques and technologies. But I've started to lean on Jan Bosch, one of the professors at Chalmers, who was on the pod. Five, ten years ago he asked: what is digitalization? And he made a super simple definition that stuck with me: software, data, AI. Early on. So imagine what we're doing now when we're building AI compound systems. When we say AI engineering, it's part software, part UX, part integration, and then a shitload of data, and by the way knowledge and context engineering, blah, blah, blah. And then it's techniques: machine learning techniques, AI models, ML, LLMs, SLMs, TRMs, whatever you want to call them. So I'm getting more and more fond of this Berkeley AI Research Lab definition of the AI compound system. It's all systems, it's all products. In the end, we innovate with data and AI, but it's a damn software system as well. Do you follow this logic? The reason I'm getting so passionate about this conversation is: damn it, they don't understand. 
They think we can buy and fix a model, and then it's done. So do you see what I'm talking about? Why do you need a data engineer, a data scientist, an AI engineer, a UX guy, and a software integration guy? That's the work. That was a rant. But can I have your feedback or input on it? This is me extrapolating what you said; I'm not sure we're talking about the same stuff.

Mattias Fras:

No, I understand what you're saying. But don't you have a conference that's called AI and Data Innovation Summit?

Henrik Göthberg:

The Data Innovation Summit, that's the core story here, right?

Mattias Fras:

No, but the way I think about it is that I agree that "AI" is a distraction, and "agentic" and "agents" even more so. I deliberately didn't use the word agentic in my speech, because I wanted to try to avoid it. You're joking about it, but some other brilliant guy at another conference, for another bank, avoided "AI" altogether. I think that helps, because I agree with you that it's not about the models and it's not about agents, it's about the workflows.

Henrik Göthberg:

But I don't want to diminish the AI stuff. I think it's really driving the innovation. It's just that you also need the data, you need the context, you need the scaffolding, you need the UX. Of course the AI is the core here that lets us do completely different things.

Mattias Fras:

But it also confuses people. We collect a lot of use cases in the group, and we've got hundreds of them, and in some cases, just because it has "AI" in the heading, it's there. But if you removed the word AI, nobody would care about the use case. So instead: remove the AI, and you may end up with problems that people actually care about. Let's take problems people care about and then apply the AI. So I see where you're coming from, and I think it's not about the model, it's about the operating model and how you work together. But I think we're in that phase now, right? We're still in the hype phase, and we are leveraging that. We are leveraging the hype.

Henrik Göthberg:

So it's a matter of leveraging the hype. But if we want to do a proper job, the real art of innovation and the real art of proper governance, we can leverage the hype, but we're not allowed to fall into the trap, right? And that is why, as you said, you need to have the real engineers around you, otherwise this doesn't work. And maybe a couple of different types of engineers, even.

Olof Månsson:

But I also think the hype will only take us so far; at some point you need to start proving value as well.

Henrik Göthberg:

Yeah.

Olof Månsson:

Yeah.

Henrik Göthberg:

And I think that time is approaching rather fast. It's going rather fast, and then we're back to basics: in what way did you innovate the workflow? If you're not getting an innovation in terms of experience or productivity or effectiveness, it's nothing, right? But it's hard, because to do it properly you really need to excel at innovating workflows. So it's very hard to be too far away from the core business. The whole innovation topic then becomes: you need to be working very, very closely with the domain expert, or the workflow owner, so to speak, right?

Olof Månsson:

Yeah, you need to understand what you're going to redesign and be very close to them to see the possibilities. Or empower them to see the possibilities.

Mattias Fras:

And you need the everyday AI stuff: the copilots and Enterprise Agents, as our internal chatbot is called, these tools that people get at their fingertips. That will maybe not drive ROI in the short term, but it will drive adoption, and adoption drives learning, and learning then takes you closer to Horizon 2, which is workflow-integrated AI. You cannot go straight to disruption. With a large organization, you have to get adoption, get to learning, and then get into integrated AI, and then you will see the... Can you slow down here?

Henrik Göthberg:

Because I had a question mark around the differences in governance and innovation when we talk about internal chatbots, copilots, internal enterprise systems, and even customer-facing systems. And I think you have a deeper model and view on this. Maybe it doesn't fit perfectly with what I said, but you've been talking, even at conferences I think, about phase one, phase two, phase three. Could you slow down and take us through that? Because I think it's quite important. It also sets the arena for what type of governance and what type of innovation we're talking about.

Mattias Fras:

Yeah, so it's not rocket science, but it's a way to divide things into different horizons, as we say. The different horizons will have different requirements, different investments, different training needs, different tech requirements. Horizon 1 is your everyday AI, the general-purpose tools like Copilot and ChatGPT Enterprise that let you get individual productivity up. Right now that's maybe a way for you to save 30 minutes per day; if you're skilled, maybe more. But it's not something that's going to hit our cost-income ratio this year or next year.

Henrik Göthberg:

This is the personal productivity case: super important for the comfort zone and super useful, but it's not enterprise-grade, core-workflow AI reinvention. That's the distinction you're trying to make here, right?

Mattias Fras:

It's how you use ChatGPT at home, right? You get into it, you get better, and you learn. So this is Horizon 1. Why do we need it? Basically, you need it to increase your AI literacy; you need it to light up the art of the possible in people's heads.

Henrik Göthberg:

You need to light the spark. You're lighting a spark, right?

Mattias Fras:

Lighting the spark. And one thing I'm proud of with what these guys have done with Enterprise Agents is that they've been very feedback-focused. They follow the usage very closely, what people like, what they don't like, and they have a very structured way to let that drive the prioritization of features in the next release, which is happening about now for this one. So it's also a way to understand what problems people really want to solve and what integrations they need. That's also something we learn in Horizon 1. And for Horizon 1 to be scaled, you basically need fit-for-purpose governance, and then you can kind of fail. That is my view, anyway.

Henrik Göthberg:

So this is super cool, because now we have fit-for-purpose governance, and to some degree we've peeled the onion on it, because that's the first horizon of AI innovation you've been doing. How would you now frame Horizon 2? What is that?

Mattias Fras:

So Horizon 2 is more the workflow optimization. In Horizon 2 you look into a process that you care about and think can be improved, like an onboarding process or a mortgage process, some process or workflow in the bank, or in whatever business you're in. A claims process or something. And then you try to figure out not how gen AI can improve that process, but reinvention. Try to almost disrupt the way you do it. Think: how would a startup do it if they had all the tools and the data? Oh, we have tools. Oh, we have data.

Henrik Göthberg:

Okay, like: how would Google do this process?

Mattias Fras:

Back in the day it was that; now it's like, how would Lovable do this? But that's what we're really talking about. That's hard, and that's where agentic AI comes in. That's one technique. And it's interesting, because a year ago nobody talked about agentic AI. I don't know if you remember, but nobody talked about it. I mean, some guys did, some guys didn't.

Henrik Göthberg:

You can listen to Data Innovation Summit talks about it from two years back. I'm early. I'm early.

Mattias Fras:

But if you looked at the consultants or LinkedIn, it wasn't there; you had to go to the Data Innovation Summit to hear about it. Anyway, now everybody's talking about it, and now it's almost bullshit, because it's being sold in all different ways.

Henrik Göthberg:

Sorry, but now it's a marketing term. It is, isn't it? Yeah, it's a marketing term. So what's hard here? Let's really break down the innovation with the chatbot, the first horizon. You come up with a cool technology, but then it's like: hey, it's up to you to be clever with your personal productivity. Horizon 2 innovation is a completely different ball game, because you cannot do it without deep domain knowledge and deep AI knowledge. It's impossible to do, in my opinion, if they're not working as one.

Mattias Fras:

Yeah. First of all, I would say even in Horizon 1, you should equip yourself, and the leaders of teams should also equip themselves, with tools and incentives and KPIs and whatever else to drive change. Because I think you can also get change with this productivity stuff, this Horizon 1 stuff.

Henrik Göthberg:

So you can teach them how to use the tools better, and you can help them find use cases that would benefit 5,000 people in the bank, and in that way, yes, at some point. But I think what you're onto is that deep reinvention of core flows requires both: it doesn't matter how good an understanding you have of the core flow if you don't understand the art of the possible, and vice versa, if you understand the art of the possible but don't know where and how to apply it or where the bottlenecks are. This is where this ball game of innovation becomes slightly different, I think.

Mattias Fras:

Yeah. And that's what I'm thinking about now, as I focus more on adoption: if you want to put together a team to do a Horizon 2 project, what roles do you want to have there?

Henrik Göthberg:

This is a good conversation. Now we're talking about AI innovation Horizon 2. I've been ranting a lot about the difference between personal-productivity AI and enterprise-grade AI, and I've been giving people a lot of shit. When people understand the difference and treat it with respect, I think it's perfect, I think it's smart. But when we start selling Horizon 1 like it's the shit, I get frustrated as hell. Do you see that? Because I see it too much in the media. What do you think? Or am I too grumpy? Maybe I'm just grumpy.

Mattias Fras:

I mean, first of all, gen AI is not that old. Gen AI itself might be older, but ChatGPT came in November 2022.

Olof Månsson:

I was just about to hit you on the fingers for that.

Mattias Fras:

Three years, huh? Yeah, yeah, I know, I know. You saved yourself there; I'll take that. Can we remove that? But ChatGPT was released on November 30, 2022, and in general it started to be adopted maybe mid-2023. So this is so young; it's not that long ago. You can't really expect people to be in Horizon 2 already in general, though of course some companies have been there a long time. AI-first companies have been there for a while.

Henrik Göthberg:

But maybe also chip in here, Olof, because I'm getting a little frustrated with the temperature on LinkedIn and the media conversation around this. Obviously we're going to talk about Horizon 1 and Horizon 2, and I see a huge media circus that doesn't really distinguish between these two things. Or maybe I'm overdoing it; there are two sides to this.

Olof Månsson:

I think in one way there is for sure some level of overselling today, but things also move so fast that by the time they're adopted, it's probably not that overblown anymore. I mean, look back at the first version of Microsoft 365 Copilot, what they sold versus the reality. The discrepancy there was huge. But the improvements have been immense since then.

Henrik Göthberg:

Yeah, but what are you saying? Do you think we can get to enterprise-grade or Horizon 2 type innovation by just waiting out the personal productivity approach?

Olof Månsson:

No, I think enterprise to me implies a different scale. If we're talking enterprise, then by definition it's not just individual improvement. Even if I can become ten times as productive, I don't think that's an enterprise thing to me. To me that would be revolutionizing the whole process. You mentioned the loan process; that's an enterprise thing.

Henrik Göthberg:

Because I made it hard for Teradax. I didn't care about Horizon 1; I've only been trying to do Horizon 2, and it narrows down the field of who is ready for it, but it also drives trying to figure out how we do it. Now is your time, I think.

Mattias Fras:

Now is your time. Maybe. But we're getting there.

Henrik Göthberg:

I'm getting there. But I also think about the difference in understanding what it means to do enterprise-grade innovation, and yeah, we're getting there very fast now, we're maturing very fast.

Mattias Fras:

Some are getting there, I think.

Henrik Göthberg:

No, and the interesting point is that I'm having fantastic conversations with organizations that are, in a way, not data-savvy as a legacy, but who are asking me: we don't want the copilot, we want to know how to industrialize and think about this. We need to get in the game, not to do everything, but we need to learn how to do core workflow innovation. Not to be perfect at once, but actually explicitly stating a Horizon 2 type ambition. Can we start working on that?

Mattias Fras:

In Horizon 1 data is important too, yes. You need access to data, and if the data is not in order, you have less use of it. But in Horizon 2 it's even more so, right? I saw an interesting study where they took a bunch of banks and measured how they had moved in terms of productivity. I can't remember exactly how they measured it, but it was productivity: how much they had moved from the ChatGPT moment to now, relative to each other. And the conclusion was that the banks that had spent a lot of time before ChatGPT getting their data in order had obviously been able to capitalize on that really quickly when the new technology came.

Henrik Göthberg:

That's something to think about. Because with enterprise grade, we're back to data again, right?

Olof Månsson:

I was going to say, Mattias, do you think that is because they had their data in order, or because of the improved data literacy that came with sorting out their data? That's a good one. That's a good one.

Henrik Göthberg:

I didn't get that; you'll have to say it one more time. No, please, I think I got it, but that was too smart for me.

Olof Månsson:

So you basically said that organizations that had sorted out their data saw their productivity increase more, relative to their peers, when they got access to AI tools. And the question is: is that because they had their data in order, or because they had the mindset, had learned how to work with data and how to sort out their data?

Henrik Göthberg:

Oh, okay. Yeah, I don't know. It's probably both, right? And I see how we can go a little bit techy on this. Because on the one hand, sure, you sorted out your data, but in the end that's all compressed tabular data, and to do proper AI you need knowledge. You need to pipe things together, you need a graph, you need context engineering, right? Exactly. You need the access controls. So the point is, just knowing what your data is, that's only half the battle. You also need to know the relationships. So when you've worked heavily with this for five years, sorting out which table joins with what, that is knowledge, and now you have context engineering at your fingertips. And that was your point.

unknown:

Yes.
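The point made just above, that knowing your tables is half the battle and the relationships between them are the other half, can be sketched in a few lines. This is a purely illustrative Python sketch, not Nordea's setup; the table names, join keys, and dict-based graph structure are all invented:

```python
# Hypothetical sketch: turning "tables plus documented join keys" into a small
# knowledge graph, so an agent can retrieve relationships, not just raw rows.
from collections import defaultdict

class KnowledgeGraph:
    """Tables are nodes; documented join relationships are edges."""
    def __init__(self):
        self.edges = defaultdict(list)

    def add_join(self, left_table, right_table, on, note=""):
        # Store the relationship in both directions so either side can be
        # used as an entry point during retrieval.
        self.edges[left_table].append((right_table, on, note))
        self.edges[right_table].append((left_table, on, note))

    def context_for(self, table):
        """Render the relationships around one table as LLM-ready context."""
        lines = [f"Table '{table}' relates to:"]
        for other, on, note in self.edges[table]:
            suffix = f" ({note})" if note else ""
            lines.append(f"- {other} via column '{on}'{suffix}")
        return "\n".join(lines)

kg = KnowledgeGraph()
kg.add_join("customers", "loans", on="customer_id", note="one-to-many")
kg.add_join("loans", "payments", on="loan_id")
print(kg.context_for("loans"))
```

The rendered text is the kind of relationship context that context engineering would place in the model's window alongside the data itself.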

Mattias Fras:

So maybe, this is cool stuff. Good, good question, and now I understand it. It probably has to do with being data savvy: you understand how you make data consumable for AI, how to pipe the data, whatever it is. It's your data literacy. If you have good data literacy, then you're faster to scale.

Henrik Göthberg:

Yeah, but you are putting a very, very interesting spin on data literacy now. And I'm not sure everybody who's doing data literacy courses up the street here would recognize it. Because you are really talking not only about data literacy in the general business intelligence sense, know your analytics or whatever. You're talking about understanding relationships, understanding the difference between knowledge and compressed data. And I think this is data literacy, and I think this is context engineering, and this is what we should be talking about. But I don't really see that in a data literacy program in general. You are way beyond in maturity in terms of how you talk about data literacy. Am I overdoing it? I think you're on the money, but is that what we mean when we say, oh, we have a data literacy program?

Mattias Fras:

Yeah, I guess, just like an AI literacy program, it could mean a lot of things. It's about talking about whatever is most relevant to you. We talk a lot about data products, everybody talks about data products, but what is data? Like, what is it? I thought about this for a long time when the hype came with, what was it called? Data mesh. Thank you. I was deep in that three, four years ago. O'Reilly came out with the book, and I thought then, that's it. And I thought a lot about it, but not much has happened in general since. But it's different in different places, yeah. And maybe that's the point. Maybe you have to make it yours.

Henrik Göthberg:

But you have to make it yours. My data question. I fell into the whole context engineering thing. It's so obvious, but I hadn't really understood it deeply enough until maybe a year, a year and a half ago, with the whole agentic story: how we really need to differentiate between the traditional data, the compressed data, the data points, versus knowledge. Of course we knew about ontologies and knowledge graphs and all that. But for me it has shifted a little bit, how we need to understand this much better, and have different techniques for how we do it. To some degree, when we started doing RAG, and going from RAG to vectors, all of a sudden it became obvious. The digital twin doesn't work on data alone, it needs to have relationships. It's kind of obvious. But have you shifted here? Because right now I'm getting humbled again. I thought, oh, I know data products, so I know everything I need to know for AI. But I need knowledge products, you know. I'm back to being super humble again, on the knowledge dimension. What do you say about all of this?

Olof Månsson:

I mean, I think the field moves fast. So if you're not humble, you're gonna be left behind rather fast.

Henrik Göthberg:

Yeah, because I'm ranting a little bit here. I'm searching myself for what is happening here. We knew data, data products, and all of a sudden there is innovation here with AI, and I think this is the enterprise grade story. When we're getting into Horizon 2, data becomes super important.

Olof Månsson:

But what is data, what is knowledge, and what are the techniques that some of us haven't even touched? I mean, MCP has been all the hype for a year, I guess, since it was released. And with it came new challenges. First it was like, oh, this is the best thing ever. And then people realized, oh wait, this is actually rather token heavy. And all the models, when you talk about knowledge, they can only take so much knowledge before the context is full. Then you need to remove something to put something new in. And if it's filled with protocol overhead or unnecessary stuff, you need to be smart about how you trim that out.
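The trade-off described here, a fixed context window where protocol overhead competes with actual knowledge, can be sketched as a simple token budget. This is a hypothetical illustration; the four-characters-per-token heuristic, the priorities, and the budget are all invented:

```python
# Hypothetical sketch of the "limited context" problem: a fixed token budget,
# where verbose protocol overhead (e.g. full MCP tool schemas) crowds out
# actual knowledge unless it is trimmed.

def rough_tokens(text):
    # Crude heuristic: about 4 characters per token. Real systems use a tokenizer.
    return max(1, len(text) // 4)

def pack_context(items, budget):
    """Greedily keep the highest-priority items that fit in the budget."""
    kept, used = [], 0
    for priority, text in sorted(items, reverse=True):
        cost = rough_tokens(text)
        if used + cost <= budget:
            kept.append(text)
            used += cost
    return kept, used

items = [
    (3, "User question: what is our mortgage rate policy?"),
    (2, "Retrieved policy excerpt: rates are set weekly by the pricing desk."),
    (1, "Full JSON schema for 40 unused tools ..." * 50),  # protocol overhead
]
kept, used = pack_context(items, budget=100)
print(len(kept), "items kept,", used, "tokens used")
```

The low-priority protocol bulk is exactly what gets dropped first, which is the trimming Olof describes.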

Henrik Göthberg:

Well, let's go here, because we're going from chatbots to Horizon 2, which I think is cool. And we're talking about doing proper innovation. So what are your learnings going down this rabbit hole? Let's take protocols and knowledge and context engineering and knowledge graphs. Give us your two cents on that, because this is what I'm trying to get to: these are compound systems. This is where we're innovating right now, and it's not only the model. So can you please help us follow along?

Olof Månsson:

You have limited space and you need to make sure you use it usefully. There's been a lot of interesting development during the autumn, like tool search tools, where you don't give all the tools to the model directly; you let it search for tools so you don't blow up the context window. Or you use code to chain things together instead. I think that's also super interesting. And if we look at tabular data, that can become a lot of data very fast. If you present all of that tabular data, that's probably not what the LLM is actually interested in. It would much rather look at some sort of aggregate metric, or do the analysis. It wants the output, it doesn't want the raw data. And there's been a lot of evolution in data pipelines there during the autumn.
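A minimal sketch of the tool-search pattern mentioned above: rather than injecting every tool definition into the prompt, the model queries a registry and only the matching definitions are loaded into context. The registry, tool names, and keyword scoring are invented for illustration; production systems would typically use embedding search instead:

```python
# Hypothetical tool registry; only matching tools get loaded into the prompt.
TOOL_REGISTRY = {
    "get_fx_rate": "Look up the current exchange rate between two currencies.",
    "create_ticket": "Open a support ticket in the internal ticketing system.",
    "search_confluence": "Search internal Confluence pages for documentation.",
    "aggregate_table": "Compute summary statistics over a tabular dataset.",
}

def search_tools(query, registry=TOOL_REGISTRY, limit=2):
    """Naive keyword overlap; a real system might use embeddings instead."""
    words = set(query.lower().split())
    scored = []
    for name, desc in registry.items():
        overlap = len(words & set(desc.lower().split()))
        if overlap:
            scored.append((overlap, name))
    return [name for _, name in sorted(scored, reverse=True)[:limit]]

# Only these tool definitions would now be added to the context window,
# instead of all four (or all four hundred).
print(search_tools("what is the exchange rate for euros"))
```

The saving is the same as in the token-budget discussion: unused tool schemas never enter the context at all.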

Henrik Göthberg:

So, how could we summarize this? We have some models, small or large, we have some MCP things going on, we have scaffolding and really clever engineering for how we want to do things, and then we have the data, the compressed data, and then the relations, vectors, graphs or whatever. So for me, this is why I think it is several moving parts.

Olof Månsson:

And we haven't even gotten to the sub-agent setup, where you try to divide and conquer the whole problem. Again, you take the whole thing and split it up into smaller parts, because the smaller parts are easier to handle.
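The sub-agent divide-and-conquer idea can be sketched like this, with each "agent" reduced to a stub function so the orchestration shape is visible. Everything here, the agent roles and the task split, is a hypothetical illustration:

```python
# Hypothetical sub-agent setup: an orchestrator splits a task into parts,
# hands each to a specialised "agent" (here just a stub function), and
# chains the results. Each sub-agent sees only a small, focused context.

def retrieval_agent(question):
    return f"[retrieved documents relevant to: {question}]"

def analysis_agent(question, evidence):
    return f"[analysis of {question!r} based on {evidence}]"

def summary_agent(analysis):
    return f"[short answer distilled from {analysis}]"

def orchestrate(question):
    """Divide and conquer: retrieval, then analysis, then summarisation."""
    evidence = retrieval_agent(question)
    analysis = analysis_agent(question, evidence)
    return summary_agent(analysis)

print(orchestrate("How did Q3 lending volumes change?"))
```

In a real system each stub would be its own model call with its own context window, which is exactly why the split helps: no single call has to hold the whole problem.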

Henrik Göthberg:

And then we're coming back to Mattias's question: what's the team here? Because it's obvious to me now that we're getting into several disciplines. Is it one engineer who can be a unicorn around all this, or how do you see it? Most likely not.

Olof Månsson:

I mean, humans also have limited knowledge, to some degree. But the thing is, today, with a lot of the tools that have become available, you can do a lot with few people, if you have the right tools, because they really help you do more.

Henrik Göthberg:

Yeah, and this has to involve a lot of experimentation to stitch together at this point, right? This is not out of the box at all, I think, when you get to this level.

Olof Månsson:

I mean, with a lot of the cool stuff we see out there, people are like, oh, I built this amazing system, I just hooked up this MCP server, and everyone goes, wow, that sounds great. Let me check how many of our internal systems have MCP servers. It's pretty close to zero. But that does not mean that we can't build them.

Henrik Göthberg:

But how do we see the difference in governance? Because we're talking a little bit like, okay, we need to do proper governance. I think you nailed, and to some degree figured out, the pattern for horizon one, right? You can scale that, you can tweak that, but you've done the hard yards to get it to work and you can leverage it. How do you need to think about horizon two governance? Have you gone there in your head? Do you have any use cases that are pushing you? Because now you need governance around the MCP stuff, governance around the context engineering, even multi-agent governance. Have you gone there at all? Because I don't think anyone has, properly. So how do we experiment in this? I'm super humble here.

Olof Månsson:

Yeah, I think there are actually a lot of opportunities here, and of course there are a lot of engineering challenges, but I have a feeling that we can work our way through those. I think we can too. That I'm quite confident about. The most interesting part, and I think this should be right up your alley, Mattias, is that we get back to the responsibility and the ownership and the accountability. If we do Horizon 2, we completely rework a process from scratch. Okay, this is the ideal state, everything just works, a lot of autonomous parts. Who's accountable?

Henrik Göthberg:

When this goes wrong. But that's gonna be the tricky part. No, it's not the tricky part. I mean, of course it's super tricky, but this is my ball game, what I've been doing for the last two years. I think you need to start with the agency of the team, team agency. When you have bad agency around how the workflows are set up, you have systemic stupidity, and good luck bringing AI into that mix. So as you grow into the fundamentals of what the agency of this agent and this multi-agent flow is, the only way we can make that work in an old bank or wherever is by at the same time understanding: who is the domain owner, what is the agency of this workflow, who has ownership of it? Ask those questions and get team agency, human agency, and artificial agency aligned. Then you can understand, okay, within these boundaries I can now work on my decision and data workflows, and I can add agency into this. But if you're not thinking about the agency of the team and the decision mandates at the same time, you're gonna have fights over who owns this, and you're gonna have a blast radius you hadn't thought about.

Olof Månsson:

But do you see where I'm going? Yeah, I see. I think it's interesting also because, to some extent, you draw the parallel that it's just another coworker, or just another team, coming into the existing organization. Or do you disagree with that?

Henrik Göthberg:

Yes and no. I mean, Cassie Kozyrkov, the decision intelligence person, she said that first you have personal productivity, and in the end you have enterprise grade. And when you go enterprise grade, you get back to safety, as she calls it. What I'm talking about now is that even if you theoretically can build an agentic workflow that goes across many different decision mandates, it's gonna be a nightmare to get that done, or to manage or govern it. So the way to learn about this is to start working on team agency for the agentic setup. I think the person is too small a unit. I think we can talk about team functions. And when you put that into the mix: okay, we need cross-disciplinary teams fixing the workflow, fixing the data, fixing this and that, within lending, or within credit decisions, I'm just making it up. You can build a frame around that. It's the interim step where we understand this together. Then, when you've done that, you will figure out: that was a fairly stupid way we cut that agency, we should probably cut these teams differently. But I think it's tricky when the agency of the teams and the mandates are not aligned with whatever technological agency we're putting together. So I don't see how it could work safely without going through that middle step.

Olof Månsson:

Do you see other restrictions on how you put together a team of humans versus a team of agents?

Henrik Göthberg:

No, I haven't thought about that. So, when we talk about proper agency, you need to have purpose, mastery, and autonomy within the team, right? We have cut organizations by division of labor: everybody of the same flavor, business control, over here, everybody of another flavor over there, and now we have ping pong from hell. And I think, like domain-driven design and microservices, we need an agent-oriented view of separation of concerns, and we need mastery within that agency. So then you're already starting to cut up the old views of the organization. I think this needs to happen at the same time. In essence, you can do this virtually, you can have the AI hub, but you then need to look into a domain. But I think this gets really, really tricky with agentic systems if you're not getting the agency right. Does what I'm saying make any sense? Because I think this is hard stuff to imagine, really.

Mattias Fras:

No, I mean, we are discussing these things internally a lot now. In one sense it is similar. If you're an individual, you have a standard operating procedure that you read: this is how I do this job. That's kind of the governance. And if you have a process for how we evaluate loan applications for a certain segment of customers, you have policies and procedures for that. Then we have applications and application owners, you have protocols for how you run applications, protocols for how you run platforms. But there are some new things coming in now with agents. You have agents for individual productivity, like I create a bunch of agents that I use in my daily work, and then you have agents that work more in an integrated workflow. So, back again to the rewiring: I don't know exactly how, but I think you need to work out how the existing governance we have also ensures that we can do this properly. And I also know there are a lot of solutions being developed for this; Microsoft, for example, is developing ways to monitor and govern agents, and other platforms have the same. To some extent it works like workflows with microservices: you need to make sure you have governance around those microservices. We have robots today, 400 robots, and each robot has a human owner who carries the accountability for that specific RPA robot. That's been around for a while, and I think you will have a similar structure in the future, where you have to have clear accountability for the agents.
Here I also want to say that human in the loop is a bit of a dangerous phrase, because human in the loop kind of implies it's safe. But what does it mean to have a human in the loop? How is it applied, right?
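The accountability pattern described for RPA robots, every robot has a named human owner, could carry over to agents as a simple registry. A hypothetical sketch; the fields, the example agent, and the owner address are all invented:

```python
# Hypothetical agent registry: every automated actor is registered with a
# named, accountable human owner, so "who is accountable?" always has an
# answer, and "human in the loop" is spelled out rather than a checkbox.
from dataclasses import dataclass

@dataclass
class RegisteredAgent:
    agent_id: str
    purpose: str
    owner: str          # accountable human, never empty
    human_in_loop: str  # HOW a human reviews it, not just whether

registry = {}

def register(agent):
    # Refuse registration without an accountable owner.
    if not agent.owner:
        raise ValueError(f"{agent.agent_id}: an accountable owner is required")
    registry[agent.agent_id] = agent

register(RegisteredAgent(
    agent_id="loan-doc-summarizer",
    purpose="Summarise loan application documents for case handlers",
    owner="jane.doe@example.com",
    human_in_loop="case handler approves every summary before use",
))
print(registry["loan-doc-summarizer"].owner)
```

Forcing an owner at registration time keeps accountability answerable, and the human_in_loop field makes teams state how review actually happens.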

Henrik Göthberg:

But I mean, I've been going down a data mesh rabbit hole for five years, you know that. And it stems back to Eric Evans and domain-driven design, the ideas behind microservices and all that. And data product owners. Yeah, data product owners, and what we're talking about here is data product owners. It's interesting, because our head of research, Michael Klingwald, who works with me, we were framing and discussing agent-based organization before agentic got all the spotlight, right? And fundamentally, the point is how we describe the way we put agency around teams in an organization. We were already at it: when you're doing domain-driven design, you're starting to build ownership around the product, connected to the business domain, and within that, workflows. So we've been trying to build a lingo to wrap our heads around: what is agentic design? What is agentic design of technology? What does it mean in terms of designing mandates and organization? How can we have one type of organizational model that is not agentic, a matrix or whatever, over here, while we're trying to do something agentic over there? We were at EPFL, the university in Switzerland, saying agency rules. This was one and a half years ago, 2023. We were trying to say: these are design principles for how you design organizational teams, but they also apply, hardcore, to how you build safe and governed agent workflows with a managed blast radius, so to speak. Where do you cut the start and end of an agentic workflow? Do you do one monolithic agentic workflow, or three steps, with ways to look into what they're doing and evaluations on top?
These are fundamental agency questions about how you cut it, right? And I think this becomes super important for both man and machine. Sorry for ranting, you should be talking. But I think this is Horizon 2 stuff. I think it is.

Olof Månsson:

Oh, for sure.

Henrik Göthberg:

Is agentic the main thing? I mean, we're going into future outlook questions now, and I had agentic systems and context engineering as my driving interest, and we ended up having fun in that rabbit hole. Is that one of the big topics you're thinking about when you say Horizon 2? Or are there other things around the corner we haven't thought about? Because this is kind of what I'm imagining right now when we say Horizon 2.

Mattias Fras:

Yeah, and I think another question is: what is Horizon 3, and what is the difference between Horizon 2 and Horizon 3?

Olof Månsson:

I forgot about Horizon 3 too.

Mattias Fras:

Oh, what is that then? Yeah, I mean, it's a way of asking what happens, which we saw a few weeks ago when OpenAI launched apps in ChatGPT. Like, what happens when you do your consumer loan application in ChatGPT?

Henrik Göthberg:

It completely shifts the whole advertising game, completely shifts the whole e-business game.

Mattias Fras:

So, how do you think about that as a bank, or as whoever is selling whatever you're selling?

Henrik Göthberg:

So Horizon 3, then, is when you're thinking about fundamentally different business models coming out of this. We're not fixing workflows in the bank in an agentic way; all of a sudden the whole market logic has flipped somehow. Is that what you mean?

Mattias Fras:

I think so. I think it has to do with business models.

Henrik Göthberg:

Horizon one and two you could define quite sharply, but this one is more blurry. It's definitely there, but how do you put a name on it? Is that where you're at, or do you have a name? I think what you gave now was a good example, but it's hard to frame it yet.

Mattias Fras:

And that's why I think, as an enterprise, 200 years old, consisting of many other banks and constellations, you kind of have to go through one and two to get to three. Or not, but I think it's hard to go straight to two or straight to three, because one and two are where you learn a lot. Do you agree with that, Olof?

Henrik Göthberg:

Because me and Anders Arpteg, who's not here today, have debated this a couple of times. Oh, really? In a different flavor, but I'll tell you. But do you need to go through the hoops? Do you need to go through the levels?

Olof Månsson:

I definitely want to say that for a company like Nordea, yes. I don't see how you could leap from zero to three.

Henrik Göthberg:

But zero to two, then. Why do we need to do the chatbot thing first? Why can't we go straight to the workflows?

Olof Månsson:

I think you can go from zero to two. You just need to bring in people who have probably done one somewhere else.

Henrik Göthberg:

Me and Anders: Anders was always into the idea that you can go to operational AI without going through the BI and data analytics journey. He was saying you can use machine learning for, say, putting a camera somewhere, and do some very narrow operational AI cases. This is five years back. And he's right. The way I thought about it, even at Vattenfall back in the day, was: you know what, we need to do BI, and we need good BI self-service, just to get people curious about the data and asking why questions. And when they ask why questions and start understanding, we get to more advanced questions. And I think I was right too. It's not that you have to, but the art of the possible is to move the goalposts in sync with the adoption and absorption capacity, something like that.

Mattias Fras:

That's why. I mean, do we have any examples of an enterprise that has gone straight there?

Henrik Göthberg:

Not a large one; a small one, yes. A small one can start from a different angle, right? But it's slightly different now, because I was talking about the BI-to-analytics, analytics-to-machine-learning journey, and here we're talking about chatbots and generative AI versus core workflow reengineering. Maybe not completely the same, but the same logic of maturing.

Mattias Fras:

Because my experience, again, I'm the old guy, is that it takes years to change behaviors in large organizations. Some are faster movers, some are slower, but in any organization, getting behavior to change takes time, even, I think, at Google or Microsoft. Like when Microsoft's CEO launched his "don't be a know-it-all, be a learn-it-all". I think he transformed the company, but it took a while, and I think it's the same for us or anyone else. So I think it will take time.

Henrik Göthberg:

I think Goran needs to sort out the AI news, because I need to go to the toilet. I didn't do this properly before we left, and I'm alone.

Goran Cvetanovski:

How many episodes have you done, you said? Well, thank you, Henrik, for giving me the mic. Welcome, it's time for AI news, brought to you by the AIAW Podcast. So, how are we doing? I don't know. I guess you have prepared some news for today. Is there something you would like to share? Yeah, yeah, we talked about it, yeah.

Mattias Fras:

Okay, so let's start with that. Yeah, but how are we doing, do you think?

Goran Cvetanovski:

We're doing great, awesome. I actually have a lot of questions for you, because I think this AI enablement title is popping up rather frequently right now. And I think it's a very important transition role in moving from technology-based innovation to business-led innovation. So we will talk about it later. But now for the AI news. Is there any news you would like to share? Yeah, we are fully live. Cool.

Olof Månsson:

So we read, it was two days ago now, I think. For anyone who has missed it, AWS is having their annual conference, re:Invent, right now; it's a big shebang. And they launched some new language models. We're always excited about new language models, especially ones that we can easily access and experiment with. So they launched their new Nova 2 family of models. Yep. They promise good price, fast inference, and reasoning. We haven't tried them out yet.

Goran Cvetanovski:

Is it available in uh Europe?

Olof Månsson:

I have to be honest, I have not checked. But I hope so; I would be extremely disappointed if not. We have tried out the first generation of Nova models, and they were cheap and fast, yes. And, to be honest, quite useless.

Mattias Fras:

So if these are cheap, fast, and high performing, it can be huge. Yeah, that improvement would be fantastic.

Goran Cvetanovski:

Yeah, it seems like they are available in Europe as well.

Olof Månsson:

Oh, that's fantastic. Yeah, we have to try them out.

Goran Cvetanovski:

Yeah. Matthias, do you have any news?

Mattias Fras:

There's so much. I mean, one thing I mentioned here was the ChatGPT apps. But that's old news, that's weeks ago. Oh, that's eight years ago.

Olof Månsson:

That's more than two months ago, or almost two months.

Mattias Fras:

But I think one interesting piece of recent news was that Anthropic now has this partnership with Microsoft, so their models are becoming available through Microsoft, and I think just recently they also made a partnership with Snowflake, so they're integrating there too. So I guess what we're seeing is that those guys are becoming an integrated part of the platforms that we as enterprises use, but also that they go straight to companies now, right? You see both OpenAI and Anthropic developing value propositions and offering services directly to enterprises, which I think is very interesting. There was some news earlier this week, I think, or at least it was in the news, about BBVA working with OpenAI, and I think there are many others in Europe collaborating now. So there's a lot happening in the market, and you kind of have to think about how things fit together: the hyperscalers and the data platforms. You have OpenAI and Anthropic, and then Microsoft, Google, and AWS, and then Databricks and Snowflake, and it's all becoming a very dynamic ecosystem, with something happening almost every week.

Goran Cvetanovski:

Yeah, super good. Before I give you back the mic, I just want to say: I told you so. You saw that ChatGPT released the shopping module, right? So in the end, it was all about who is gonna get a portion of the Google empire: search and shopping. In the future, you will find most of these big LLM providers going into e-commerce and shopping and so on, because that is where the money is. But now there is Gemini 3. So that came, exactly. So I think Google is actually taking the fight back. There was news about Sam Altman pushing the red button, being a little bit like, oh wait, these guys are not far behind anymore, now it's becoming a little bit serious. And there was another evaluation about how much money they need by 2030 just to sustain the development of OpenAI. And if they don't find a new revenue model that lifts the revenue, I don't think they will make it.

Olof Månsson:

Yeah, I also read about the code red at OpenAI, there are rumors about that, and I thought it was fascinating, given that Google had its code red three years ago.

Goran Cvetanovski:

Two years ago or something like that. But, I mean, of course OpenAI will not disappear, that is not what I'm saying. Although you never know; there have been bigger giants than that, like Kodak, that have gone away. But it's interesting how the dynamics change very fast. So you should not be rooting for anyone at this point in time. Things are changing, so it's very interesting. For me, one question: what is the AI enablement title? If we have a chance, let's go through it as a question a little bit later on, all right? AI adoption, enablement, adoption.

Henrik Göthberg:

Wow. In 172 episodes I've had to take two or three breaks, but never alone. That was problematic, because it's easier when we're prepared. But you know what? Me and Anish had a bet, for like 50 or 60 episodes, about who would be the first one to break character. That one I won.

Goran Cvetanovski:

Yeah.

Henrik Göthberg:

Anyway, I think there is a segue here, back to governance around innovation. Because with what we've been going through here with agentic systems, this is the data, this is the software, and all of that, I've been thinking a lot about: does this mean AI governance, data governance, and software governance need to converge? Even to the point of how we look at those three topics: are they different things, or are they different facets of the same thing? For years, in a large organization, you would have DevOps, practices and someone governing the way you want to manage your software. Then we've got data governance as a huge industry and function in a bank, and the people working with data governance sit close to compliance, so they need to fix things according to whatever the newest regulation is. That is the definition of data governance that lives in a bank. And now we've entered the topic of AI governance. Software governance, data governance, AI governance: they've been living, to some degree, in different parts of the organization, and you can go broader with security and all that. My thesis is not that they are converging. I'm saying they need to be very, very coherent and work as one. As a system, they are one. Have you thought about these things? Because this is where I see it going, with agentic as an example, with Horizon 2 as an example: how we look at governance and those different aspects. I have a few thoughts about this, actually.

Olof Månsson:

Please. And what I think is interesting is that you mentioned DevOps there. To me, in one way, AI governance is similar: if you do normal software development without DevOps, you can do that, but it's probably not going to scale amazingly; it's going to be hard. And it's similar for us, I think. If we try to do AI in the bank without governance, you can do that, but it's not going to scale. So to reach scale, it's definitely needed. And I think you're really talking about the system as a whole.

Henrik Göthberg:

I think so.

Olof Månsson:

And it's about governance of the system. That's the interesting part. There might be an AI component in the system, and that component might require some things that aren't required otherwise, but in the end it's the system that needs the governance. From an engineering perspective, governance is just making sure you build a good, reliable, trustworthy system that does what it's supposed to do, in a secure way.

Henrik Göthberg:

What do you think, Mattias? Because this is not converging, but it's a hell of a lot closer, in terms of understanding each other and working as one, than what we're doing today.

Mattias Fras:

Yeah, I think you have different levels. You have regulation: GDPR, the AI Act, DORA, whatever. Different regulations covering digital, covering data, and so on.

Henrik Göthberg:

So this is the umbrella, right?

Mattias Fras:

That's the umbrella, yes. And then you have your company. We have a risk appetite, and that is defined somehow, and that risk appetite governs the policies and procedures we develop for data, for AI, for other things. That becomes our system in the bank: the risk taxonomy and our guidelines, policies, protocols, and SOPs.

Henrik Göthberg:

But okay, when you say SOP now, have we reached code yet, or are we still in documents?

Mattias Fras:

I mean, policies can potentially be converted into code. But then, say I want to develop a product or a platform. There's a process where you start by doing some kind of risk materiality assessment for that product. You have checklists, whatever you have in your company, where you first assess what the risk of this product is across the different risk vectors, which are all embedded in that process. And then you conclude: you have a high risk in the data area because you're using, for example, PII data, or you land in high risk on the AI side because you're going to use it for something that is considered high risk, or whatever it is. So the regulation and the internal risk taxonomy and structure put the requirements on the process you have internally, and that process needs to have everything embedded. As a user of that process, and this is actually something we're working on, it should be intuitive to navigate from an idea to a pilot to production to scale. So I think it's that process you need to work on, and the more you bake into it, the better. Otherwise you have to go first to the data privacy team, then to the AI experts, then to the software people. You want to avoid that, right?

Henrik Göthberg:

You did a couple of things now that I want to break down and slow down on, because there are two very important topics here. First of all, you started to talk about what I would call governance, or risk management, by design. What I mean by that is the whole idea of shifting left: figuring things out early in the process, from ideation, from development. Shift left comes from the software development lifecycle, and it's about doing things in the right order, when they make sense to be done. What you're highlighting is that you're looking at the process, and when you say process, you mean going from ideation to a prototype to large-scale production. Is that the process you're referring to?

Mattias Fras:

Well, it's a bit more complex. It can be that, yeah, but it can also be that I want to switch on some Gen AI feature on my Adobe platform, or in my application. Now I can switch on this Gen AI feature to generate images or text or whatever. So it can be that I want to create a new product, it can be that I just want to switch something on, or it can be that I want to create an agent.

Henrik Göthberg:

Different angles coming in. But wait: principally it's the same. It's going from idea to production, and the simplest case is switching on a feature that already exists. From a risk perspective, I don't think there's any fundamental difference; it might be a much lighter version. But if you flick that switch and say, oh, we're going to have copilots and everything, then from a risk philosophy, is it the same or not?

Mattias Fras:

I mean, the underlying risk framework is the same, but the residual risk is very different, right? Whenever you go through this process, you need to figure out: what is the new risk I'm taking on, and how do I mitigate it? If I enable a Gen AI feature so that I can search my database, something that helps me find data, that's probably little residual risk. Compared to: now I'm going to build a platform.

Henrik Göthberg:

But here I have such huge respect for the banking sector, because you're talking about risk and discussing these topics on a different level than, say, Scania or manufacturing. You used the phrase residual risk just now. Can you define residual risk? I'm learning here, Mattias.

Mattias Fras:

I do think other companies have it as well, but maybe we are drilled in it. If you introduce a change, that change usually comes with a risk, which means you need to consider potential new risks, and you need to ensure you mitigate those risks so that you end up in a state you can tolerate. And that might be a data risk, an AI risk, a usage risk, whatever.

Henrik Göthberg:

So why does flicking a switch, in this example, carry lower residual risk? I'm trying to learn and understand how you're thinking about this.

Mattias Fras:

Well, when you evaluate a risk, you evaluate the probability and the impact: the probability that it will happen, and the impact it will have if it does. And in the lower-risk cases you will typically have lower impact. Financial impact, reputational impact, whatever it is.
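To make the probability-times-impact logic concrete, here is a minimal sketch. This is an illustrative toy model, not Nordea's actual risk framework; the numbers and the simple multiplicative mitigation term are assumptions for demonstration.

```python
def risk_score(probability: float, impact: float) -> float:
    """Inherent risk on a 0-1 scale: likelihood times severity."""
    return probability * impact

def residual_risk(probability: float, impact: float, mitigation: float) -> float:
    """Risk remaining after controls; mitigation is the fraction of risk removed."""
    return risk_score(probability, impact) * (1.0 - mitigation)

# Flicking on an existing Gen AI feature: low impact, decent controls.
feature_toggle = residual_risk(probability=0.3, impact=0.2, mitigation=0.5)

# Building a new customer-facing platform: higher impact, same controls.
new_platform = residual_risk(probability=0.3, impact=0.9, mitigation=0.5)

# Same underlying framework, very different residual risk.
assert feature_toggle < new_platform
```

The point of the sketch is the one made in the conversation: the framework is identical for a feature toggle and a new platform; only the inputs, and therefore the residual risk, differ.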

Henrik Göthberg:

But a flick of a switch can have a huge risk, by the way, if you miscalculated it. So I'm not sure. Well, it depends on what the use case is about. Because I can imagine a product built from scratch that is fairly safe to put in. Even if I take the RPA perspective: it's an agent, a very dumb agent, but it was an agent. It's rule-based, right?

Mattias Fras:

Yeah, yeah.

Henrik Göthberg:

Yeah. So what I'm saying is: you can put in a product that looks cool on the outside, with multi-agents and all that, but you're putting it into something dead simple. Say I'm using it for categorizing the Christmas cards at Nordea, a very cool workflow to improve the life of the Christmas card delivery. Very little risk.

Mattias Fras:

While the switch I'm flicking could carry reputation risk if you get it wrong. That's what I'm saying. With a customer-facing use case you usually have to be more careful than with an internal one. So level of autonomy and customer-facing are good axes: high level of autonomy plus customer-facing means high risk.

Henrik Göthberg:

Boom, boom.

Mattias Fras:

Yeah, and low level of autonomy plus internal means less risk. Generally.

Henrik Göthberg:

What I really like about this conversation: we've been collaborating a bit with RISE and Scania (you used to work for Scania; we can talk about that later) on how to understand AI Act assessment, what to assess and how to assess it. We can come back to that. But one of the key topics we've been growing into is taking an AI compound-system view and talking about risk vectors. And this is what I'm hearing: the way to look at compliance should be to look at risk. We're ultimately doing all this compliance work, compliance by design, because regulation is there to make us more risk conscious. So take the whole risk-management approach and look at the risk vectors. To have proper innovation and proper governance, yes, compliance matters, but you're much more on the money. What I love to hear is risk management. When you say proper governance, you're actually talking about risk, not compliance.

Mattias Fras:

Yeah. I mean, compliance is second line.

Henrik Göthberg:

They are there to ensure it. Compliance exists to push us to be risk conscious when we aren't.

Mattias Fras:

Yeah, they are there to help ensure that we are where we should be, basically. And if we're not, they will tell us.

Henrik Göthberg:

But for me this is a profound moment, because it's so easy to get hooked up on what the compliance stuff is and that we need to do it. Of course we need to be compliant. But if we treat it as a tick-box exercise, we're missing the point. That was my point earlier: you need to embrace it. And if you want to talk compliance by design, you're actually talking about risk management by design. In order to build something that is compliant by design, you need to be sharper at risk management by design. I think that's what you said.

Mattias Fras:

Yeah, I think I agree. I'm just trying to follow you along.

Henrik Göthberg:

Yeah. So, in order to be both safe and smooth, you need to be really, really good at risk management. There's another metaphor for what we're talking about: the iceberg. When we draw an iceberg, we have the value above the water and the risk underneath. So one way of looking at it is that your risk vectors are ultimately your value vectors too. If you want to build a good system, you can mess up value in the model, in the data, in the UX; I think it's the same picture. With the iceberg analogy: do the right innovation, with the right value above the surface and the risks handled below the surface, and if you do that right, you should be fine with your regulation. This is big stuff.

Olof Månsson:

Yeah, but the tricky part is that it assumes the regulations are clear and well defined.

Henrik Göthberg:

Ah, but then we come to the next-level problem: legal uncertainty and all of that. But let's go there. I had a last question; we're really moving into the future outlook now. We keep going down rabbit holes, but there's some logic here: we're following innovation, and following it in the proper way, which means the proper way of governance, two views on the same thing. What happens when we put a thousand agents in motion? We haven't even started Horizon 2 properly. We're going into Horizon 2, starting with agents, and then imagining Horizon 3. I'm just imagining a shitload of Horizon 2. What are we not doing? How do our ways of innovating and governing need to evolve? We've barely touched Horizon 2, and it could be that there's a lot of value in simply doing a lot of Horizon 2 stuff in a bank. But how does our innovation and governance shift? It's a huge question for me.

Mattias Fras:

The reason I don't quite know how to answer that question, or one reflection at least, is: what is an agent? We have agents. An agent can be two LLMs reasoning about how best to fetch data, or which data to fetch from the database, given the prompt it has been given. That's one kind of agent. Or an agent can be, I don't know, making a credit decision. But let's be a bit precise: what's your definition of an agent?

Henrik Göthberg:

Okay, what's the minimum definition of something we call an agent and not an assistant? For me, it has something to do with autonomy and agency to actually do things. As long as you are prompting it and it's going off and doing something for you, I would probably give it the assistant label, as a way to differentiate.

Mattias Fras:

What about RAG? Is that not an agent?

Henrik Göthberg:

No, a RAG system would be a RAG system. As long as you are prompting it: ChatGPT can never be an agent on its own, right? Because you're simply prompting it and it's giving you responses. It gives you suggestions about what it should do, but you are the one acting. Not even agent mode? Okay, let's go there; maybe agent mode is agent mode. But I think we've had these conversations before: it has something to do with actuation. You give it an instruction at a higher level, and it can go out and actuate things, on its own. This is gray territory, but for me that's agent land compared to assistant land. How do you see it? I'm coming from the actuation angle: autonomy, having some framed autonomy to go out and do stuff.

Mattias Fras:

Yeah. I envision that we will have many agents, maybe thousands of agents, in a few years, but that they will be governed quite thoroughly: you will have agentic systems that do stuff but that also control stuff and ensure you stay in control. At least in our industry, I think you need to have that built in from the start. But it's back to the question again: what is an agent and what isn't? If I build individual service-desk Copilot Studio agents, that's one thing. But if I start to integrate agents into workflows and they start to make maybe small but still real micro-decisions, then I think you will introduce governance and guardrails even there, so that you can feel confident scaling up and still being in control. I think that will be a prerequisite.

Henrik Göthberg:

I fully agree. Let's see what we can learn from what we've already done. You have about 400 bots in production in Nordea, or something like that.

Mattias Fras:

Yeah, I don't know exactly, but we have hundreds, right? Scripts, yeah.

Henrik Göthberg:

So let's not overcomplicate it: scripts that do very repetitive things and aren't dangerous at all. Everybody's doing it. How are we observing, monitoring, or governing them?

Mattias Fras:

Yeah, I think it's pretty straightforward, because they are all rule-based and you oversee them the same way you oversee other applications.

Henrik Göthberg:

One by one, or do you have an overview layer across all the bots? Like an observation panel for all of them?

Mattias Fras:

They sit in different domains, and each bot has a human owner. And you know how you have control rooms where you monitor them, and dashboards. But it's distributed out to some degree, so the people observing and managing the bots know what those bots should be doing. As I said, there are clusters of them: you have a control tower here and a control tower there, and they monitor the robots.
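The oversight model described here, every bot with a human owner and a control tower per domain cluster, can be sketched roughly as follows. All names (bots, domains, owners) are hypothetical examples, not Nordea systems.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Bot:
    name: str
    domain: str
    owner: str          # each bot has a human owner
    healthy: bool = True

bots = [
    Bot("invoice-matcher", domain="finance", owner="anna"),
    Bot("kyc-refresh", domain="compliance", owner="erik", healthy=False),
    Bot("card-dispatch", domain="operations", owner="sara"),
]

# Group by domain: one "control tower" dashboard per cluster of bots.
towers: dict[str, list[Bot]] = defaultdict(list)
for bot in bots:
    towers[bot.domain].append(bot)

# A tower escalates unhealthy bots to their owners, not to a central team.
alerts = [(b.domain, b.name, b.owner)
          for tower in towers.values()
          for b in tower if not b.healthy]
```

The design point is the distributed part: the central layer only observes and routes, while accountability stays with the owning team that knows what the bot should be doing.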

Henrik Göthberg:

So if I imagine, and I'm just asking whether this logic is feasible: okay, with agents we have actuation and all that, so it's way more sophisticated. But what I'm taking from this is that you might need a way to orchestrate all of it. Some observation and monitoring across all bots as a whole, but in the end distributed governance tied to the teams that know what these bots should be doing. And I think that kind of logic would apply to agents too. These are the kinds of things I'm thinking about: what infrastructure, what observation layers, what monitoring layers would we need? Olof, what do you think?

Olof Månsson:

Yeah, at some point it seems reasonable to me that agents would become like another employee or another team. And what do we need for that? Well, you need good onboarding, you need some sort of metric or way to make sure they do what they're supposed to do, and they have a defined set of things they can and cannot do. I guess that's the basics.
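The "agent as a new employee" idea, onboarding with an explicit scope of permitted actions plus a way to measure behavior, could look something like this minimal sketch. Class and action names are purely illustrative assumptions.

```python
class OnboardedAgent:
    """An agent onboarded like an employee: a name, a defined scope, an audit trail."""

    def __init__(self, name: str, allowed_actions: set[str]):
        self.name = name
        self.allowed_actions = allowed_actions
        self.audit_log: list[tuple[str, bool]] = []  # how we "measure" the agent

    def act(self, action: str) -> bool:
        """Check every attempted action against the onboarded scope."""
        permitted = action in self.allowed_actions
        self.audit_log.append((action, permitted))   # every attempt is recorded
        return permitted

intern = OnboardedAgent("summary-agent",
                        allowed_actions={"read_confluence", "draft_summary"})
intern.act("draft_summary")    # within scope: allowed
intern.act("approve_credit")   # outside scope: blocked, but logged
```

Like a junior intern, the agent gets a defined set of things it can and cannot do, and the audit log gives the feedback signal for whether it behaves as intended.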

Henrik Göthberg:

I'm not saying it's going to be easy, but there's a thread here. Did you see Satya Nadella's long interview on YouTube, a week or two ago? He was talking about this. If you haven't, go watch Satya Nadella talking about these topics quite freely, and you can see him thinking: he's going to start charging for agents as users in Microsoft. It makes sense from his side. When you deploy an agent under the coworker metaphor, the question becomes: what tools does that coworker need in order to work in our environment? They need this and this, they need an email account, and so on. And then we monitor and observe them in something like that. So this is one thesis, and it's not just anyone pushing it; this is a hardcore Microsoft play.

Olof Månsson:

We've been talking about this a bit in relation to Claude Code. Everyone is praising it: it's like having a colleague, or multiple colleagues. Then you look at the pricing model, and it's $200 per month. Maybe it's different for enterprises. But if you look at what a software developer costs per month, there's a big gap.

Henrik Göthberg:

There's a big gap, right? So that's one way in. I have another angle I want to test on you, on how to manage 1,000 agents. I'm very much influenced by the data mesh journey I've been on with Scania, where I have friends like Paolo Platter, who is a genius when it comes to data management and distributed architectures. He basically said that generative AI on its own didn't really switch him on; he had his ideas, but it didn't click. Then, as soon as we started talking agentic, about a year or a year and a half ago, he went: huh, we're back to domain-driven design, back to the ideas I was trying to pitch. Because the logic that came with data mesh, that you need a domain and you need ownership around the data, is easily extensible to the agent story. You have the data product, and you have the agentic product in relation to a business workflow, and they in turn build on software. So: we have a data product archetype, we extend it to a RAG archetype, and now we extend it to knowledge products and agentic products. His product vision for this technology is more or less how we manage data products: you define a data product, you put it in the marketplace, it has an SLA. The metaphors and semantics we use for data products, he's extending and reimagining from an agent view. And I think it starts to coincide with this: you need a home for the agents, a place to onboard them, with a development experience that runs from ideation all the way to deployment, just like for a data product.
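The extension being described, reusing the same product archetype (domain, owner, SLA, marketplace entry) for data, knowledge, and agentic products, can be sketched as a single catalog type. The field names and catalog entries are illustrative assumptions loosely inspired by data-product thinking, not any actual Witboost or Nordea schema.

```python
from dataclasses import dataclass

@dataclass
class ProductArchetype:
    name: str
    domain: str          # domain-driven design: every product has a home
    owner: str           # an accountable owning team
    sla: str             # the promise the marketplace entry carries
    kind: str            # "data" | "knowledge" | "agentic"

# One marketplace, one governance model, three kinds of product.
catalog = [
    ProductArchetype("customer-360", "retail", "team-data",
                     "refreshed daily, 99.5%", "data"),
    ProductArchetype("policy-rag", "compliance", "team-kyc",
                     "p95 answer < 2s", "knowledge"),
    ProductArchetype("onboarding-agent", "retail", "team-cx",
                     "p95 task < 5s", "agentic"),
]

agentic = [p.name for p in catalog if p.kind == "agentic"]
```

The point of the sketch: an agentic product is not a new governance category, just a new `kind` in the same archetype, so the existing ownership, SLA, and marketplace machinery carries over.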
So that's another angle on this story. And I have one more angle: Abundly. Do you know who Henrik Kniberg is? Spotify, the Spotify model; did you ever watch that video on YouTube? This is the guy. He worked with Spotify when they sorted that out. He's an agile guru in software, but now he's a co-founder of Abundly. We had him here, and he takes the coworker-onboarding metaphor to its fullest. The way you set up an agent, very basic stuff you can do now, is basically: how would you onboard a junior intern, and how would you give that junior intern feedback so they learn how to do their task? And it kind of works; I've seen examples. He did one with Alexander Nuriena and his second-generation documentary and all that. So I don't know, I think this is very interesting to think about. Where is the home of the agents? Where do they live? How do we onboard them? What you said, Olof, is exactly down this line. Abundly is going that route in one way. Witboost is going more the data-product route. Satya is thinking: how can I sell another software license? But it's interesting, right? Because we get to that kind of computational governance of all this, in some ways. Hard to really imagine, but I don't think it's that far away. And imagine we let all these agents loose, like Excel hell. Can you imagine? We have Excel sheets everywhere, and people popping up agents like Excel sheets. Or macros. Macros in Excel, yeah.

Mattias Fras:

I think that's right. You don't want to end up in macro hell again.

Henrik Göthberg:

So yeah. I think this is a very real risk in a large enterprise.

Olof Månsson:

Oh yeah, for sure. Shadow AI is a real risk.

Henrik Göthberg:

We're going to start closing here. I didn't even realize the clock was running; we tried to finish at seven, and we haven't even gotten to the geopolitical stuff. Geopolitics, that's a small philosophical question for the next two hours. I completely lost track of time, guys. I had too much fun; I should save the beers for afterwards. I have one question that relates to this, and then we'll have a final question. One thing that's been bugging me is: how do we understand different laws and regulations in different parts of the world, in relation to governance and innovation? This goes into the whole discussion about the AI Act in Europe. I've always thought: I wish we could solve it the way they solved it for nuclear, through the UN or something. Do we need a world governing body around how we innovate and govern AI? Because I find it really tricky, both how we innovate and how different markets go in different directions. It's a naive, big question, but have you thought about it? How can we have different regulation in different parts of the world on these topics?

Mattias Fras:

I mean, we are obviously different, right? Was it Kai-Fu Lee who said that China is doing the AI implementation, the US is doing the AI innovation, and Europe is doing the AI regulation?

Henrik Göthberg:

Was that it? Implementation, innovation, regulation.

Mattias Fras:

Yeah. But now I think China is also doing lots of innovation. And in Stockholm there's some stuff happening in innovation too, the new Valhalla with all the companies there, so we are also doing innovation. Of course, GDPR and the AI Act and so on are a hindrance for the hyperscalers when it comes to rolling out their stuff, because they are unsure about how things work. But we are different: in Europe we care about people's privacy. We are maybe old-fashioned in that sense, but we do care, and they don't work like that in China, and not in the US either. So we can't change that; we have those regulations in Europe. And if I step out of my professional role, as a private person, I like that we care about privacy, and I like that we care about not doing stupid things with AI. I don't like the way social media was handled, with no regulation anywhere, and now Australia is banning it for under-16s; let's see how that goes. We don't want to do the same thing with AI as we did with social media: just let it run free and then regulate afterwards. But that's easy to say, of course, and as Olof said earlier, it's hard with these regulations, because the underlying notion behind them is great for the people of Europe, but how they should be implemented, that's the tricky part.

Olof Månsson:

Yeah, that's the uncertainty that makes it hard. We need to know what good looks like. If we know what good looks like, then it's fine.

Henrik Göthberg:

But now you're segueing into my second, bigger question. We've had this conversation on the pod many times, where I end up with the thesis that it's not actually the regulation itself that's the problem; it's the legal uncertainty. When you read the AI Act, I can read it with really critical eyes and then it's all stupid, or I can read it with really positive eyes and then it's all really good. The problem comes when we try to apply the law and there are no harmonized standards, no definition of what good looks like. In my opinion it's a rushed type of regulation. When you're regulating something that's really sharply defined, it's easy: you can put boxes around it. But AI has been so fluid. How do you put regulation in place when you don't really understand what you're regulating? People are trying to regulate something they aren't competent enough to understand, and they've done the directive work without the harmonized-standards work. I don't have a problem with the regulation; I have a problem with the fact that we haven't done the work on harmonized standards. People got off easy, and the real work is only starting now.

unknown:

Yeah.

Henrik Göthberg:

Sorry, that was my rant.

Olof Månsson:

No, I completely agree. We face similar problems internally: we have unclear requirements. In those situations, you talk to one person and they say this is good, and then you talk to another and it's: no, that's crap, that's not good enough, it has to be over here. When we try to do things quickly internally, it comes down to the same thing: if I know what good looks like, then I can solve it.

Henrik Göthberg:

Okay, hallelujah. This is profound, because then we can ask the final proper question: how can Nordea, Sweden, Europe, Scania reduce legal uncertainty to be better at proper innovation? Because I don't think we're good enough at this internally in our enterprises. Sweden is not good enough, Europe is not good enough. We keep putting ourselves in situations of legal uncertainty. What are the mechanisms we need to improve? I think it's fundamentally the same mechanism everywhere.

Olof Månsson:

A tight feedback loop.

Henrik Göthberg:

Ah, feedback loops, I love that. Explain.

Olof Månsson:

Yeah, we need some way for information to flow both top-down and bottom-up, so that the people who go through the process can help make the process smoother for the next one.

Henrik Göthberg:

Oh, you need to read our IP at Dadax; it's all about feedback loops. Systemic stupidity versus agentic intelligence is about feedback loops. It makes sense; this is cybernetics. So, feedback loops, I like that. But is that enough? Because the feedback loop tells you: oh, you haven't sorted this out properly, there's more work to do. So the feedback loop also needs to surface what's missing, what the gap is, and someone has to act on it. Feedback loops with the agency to act on the feedback.

Olof Månsson:

Yeah, for sure a mandate is required, so someone can improve things based on the feedback. And we need to measure the right things as well; that's usually a systemic problem.

Henrik Göthberg:

I think this is fairly profound stuff. Do you have a view on this? Legal uncertainty is always the tricky point, and we have two or three different definitions of it.

Mattias Fras:

Yeah. My experience is that we are fairly good at, and focus a lot on, drawing out very good customer journeys. We should be easy to deal with; we're quite digitally mature compared to other banks, so the customer understands how he or she should navigate. But for us as employees in a big enterprise, the employee journey is usually not something we spend a whole lot of time on. If I'm going to develop something, of course we have tools, and the basics are actually quite straightforward: you need a lawful basis for doing something, you need the right data classification, you need to be allowed to process that data, and a few more things. Those are the things you need to follow. But how that is described and implemented is usually quite complex, both from the external regulatory bodies and from internal experts: group compliance, group legal, model validation. Those policies are quite complex. So we are trying to spend more time thinking about how to navigate as an internal employee. I think that's also something that adds to the difficulty.

Henrik Göthberg:

Legal uncertainty, and adoption as the key navigator, as the way forward. And we stop there. Thank you so much.

Olof Månsson:

Thank you, thank you.