MetaDAMA - Data Management in the Nordics

4#6 - Rasmus Thornberg - Decision Science and AI between Use Case and Product (Eng)

Rasmus Thornberg - Tetra Pak Season 4 Episode 6


«Focusing on the end-result you want, that is where the journey starts.»

Curious about how Decision Science can revolutionize your business? Join us as our guest Rasmus Thornberg from Tetra Pak guides us through his journey of transforming complex ideas into tangible, innovative products.

Aligning AI with business strategies can be a daunting task, especially in conservative industries, but it’s crucial for modern organizations. This episode sheds light on how strategic alignment and adaptability can be game-changers. We dissect the common build-versus-buy dilemma, emphasizing that solutions should focus on value and specific organizational needs. Rasmus's insights bring to life the role of effective communication in bridging the divide between data science and executive decision-making, a vital component in driving meaningful change from the top down.

Learn how to overcome analysis paralysis and foster a learning culture. By focusing on the genuine value added to users, you can ensure that technological barriers don't stall progress. Rasmus shares how to ensure the products you build align perfectly with user needs, creating a winning formula for business transformation.

Here are my key takeaways:
Decision Science

  • You need to understand the cost of error of an ML/AI application
  • Cost of error limits the usability of AI
  • Decision Science is a broader take on Data Science, combining Data Science with Behavioral Science.
  • Decision Science covers cognitive choices that lead to decisions.
  • Decision Science can only work in close proximity to the end user and the product, something that has been a challenge for many.

From Use Case to product

  • Lots of genAI use cases are about personal efficiency, not about improving any specific organizational target.
  • Differentiating between genAI and analytical AI can help to understand what the target is.
  • genAI hype has created interest from many. You can use it as a vessel to talk about other things related to AI or even to push Data Governance.
  • When selecting use cases, think about adoption and how it will affect the organization at large.
  • When planning a use case, find where the uncertainties are and where there is variability in outcomes.
  • It's easy to jump to the HOW when solving business use cases, but you really need to identify the WHY and WHAT first.
  • Analysis paralysis is a real problem when it comes to moving from ideation to action, or from PoC to operations.
  • «Assess your impact all the time.»
  • You need to have a feedback loop and concentrate on the decision making, not the outcome.
  • A good decision is based on the information you had available before you made a decision, not the outcome of the decision.
  • A learning culture is a precondition for better decision making.
  • If you correct your actions just one or two steps at a time, you can still go in the wrong direction. Sometimes you need to go back to start and see your entire progress.
  • The need for speed can lead to directional constraints in your development of solutions.
  • Be aware of measurements and metrics becoming the target.
  • When you build a product, you need to set a threshold for when to decommission it.

Strategic connection

  • The more abstract you get, the higher the value you can create, but the risk also gets bigger.
  • The biggest value we can gain as companies is to adapt our business models to new opportunities.
  • The more organizations go into a plug-and-play mode, the less risk, but also the fewer value opportunities.
  • Industrial organizations live with outdated constraints, especially when it comes to the cost of decision-making.
  • Don't view strategy as a constraint, but rather as a direction that can provide flexibility.

Exploring Decision Science in Data Management

Speaker 1

This is MetaDAMA, a holistic view on data management in the Nordics. Welcome, my name is Winfried and thanks for joining me for this episode of MetaDAMA. Our vision is to promote data management as a profession in the Nordics and show the competencies that we have, and that is the reason I invite Nordic experts in data and information management for a talk. Welcome to MetaDAMA. And we are back already at the sixth episode of season four, and today I have Rasmus Thornberg with me. Rasmus is from Sweden and working for Tetra Pak, maybe a company you have all heard about.

Speaker 1

I talked with someone about Tetra Pak yesterday and she said: I heard the name, I heard it and I read it. I have no idea what they're doing. But brand recognition is there, apparently. So welcome, Rasmus. Thank you, great to be here. Great to have you on the show, and we're going to talk about something really interesting.

Speaker 1

We're going to talk about decision science and how you get from idea to innovation to operation, which is a really interesting topic, not only for the new AI hype that we are still in, but also for operating analytics in general. So new AI use cases are almost everywhere, but how do we make the decision to go with an idea towards an innovative product, towards operating it, and how do we maintain it over time after that? Along that way, there is a whole series of struggles that could await us, and the way Rasmus has approached it is through decision science, and that is a really interesting topic that I want to talk with him about. But before we talk about the topic itself, I think it's interesting to hear more about who Rasmus is. So please introduce yourself.

Speaker 2

Yes, my name is Rasmus Thornberg. I'm an engineering physicist by training, from the university in Lund, where I still live. I've always been a curious person, wanting to understand things and dig deeper and so on, so that's why I'm here.

Speaker 1

Well, I think we have to give a little shout out to Henrik Gullberg. Yes, he introduced us in Stockholm at the Data Innovation Summit through a, I think, almost spontaneous panel debate.

Speaker 2

More or less, yes. And I've been working a bit with Henrik; our collaboration also started out of interest. I listened to his pod and wanted to give some feedback, and from there we started collaborating a bit, and I have enjoyed working with him immensely. Yep, so hope you're listening, Henrik. Fantastic.

Speaker 1

So I think we're going to dive a bit deeper into your role and what the decision science group is working with, but before that, maybe you can tell us a bit more about who you are outside of work. What are your hobbies? What do you do in your free time?

Speaker 2

Right. So, as I said, I'm very curious and I want to do different things. I wouldn't say I have ADHD, but people with ADHD maybe have some traits similar to my hobbies: I change them over time a bit, I have different things I dig deep into. Right now I'm into making, so I'm into 3D printing, laser etching, laser burning, CNC furniture making for my home renovation. That's what takes most of my time outside of family. I am also the father of three, so that's actually where the brunt of my free time goes. I'm not sure we can call it a hobby, but it's my life.

Speaker 1

This is maybe the most frustrating part for my wife, because every two or three months I have a new hobby and I go all in, and then two or three months later there's something new again. I mean, you talked about the curiosity that you have, and maybe that's also the reason why you got interested in data and decision science.

Speaker 2

Yes. So understanding how things work, that curiosity, I kind of made it a professional trait. So understanding how something works, and then the next step is: how do we improve it, how do we tweak it, how do we make it better? Once I understand something, I want to change it to work better, both in the physical world, making stuff better, but also in business. How do we make better decisions? How do we take rational decisions based on data and facts? That's how I ended up where I am.

Speaker 1

I'm thrilled to work in different parts of the business, to understand how different people work and how they're part of the business. How much do people actually need to understand of what's going on? Obviously, there have to be some people who can explain AI functionality and really look under the hood. But what is your take on general public knowledge, or literacy, in AI and data?

Speaker 2

Okay, I'm biased. I want to understand, as I said, but I don't think people necessarily need to understand the technology very deeply. What they need to do is understand the use case, the application of it. They need to be thinking about what the cost of errors is, if I put it like that, because all AI, all machine learning, makes errors from time to time, depending on how it's built up and where it comes from, and these errors are what limits the usability of AI. So the cost of error is central to understand, but you don't need to actually understand the technology in depth. Interesting, yeah.
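
As a minimal sketch of that idea, with invented numbers and a made-up helper, the same error rate can make or break a use case depending on what an error costs:

```python
# Hypothetical numbers only: a back-of-the-envelope look at why the
# cost of error, not the error rate alone, limits the usability of AI.

def net_yearly_value(value_per_decision: float,
                     error_rate: float,
                     cost_per_error: float,
                     decisions_per_year: int) -> float:
    """Net yearly value of automating a decision with an imperfect model."""
    gross = value_per_decision * decisions_per_year
    losses = error_rate * cost_per_error * decisions_per_year
    return gross - losses

# Same model quality (5% errors), very different conclusions:
low_stakes = net_yearly_value(2.0, 0.05, 10.0, 100_000)
high_stakes = net_yearly_value(2.0, 0.05, 500.0, 100_000)
print(low_stakes)    # 150000.0   -> worth automating
print(high_stakes)   # -2300000.0 -> the cost of error kills the use case
```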

Speaker 1

I don't know if I tend to agree or not. I think there's a certain basic literacy that I feel people should have.

Speaker 2

But yes, I do feel that literacy is important, especially when it comes to data collection and also statistics. A basic statistical understanding is really important for everyone, because statistics is not intuitive to humans, so this is something we really need to understand. What I was referring to is more that deeply understanding how large language models and transformers work is maybe not necessary for using them for good, but understanding how data is collected, the uncertainty, and what kind of conclusions you can draw from that from a statistical point of view is really important.

Speaker 1

Thank you, spot on. Something that irritates me a bit, and where I feel there's a certain arrogance in the data world, is to run those comprehensive data literacy or AI literacy programs in organizations where you say, well, everyone in the organization needs to have data literacy, and we run it in a way that is probably way beyond what you actually use on a daily basis. So you have to find the right balance between what basics everyone should know and when we are moving towards more of an expert role.

Speaker 2

Yes, and again, I think it connects back to the use case and the cost of errors, right? Because you need to understand the sources of error for your use case. Exactly.

Speaker 1

Yeah, let's talk a bit more about the decision science group and how you are doing your daily work as a manager for that group at Tetra Pak. What is it you do, what is your goal, and how do you go about that?

Speaker 2

Good question. So we call ourselves decision science because the focus is on the use case, the decision-making. So it's a bigger take than just data science. We combine data science with behavioral science, so behavioral economics, the user side of things in general. You could say decision science is about the use of machine learning and AI for better decision-making. And we mean decision-making in a very technical way, meaning that there is a choice between different options and choosing the best option. That can mean anything from strategic decisions down to: is there a logo on this package or not? It is the things that humans had to do before, cognitive choices, the cognitive processes that lead to choices. That's why we call ourselves decision science.

Speaker 1

What I really enjoy about the description you just gave is that there is really a focus on usability. How is the end product used and adopted by an end user? What change would that mean? And I think that is something that has been missing very much. When we started with GenAI in 2022, everyone had a use case, and we were just starting into kind of a fog of war: you don't know what you're getting into, you don't know what the outcome of the investment you've made actually is, but it's more about speed and getting some kind of use case across, instead of asking: well, what use cases actually provide value?

Speaker 2

I totally agree. And focusing on the end result you want, that's where the journey starts. So figure out: what is it we're trying to change? Now, a lot of the use cases for GenAI are about personal efficiency, not improving a specific organizational target, which makes it a little bit of a different animal in the AI toolbox. I now usually differentiate between generative AI and analytical AI to put the finger on what the target, the goal, is. Interesting.

Speaker 1

I feel like GenAI, and the entire hype around it, at least I use it more as a vessel to get other things across: to talk about AI in a broader perspective, to talk about computer vision, to talk about language processing, stuff like that. Not just GenAI, but also to get buy-in for everything else we have done and bring it under the same umbrella. That's one thing, and the other thing is that I really use it as a vessel for data governance, for pushing to get your input correct.

Speaker 2

I think those are two very good, pragmatic ways to use it. We're all riding on the coattails of the hype and trying to get as much value from it as possible while we have the interest of the general public and of people inside the company. But we now have a higher understanding, and it does drive a lot of possibility for change and opens up opportunities for us.

Speaker 1

Very much so. And one thing that I've realized, and that one of my colleagues has been talking a lot about, is that there's a switch from push to pull, right? We've been pushing AI solutions out to the users, hoping for adoption. Now it's changing: because of the popularity, users come to us and ask, so what can we do with AI? Which gives an entirely different input to how we work with ideation.

Speaker 2

Yes, I agree to a very large extent, but I would also say that for some of the best use cases, where there's most value, we still have a need for push, because pull normally comes for the things that make life easier for the immediate users. For some of these very valuable use cases, it's not the primary user who gains the value from using the tool. It might be someone up the chain, higher up in management maybe, or someone in the logistics chain, from sales to supplier, somewhere in between, and it might not be you or your organization who gains the value. And then we have a big adoption problem because of incentives. But in the case of generative AI and personal efficiency, you immediately get the benefit, and then you don't have a huge adoption problem.

Speaker 1

There's one thing here that I find really interesting. It is often missing, and I myself have struggled with it, but you are spot on: already when you think about what use case to support, what idea to develop, you should think about adoption and how it will affect the organization at large. Really interesting. So how would you start the ideation process? How would you find a good idea, and what are the first things you would do?

Speaker 2

Well, in the ideal world, I would sit down with the business, someone who is responsible for as large a part of the value chain as possible, and identify what the target is, what it is we're trying to improve and, through this process, what the bottlenecks are, especially bottlenecks in terms of decision-making.

Speaker 2

That is, hard decisions that need to be taken and that can be optimized. You could frame them as different kinds of optimization problems in any kind of pipeline or process, both manufacturing processes and business processes. So, identify where there is uncertainty currently, where there is variability in outcomes, and focus on: can we provide information, through the use of analytical AI, that helps that decision-making, or can we even do prescriptive analytics and really take the human bias out of the equation? Then, based on that, we start ideating on how to do that, and also about how to drive adoption of it, and we need to look into whether we need to update our business processes to fit a new solution. We should never look at the technical opportunity in isolation. It always relates to the business process and the user dimension.
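
As a toy illustration of framing a decision as an optimization problem under uncertainty, here is a sketch with invented options, probabilities and payoffs; a real prescriptive-analytics setup would estimate these from data rather than hard-code them:

```python
# Toy sketch: a business decision as a choice among options with
# uncertain outcomes. All options, probabilities and payoffs are made up.

OPTIONS = {
    # option: list of (probability, payoff) outcome scenarios
    "ship_now":      [(0.70, 120.0), (0.30, -40.0)],
    "wait_a_week":   [(0.90,  80.0), (0.10, -10.0)],
    "extra_quality": [(0.95,  60.0), (0.05,   0.0)],
}

def expected_payoff(scenarios):
    return sum(p * payoff for p, payoff in scenarios)

# Prescriptive step: recommend the option with the best expected payoff,
# instead of, say, the most familiar one (the human bias).
best = max(OPTIONS, key=lambda name: expected_payoff(OPTIONS[name]))
for name, scenarios in OPTIONS.items():
    print(f"{name}: expected payoff {expected_payoff(scenarios):.1f}")
print("recommended:", best)
```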

Speaker 1

Two follow-up questions. One is, and you already said it, that it's all dependent on user adoption; it's about what value you actually drive with that idea. But there's another element there, and that's overall business strategy. The thing I have realized over time is that when you work with a use case, when you work with ideation, you try to make it fit the value proposition that you have in your organization. It's easier to go down the operational, tangible path than to connect it to strategy, because it gets abstract and you lose people on the way. So what is the right time to connect the work you're doing to strategy? Or do you think that should be a given before you even start working with ideas?

Speaker 2

It is a tough question to answer, right? Because often the case is: the more abstract you get, or the higher level you get to, the higher the value, but also much higher risk. Probably the biggest value we have as companies is to change our business models based on the new possibilities that AI provides us. But changing our business models in old industrial companies like Tetra Pak and others, that's not easy. We are conservative organizations and slow-moving, and that comes with a huge, huge risk. That doesn't mean we shouldn't do it, but we need to focus on the right issues. On the operational side, the closer we get to plug-and-play in the current way of working, the less risk of developing something that won't be used, but also, most likely, the lower the value, because we are not taking full advantage of new possibilities.

Speaker 2

So AI and machine learning change the constraints of business. The constraints of the business, and how we are organized, are built around the constraints of 20 years ago. Now, with new technology, those constraints have disappeared, but our organizations still live in an environment based on old constraints that no longer exist, especially when it comes to the cost of decision-making. If you build an AI, the cost of decision-making and analysis can be converted from OPEX to CAPEX: you make an investment up front, and then you have basically free decision-making in this area.
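
A back-of-the-envelope sketch of that OPEX-to-CAPEX point, with all figures invented for illustration:

```python
# Invented figures: a one-off model build (CAPEX) replacing a recurring
# per-analysis cost (OPEX). The shape of the comparison is the point.

analyses_per_year = 5_000       # decisions/analyses needed per year
opex_per_analysis = 40.0        # e.g. analyst time per manual analysis
capex_build = 300_000.0         # one-off investment to build the AI
years = 5

manual_cost = opex_per_analysis * analyses_per_year * years
print(f"manual OPEX over {years} years: {manual_cost:,.0f}")  # 1,000,000
print(f"one-off CAPEX:               {capex_build:,.0f}")     # 300,000
# After the build, the marginal cost per decision is close to zero.
```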

Speaker 1

Very interesting, and that leads me to one of the problems I've seen with business strategy, and especially data and AI strategy: that you look at strategy as a frame, or even a constraint, on what you should or should not do, rather than looking at it as something that gives direction. If you look at it as something that gives direction, you have the flexibility in the organization to adapt to ever-changing conditions in the market, in the micro and macro economy, but also internally in the organization. But if you look at it as: this is the plan and we're going to stick to it for the next five to ten years, you're definitely going to go down the rabbit hole. Yes, well put. So I had a second question, and that is about something that I think is obvious, because we talk about it a lot when we talk about project management: we talk about how you connect to your stakeholders, but how do you find good sponsors for your ideas, people that support it at the upper level?

Speaker 2

Yes, this is a very important question to solve. I'm not sure how generic my answers are or how specific they are to my organization, but my experience is that in order to drive change, that needs to come from the top. So if you go for these high-value use cases I talked about, which also require a lot of change, you need to have the support of top management. And yeah, how do we then reach top management and educate them about this? That is tricky, because they are very busy people who take high-level decisions about direction.

Speaker 2

And, I think, data science people are more detail-oriented, so there is a communication gap here. We need some kind of translator role to translate the possibilities of AI to these people. But in fact, we are making a change here. The situation today is very different from the situation five years ago when it comes to top management's, the ELT's, understanding of what we're doing, maybe with the aid of the generative AI hype. But everything done in a company involves a lot of prioritization, choices between different activities we want to focus on. Basically, what we can do is try to point to the objective facts of the advantages of doing decision science compared to alternative activities. Now, that's not always easy. We need to walk the talk.

Speaker 1

I like that: walk the talk. So you have an idea for a new solution, you have found a sponsor to support your idea, you have navigated the organization to find the right people, it's even connected to your strategy. Now you go on to the next step, and you go from that idea to actually building a first prototype, a first use case. How do you go about that? Who do you involve? And then the really basic questions that every organization should have an answer for, or a template: do we buy a solution? Do we build a solution? Stuff like that comes up automatically. So how do you go about that?

Speaker 2

I guess this is an area where all organizations struggle, based on their setup. Tetra Pak is not a data-first company, right? So our IT organization also has a bias towards buying, I would say. You could also say I have a bias towards building, maybe, but I think that's the nature of the beast.

Continuous Improvement in Data Product Management

Speaker 2

When it comes to optimizing individual decision challenges, building often turns out to be the best option. But yes, how do we go about doing that in a large company and still be agile and quick? I don't have great answers. But again, focus on the value and what you're trying to achieve, and then think about whether a generic solution, buying, seems feasible as the better option; if so, we should go for that. Especially when problems are generic, in the sense that all companies are struggling with them, then buying a solution is probably the best way forward. But when it comes to core business that's specific to your industry, building is likely the best option. I'm not sure I answered your question properly now, but maybe that's the answer I have.

Speaker 1

I think you answered it on several levels, and I really enjoyed that. Just to paraphrase at a slightly higher level: from my experience, I've seen organizations end up in, you've heard the term, analysis paralysis, trying to find the best solution. You end up in an endless cycle of running proof of concepts and never really getting to any end product that you can actually ship to your customer, or any end product that you can use in your organization. The thing is, I've talked with a lot of people about that, and some people say: well, just drop the proof of concept. There's no value in a proof of concept. If you have done your work correctly in the ideation phase, if you have picked the right use case, just run with it. And if it fails, well then it fails, but that's okay.

Speaker 2

Yes, I think I mostly agree with that, with the caveat that you need to measure your impact somehow, or assess your impact all the time, and reassess: are we doing the best thing? Maybe not every day, but regularly, and try to be fact-based about your own impact. In some use cases that is extremely hard, because the feedback you get, I mean, whether you are efficient or not, might have a lag of three or four years, and then it's maybe not practical. Then you need to figure out: can I find some leading indicators, even though it is not as easy to prove causation with them? And having said that, this is something I think is interesting to mention: this feedback loop, having a feedback loop, is integral to any kind of decision-making.

Speaker 2

You need to have a feedback loop, and you need to focus on the decision-making, not the outcomes. So: did we make the right decision at a certain point? You can only get better at decision-making by having feedback loops, and feedback loops focusing on the information you had available at the time of the decision-making. A good decision is based on the information you had before the decision, not the outcome you end up with, because that might be coincidental. So having this feedback loop and building expertise, this is also core for all data products.
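
A toy sketch of that distinction, with an invented decision rule and made-up numbers: the decision is judged on the information available beforehand, while the outcome can still be bad luck:

```python
# Toy sketch: judge the decision by the information available beforehand,
# not by how the outcome happened to land. Numbers are invented.
import random

def good_decision(p_success: float, payoff: float, cost: float) -> bool:
    """Act only when the expected value, given what we knew, is positive."""
    return p_success * payoff - cost > 0

# At decision time we only knew: 80% chance of success.
decided_to_act = good_decision(p_success=0.8, payoff=100.0, cost=50.0)

# The outcome arrives later and can still be bad luck:
outcome_ok = random.random() < 0.8

print("good decision:", decided_to_act)  # True: EV was +30 with the info we had
print("good outcome: ", outcome_ok)      # may be False; that is coincidence
```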

Speaker 1

Oh, I love that, I really do, because I've been talking way too much about feedback loops, and I've been moving towards the double feedback loop as well. Right, you don't just go back to what you originally intended, but you also go further back and look at: does it still fit with the overall vision and the overall strategy?

Speaker 2

Yes, very good. I think having a learning organization, not focusing on errors but on learning, so not trying to avoid failures, but trying to build an organization that learns from failures without being afraid of them, I think that's a key for us to move forward. Easier said than done.

Speaker 1

That's very true, and I think that's why I've been talking about the double feedback loop: if you do your learning and you correct yourself just one step at a time, you can still go in the wrong direction. Sometimes you have to go all the way back and look at how the curve develops over time.

Speaker 2

Yeah, and questioning your fundamentals as well.

Speaker 1

One thing that I've seen, especially now, and we talked already about the AI hype that we are still in, even though it's cooling down, is that something happens with quality versus speed. When something like the AI hype happens, organizations jump on the train. They want to pick up speed, they want to have something delivered quickly, they want to get to value quickly, and often you have to do that at a certain cost, right? You do it either at a quality cost, or at a financial cost by having more people, more resources allocated to it. So there's an inherent struggle there, one of the basic struggles we talk about in project management, in situations like the AI hype, where your innovation potential really gets narrowed down into one direction. Have you experienced something like that?

Speaker 2

Yes, definitely, and I would say the most important thing to think about when you make this balancing act is, again, to understand the cost of errors. There is a saying that ready is better than perfect: having something out being used creates more value than developing the perfect product, and that is true in most cases. But if the cost of an error is huge, of course you need to be more careful. That's why the FDA exists, for example. But I think in many cases we are too conservative, too afraid of errors and of deploying something that's not perfect.

Speaker 2

That said, quality is really important. But quality is not a project, it's a process. You need to always think about improving both the quality of the data and the quality of the tools you use, which is an inherent trap in having projects, right? Because projects don't take the future into consideration in that way. Of course, we can't live without projects; we need to make these stepwise improvements as well. But I'm a firm believer in product thinking, where you have continuous improvement and work iteratively. Now, that's not how we work in most cases, but I feel that's a better approach to reaching your business targets.

Speaker 1

Very true, and I'm a bit divided here, to be honest, between Agile, Lean and classic Waterfall. I think you are right: it really depends on what you want to achieve and what your constraints are. If your constraint is that you have to deliver something at a certain point in time, then maybe Waterfall is the right approach, probably the only approach to actually deliver something at that given time. But if you want to deliver what's best for your organization, where you have included most of the learning before you take a decision that has huge implications, then a product approach is probably better.

Speaker 2

I used to work in construction, or close to construction, and being agile with concrete is not the best way of working. That's why they call it concrete.

Speaker 1

So you had your idea, you implemented it as a use case, and now it's actually an operating product that you ship to your customers or use internally. That doesn't end your cycle, right? The product still needs to be maintained, and it still needs to be measured over time to see whether the effect you hoped for actually materialized. And then we come into a new set of struggles, right? How do you take care of data quality, you already mentioned it, in operation? How do you adjust to the value you want to produce? How do you adjust for data capture, provisioning, traceability throughout?

Speaker 2

Yes, this is maybe the most important question to solve, or at least something I struggle with in my organization: once we have delivered a product, that's good, but how do we make sure we gain as much value out of it as possible? And that is done, like you said, by iteratively working on improving it, and having a team that's focused on improving it, both from the business side and from the platform or tech side, whatever you call it; it depends on how your organization is set up. Iteratively improving a tool that's important seems reasonable, right? And if it's not important, then we should maybe not maintain it and decommission it instead. If it's not worth maintaining, we should probably decommission it instead of keeping it alive just barely.

Speaker 1

I like that you almost used improvement and maintenance as synonyms. There's obviously a difference between them, but if you have a product in operation and you need to maintain it, you also need to find ways to improve it, to keep it relevant over time.

Speaker 2

Yes, exactly. Someone once said a dashboard that isn't updated is a useless dashboard: because it's not used, there are no improvement needs, and then you can just decommission that dashboard. I think the same is true for any data product. A vital element of keeping something operative over time is also to have, as you said, users actively adopting the solution and using it.

Speaker 1

How do you get people to adopt something new? We talked a bit about it in the beginning already, where you said that already when you start with your idea, you have to think of adoption and how it will affect the people working with it. But now you are in operations, you have a new product and you want people to get on board. How do you go about it?

Speaker 2

Yes, and this is also one of the key challenges, which I'm focusing on a lot right now, because for some of the products we built, we have slow adoption. The way we have gone about it might not be the gold standard, but what we at least are doing is trying to measure usage. I mean, that's the first step: having data so we can monitor it. But then measuring, or somehow understanding, what the blockers for adoption are, that, I think, is the key.

Speaker 2

So whenever you have an adoption problem, you need to understand: why do we have an adoption problem? Because that can be many, many different things. It can be technology, it can be a business process, it can be the user himself, who doesn't see the value, or maybe feels threatened: I don't want to be replaced, and so on. That varies from product to product, and unless you understand your particular adoption problem, you won't be able to address it properly. So I think that is one thing, but of course, you also need to have an organization that's interested in monitoring and talking about adoption, and not having a fire-and-forget mentality.

Speaker 1

Now, there's one thing that I've seen, sadly enough, way too many times, and I've heard about it also when talking to other professionals on the podcast: we have certain measurements in place, right, to check adoption, to see how the product is used. And then, if it's not used, we look at the measurements and see how we can improve them. And the problem I've seen with that is that you don't really go to the root cause of the problem, and you don't even want to fix the problem anymore, you just want to improve your metric.

Speaker 2

Yes, and this is one of the difficulties with measurements: instead of being an indicator that you can use for understanding the root cause, they become the targets. In physics, you say that as soon as you're measuring something, you're affecting it. That's true in a different way in an organization as well, because as soon as you measure, you might start sub-optimizing in a different way. This is one of the challenges we have, because an organization is made up of a lot of people with different understandings.

Speaker 2

Just agreeing on and understanding a measurement is hard enough, and then understanding when to disregard the measurement, or, in a sense, dig deeper and not optimize only that one thing, that's tricky. So again, I guess it comes back to having a learning organization, but I don't think even that would fix the problem. It's very deeply rooted in humans, I think. Once you have something, you don't want to rock the boat; you have something you understand. People get tired of change, right, and there is a cost of change; any change to something that's established comes at quite a big cost. If you have measurements established and they are not helping you, you have a huge problem, because they are established and change will cost a lot. So I don't have a good answer to that question, right?

Speaker 1

Yeah, it's a tough one. It's a bit like when you drive your car and the light goes on that your battery is low, and then you're like: something has to be wrong with the light, instead of saying something is wrong with the battery. So, you mentioned it already, you talked a bit about decommissioning, right? If you have an unused product, or a product that went out of date, there comes a stage where you need to decommission that product. Now, that doesn't always happen. Sometimes it just resides there in the nothing, in the big bucket of dark data and dark products that were active at a certain point in time but are not used anymore. So decommissioning is really something that often gets forgotten, but I think it's really important, to keep your organization fresh and vital, to have a good policy on how you decommission and to have that end date of products defined.

Speaker 2

Yeah, and I think that's something you need to think about when you first build a product: have an agreement with your organization about at what threshold we should start talking about decommissioning it. Because there is a cost to maintaining it, and, in many ways, there are costs to just having products, and we should focus on paying for what's really useful, not for something that's useful for only one or two people, of course depending on the cost. But yeah.
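
As a sketch of what such an agreed threshold could look like, with invented field names and numbers (`ProductHealth`, `should_decommission` and all thresholds are hypothetical), a periodic check might be:

```python
# Hypothetical decommission check, sketching the threshold idea above.
from dataclasses import dataclass

@dataclass
class ProductHealth:
    monthly_active_users: int
    yearly_value: float        # estimated value delivered per year
    yearly_maintenance: float  # cost of keeping it alive per year

def should_decommission(p: ProductHealth,
                        min_users: int = 10,
                        min_value_ratio: float = 1.5) -> bool:
    """Flag a product when usage or value drops below the agreed threshold."""
    if p.monthly_active_users < min_users:
        return True
    return p.yearly_value < min_value_ratio * p.yearly_maintenance

dashboard = ProductHealth(monthly_active_users=3,
                          yearly_value=20_000, yearly_maintenance=15_000)
print(should_decommission(dashboard))  # True: time to talk about retiring it
```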

Speaker 1

All right, we are at the end of it. Thank you so much for a great conversation. Before we finish, is there any call to action or key takeaway that you want the listeners to hear from you?

Speaker 2

Yes, I would recommend: when you have problems, start with the value. How does this add value to our organization? Start there: what's the value I'm trying to create? Then, as the next step, think about who the users are who need to enable this. And then, once you understand the user and their situation, you can start thinking about how to build the product to help them out. So don't start with the technology, the idea, the solution. Start with the value and then the main blocker, usage. Usage is necessary for value. So that's my takeaway. Fantastic, thank you so much. Thank you, it's been great being here.