Cream City Calculations

Disastrous Data Decisions

Welcome to the Cream City Calculations Podcast. We're three colleagues and friends that love data and like to talk about how data is impacting our lives. I'm Colleen. I'm Frankie. And I'm Sal.

Colleen:

Hello and welcome to another episode of Cream City Calculations. From plans gone wrong to billion-dollar blunders, bad data has a body count and a price tag. Today we're unpacking some of the most disastrous data decisions in history and business, and how you can avoid making them. I thought we'd maybe start our conversation today about bad and disastrous data decisions by talking about our own personal experiences with them. I know we've each worked in data for many years, and I'm wondering if either of you guys have any examples of a time you can remember when you ran into a scenario where something just went way wrong, or just something you see regularly that you know to sort of watch for when you're working with data. Yeah. I think for me, nothing major ever happened where it cost a company billions or millions. Yeah. I probably wouldn't be employed if that happened. Right. But catching things prior to them going out or going into production, when it could have been like, oh, these numbers are decimal points off. Right. Yeah. Or like, that's fishy. Yeah. Or, hey, this is rounding up to the next trillionth or whatever, and it shouldn't be. And I think I've had issues where that's come up, and I think without really good diligence and understanding, kind of like your production stance, I guess, yeah, it makes it a lot easier, or it makes it harder to, I dunno what I'm saying. Yeah, no, sure. What about you, Frankie? Yeah, for me, I can also agree I haven't had anything drastic happen. Yeah. But there have been errors that we've found in production data. Sometimes, you know, it appears to be correct, and we start diving into it a little bit more. And like, thinking about financial services, yeah, some of those wrong data points could have led to disastrous decisions. Yeah. Yeah. But we had really great analysts, and they would see that and be like, hey, that number just doesn't seem right. If I had a dollar for every time I heard somebody say, that doesn't seem quite right, or that seems a little fishy, like, I'd have a lot of dollars. Yes. Right. But it's the people that know that data and what it should look like that I think make the difference. Yeah, for sure. I have a personal example from early in my career, before I even worked with data, when I was still in college. I worked for this company, and they used an Excel spreadsheet as an order form for products, and it would automatically include a percentage to mark up products. And somehow the cells, the formulas, got edited. And so I was covering for a salesperson that was out on vacation, and I sent this quote out. I didn't know any different. He would've normally been this other set of eyes. And they're like, yes, and they signed this deal, and I got like tons of praise, 'cause it was like a million-dollar sale. It was a really big sale for this small company I was working for. Turns out we hardly made any margin, because this calculation in this cell in Excel had gotten modified. Oh my gosh. I mean, anytime now I see anybody using Excel for something, I'm like, no, you need to use something where a user can't accidentally change that, because I remember that from 20-plus years ago. So that could have been disastrous. At least we made some margin on it, but we should have really made a lot more money on that deal. I bet there's a lot of, I wouldn't say disastrous, but bad data decisions happening.
People don't even know, or they never even come to light, 'cause it's just like, oh, I lost a little bit of margin. But no one really recalculated, really noticed it. Yeah. And I think about that too. Okay, that was a calculation. What if your calculation is just slightly off? Right. Or one of the values in your calculation is slightly off for some reason. Yeah. Or if you hard-code a number in and forget to go change it when you need to update it. Yeah. Yep. Yeah. That's always a common issue I've seen. Yep. And I think as you get more and more data, it's harder to find those little number errors, 'cause you're using so much data. You're like, oh, I think it looks fine. But it is amazing, the human aspect of it. We're really good at identifying anomalies. Yeah. And seeing where patterns are off. Right? Right. 'Cause you can feel that the pattern's off and you're like, okay, I'm not sure exactly how, but there's something wrong here. Yeah. Yeah. Yeah. Ironically though, I've also had experiences where, and this is maybe not necessarily data quality, but more driven by the human, where they want the data to say what they want it to say. Yeah. Oh yeah. And they try to manipulate it so that it can say what they want it to. Yeah. And that's always been a huge issue. Especially, like, I've done some analyses around the number of employees that are necessary for certain workloads. And that's always really controversial, 'cause they don't want to have to get rid of people. And I completely understand. I wouldn't wanna get rid of people either. Yeah. But if the data's telling you that you don't have enough work for those people, it's something you have to consider, and either add other workloads or figure it out, but they never want to do it. How do you manage that? Or how do you kind of navigate it, when either a client or maybe someone that you're working with or for asks you to do that? How do you manage it from, hey, I know this is not a hundred percent right, but you gotta give them what they want, but yet still give them the right answers? Yeah, I mean, for me it depends on the situation. If it was consulting and doing workforce analytics, like, I am giving them the data and my analysis, and the decision is up to them on how they want to take that information and what they wanna do with it. If it's a company that I am working for, I'm much more willing to put up a fight, because it is my job to, yeah, you know, represent the company and do what's right for analytics. So like in my experience at this time, I was actually working for the company, and I went to my manager and got assistance, and, you know, not pushing my agenda, but pushing the data, defending the data, and driving towards a more accurate solution. That's good. Yeah. So to loop back to some of the articles that we looked at, the Monte Carlo Data one: five examples of bad data quality in business. Yeah. Yeah, these were good examples. Yeah, they were great. So I think one of them that kinda hits close to all of us is false pricing data, and the amount of revenue that is lost because they set the wrong price or they miscalculated the price. As consumers, sometimes it's really good for us. Yeah. Right, right. But from the business side, that can be extremely detrimental to a company. It could probably put 'em out of business if they don't have enough revenue to cover that. Yeah. Another example here we will talk about is Equifax. And I love this one.
This one's very fun for me. Yeah. So Equifax, between March 17th and April 6th, 2022, was producing inaccurate credit scores for millions of consumers. So as these consumers apply for auto loans, mortgages, credit cards, Equifax provides data to some of those major lenders on that person's credit score, so that they can make a better decision on whether they should approve the loan or decline the loan. And it also affects the interest rate that goes with that loan, 'cause the higher your credit score, the lower your interest rate. So given Equifax's inaccurate credit scores, they had some scores that were way higher than they should be and some that were way lower than they should be. And so the banks that were providing loans were making inaccurate decisions, giving loans to people who shouldn't get them and declining people who should have gotten those loans. And Equifax was actually sued, because there was one person who had a credit score that was apparently 130 points lower than what it should have been. Wow. And they were declined for the loan that they were trying to get. It was an auto loan, so there was a class action lawsuit there. Equifax agreed to a $700 million settlement, and they were charged with negligent security practices. So, you know, not having good quality data and positive data governance can be drastically impactful. I mean, that's a lot of money. And their stock price fell about 5%, which is honestly less than I would've anticipated. But still, even so, it's a good chunk to fall. What's surprising is, how did they find that? I don't know if I would know mine was 20 points off. Maybe a hundred, but I wouldn't check it enough. Like, it must have been the people with the drastic differences that had kind of a baseline of where they should have been. Or if you're looking to get a loan, you kind of know more about your credit scores at that time. Yeah, I think that's true. Yeah. You know, you would know kind of ballpark where you fall, or a lot of people do anyway, and if they get declined, I could see that being like, oh, wait, what? And then they go look again, you know, a couple days later, and it's drastically different. Then they can tell, hey, there was a problem here. That's probably why I got declined, or whatever. Well, and I don't know about you guys, but I check my credit often to make sure there's nothing on there that shouldn't be. 'Cause I've had an experience where I had something on my credit report that shouldn't have been there, uh-huh, and it was like a loan that said I had defaulted on it, like completely didn't pay any of it. And I was like, that's not right. Yeah. And this is when I was trying to get my first home loan and was having to go through it. And so I checked my credit score probably once a week to make sure. Oh wow. Okay. That's probably just from a little PTSD, but I do think it's interesting too, like, I mean, each credit company has such different algorithms. And I always wonder about that, like when you have such large differences, like sometimes I'll see like a 50-point difference between the different companies, and I'm like, something's gotta be off with that algorithm. But yeah. But I never do anything about it. I've never looked into whether there was an issue. I mean, I used Equifax in 2022. Maybe I should have had issues with this. Yeah. What's cool is now it's so much easier to see your credit score. Yeah.
I remember years ago it was difficult to know; you had to like ping one of these three credit reporting places to get your credit score. Now almost every credit card and bank account makes that available to you. And it used to sort of ding you, like, anytime you queried to see what your credit score was, whether it was a bank doing it on your behalf when you're applying for a loan, or it was yourself checking on it, that counted as like a ding on your credit. Yep. And so back in the day, people were really hesitant to do that, because you wouldn't wanna hurt your own credit just by checking on it. And so I love that it's so much easier to see that now. Every time I log into my online banking, here's your credit score, and it's cool. Thank you. Thanks. Yeah, credit scores are wild. I know they're completely made-up numbers anyway, but right. Just a side tangent there. Right, right. Maybe we'll have to talk about bad algorithms one day. Here's another interesting one from the same article. Samsung's $105 billion, they're calling it, fat-finger data entry error. In April 2018, Samsung Securities accidentally distributed shares worth $105 billion to employees, 30 times more shares than the actual number of total outstanding shares. Do you think there could have been, I mean, there should have been somebody in there going, yeah, we don't have that many shares. How are we, yeah, I don't even know how that gets out. How do you make an error on that scale? It says the mishap occurred when an employee made a simple mistake. Instead of paying out a dividend worth 2.8 billion won, they mixed up their keystrokes and entered shares instead of won into a computer. So as a result, 2.8 billion shares were issued to employees in a stock ownership plan. Oops. It says it took 37 minutes for the firm to realize what happened. Yeah, that's, yeah. Oh, that's a bad one. Wiped out about 300 million in market value for them. Well, yeah. Did you guys catch that 16 employees in that 37 minutes sold 5 million shares worth 187 million? Yeah. Like, good for them. Yeah. Good for you for finding that; those are the people watching their credit scores. Didn't they have to realize? They had to have known, but, I mean, yeah, I guess if you get that and you sell it, like, what are they gonna do? Yeah. I don't know. I mean, it's legit, like, you're not doing anything wrong. Yeah. You were given these shares. Yeah. Thank you. I can't believe like a legal team didn't review it. Nothing, like, sorry. Like, why was it just somebody entered that in and was like, okay, we're good? It just takes one person to hit enter and, just, okay. Yeah. It definitely seems like one person having that much control is a problem. Yeah. Yeah. There should be some chain of approvals there. Yeah. Yeah. And that dropped their stock 12%, wiping 300 million off its market value. So, a much bigger impact than the Equifax one. So from your guys' history of working in data, you see some of these examples. What are some of the ways that you think, that you've seen, that companies should be putting in place the governance, the structure? What have you seen work that you're thinking maybe these companies did not do? I mean, first and foremost, the simplest thing is to have more than one person reviewing the data. Yeah. I think especially too, if you're working for a company that's public, before you release any numbers, they should be going through multiple, yeah, people for review.
And then once you come up with a formula, your calculation, put it in such a way that you cannot edit that calculation, or you'll get the scenario like Frankie, you know, talked about earlier, where people wanna manipulate it to make it look like a certain thing. No, the numbers are the numbers. But I think that's the most basic thing you can do, even if it's not going out externally: have multiple people review that. Yep. Yeah. I mean, even myself, like, I review, when I'm putting slides together on information, especially around sales, I review it like 50 times. I swear I go through it and go through it, and then the next day I'll come back and go through it again. Yeah. Just to make sure I didn't make some sort of small data error or calculation error, 'cause I was just using my calculator, just going through this, and I like to be a little bit cautious around my own mathematical ability. Yeah. Yeah. Like for us, there's a lot of companies that use change management systems from a data perspective, to make sure that it's getting a second pair of eyes, maybe even a third or fourth pair of eyes on it. But it's funny, 'cause the business side doesn't really have a lot of that. And so that's probably where a lot of these fat-finger kind of entries are coming from, that business side. So a good practice for any analyst or any person that is entering this stuff is to make sure that you're double-checking it, make sure you're having a peer review it in some way. Yeah. For sure. Sure. There's one other case in this article I wanted to bring up, because, I don't know, it kind of just made me laugh. It's not funny at all. But this article also mentions Public Health England's unreported COVID-19 cases. The issue there was that they were using an old version of Excel, and Excel spreadsheets in this old version could only contain 65,000 rows of data. Don't use Excel for your database, people. Yeah. First and foremost, that's not a database. That was wild when I read that. This was 2020, people, come on. This was 2020, they were reporting their COVID cases, storing the data in an Excel spreadsheet. But it was the old .xls version, not the .xlsx. Still, either way, don't do that. I mean, I think you should also just know the limitations of any software that you are using. Yep. For things like that, it can only contain so many rows. I don't know of another software tool that has a limitation like that. Excel now only has about a million rows. A million. So it's still a limitation. Yep. That's also true. But you should know the limitations, or kind of the gotchas, of any piece of software that you use. Yeah. Yeah. I mean, just the impact of a miscalculation. And again, a second pair of eyes probably wouldn't catch that, because they'd be like, oh, this is the number, that's what it is. Especially 'cause Excel seems like it can go farther, like scrolling-wise, but it won't add any more. I think having that, and understanding the system that you're working with, yeah, is really important. Understanding proper data management would help with a ton of this information. Or misinformation. And I feel like there's sort of data governance on two sides, right? There's data governance to standardize your data, but then there's also, did we receive all of the data that we should be receiving? Yeah. And there you need to sort of have checks and balances too, to easily identify when data is missing.
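For anyone who wants to see what those "did we get everything" checks can look like in practice, here's a minimal Python sketch, assuming a hypothetical daily CSV feed. The file name, the 10,000-row expectation, and the account_number column are made-up values for illustration only, not anything from the episode.

```python
import pandas as pd

# All of these values are made up for illustration: a hypothetical daily
# feed with an expected volume and one required key field.
EXPECTED_ROWS = 10_000
REQUIRED_KEY = "account_number"

def validate_daily_file(path: str) -> list[str]:
    """Return a list of problems found in yesterday's file, empty if it looks OK."""
    try:
        df = pd.read_csv(path)
    except FileNotFoundError:
        return [f"{path} was never received"]

    problems = []

    # Completeness: did we get roughly the volume the sender said to expect?
    if len(df) < EXPECTED_ROWS * 0.95:
        problems.append(f"expected ~{EXPECTED_ROWS} rows, got {len(df)}")

    # Key-field coverage: a join key that is mostly null will silently drop
    # records when the data gets joined and reported on downstream.
    if REQUIRED_KEY not in df.columns:
        problems.append(f"column '{REQUIRED_KEY}' is missing entirely")
    else:
        coverage = df[REQUIRED_KEY].notna().mean()
        if coverage < 0.99:
            problems.append(f"only {coverage:.0%} of rows have {REQUIRED_KEY}")

    return problems

if __name__ == "__main__":
    for issue in validate_daily_file("daily_orders_2024-06-01.csv"):
        print("ALERT:", issue)
```

The thresholds here are arbitrary; the point is simply that the check runs every day and raises an alert before anyone reports on the data.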
Like, how are you gonna make sure that you received the files from yesterday? Yeah. And then in that case, if you are receiving files, did we get the entire file? If they're telling us there should be 10,000 rows and we got 9,000, you need something in place to catch the fact that you did not get all of yesterday's data. Yeah. Or maybe there's key fields you absolutely need, like an account number. Well, if only 80% of your rows have an account number, then you're missing data. And that could really affect how that data is joined together and reported on further down the road. Yeah. So I have a question for you guys. It's a little off script, but what AI data disasters have you seen recently? Because today we're starting to see AI being used quite frequently. Yeah. I have a really good example, and it's not exactly disastrous, but it was in a community forum. Somebody had taken a question. In the IT world, there's community forums where people can post a question and say, hey, I'm running into this issue, does anybody know how to resolve this? And other people who've solved that problem can come in and say, yeah, you gotta hit Ctrl+X or whatever. You know, they'll write out the solution. And so I was doing a web search to find a solution. I thought, cool, here's an article. Somebody already asked this question, and it was answered. And I'm reading through this answer and I'm like, that doesn't make sense. Like, that option isn't even available in this scenario. And I happened to scroll down a little bit more, and somebody said, hey, so-and-so, you know, person who answered the question, please delete this, and make sure you mark your answers as generated by AI if that's what you're going to do. And this person had put the original question into, you know, ChatGPT or something and just pasted in whatever ChatGPT gave them, and it was completely wrong. And I just happened to know, like I said, when I read this solution, I'm like, that option does not exist in that scenario. What are you talking about? I thought I was wrong at first. My initial thought was, oh, I must be wrong. Yeah. No, I was not. But I've seen things like that, where people just want to seem smart or whatever, and they wanna post a bunch of answers in forums online. Yeah. So I don't know that that's exactly disastrous, but it could be for somebody who doesn't know and gets bad results. So I have an example that I'm gonna speak to quick, 'cause I think it's hilarious. So, McDonald's ended their AI experiment after drive-through ordering blunders. There's a TikTok video that featured two people who were pleading with the AI to stop it from adding more chicken nuggets, and it just kept adding more and more. They had 260 orders of chicken nuggets in their order. And this was in 2024, so it's not that far back, that McDonald's decided it was gonna end their partnership with IBM and shut down these tests after this was happening. There was more to it than just this one; it just happened to get really popular with the whole TikTok video. Yeah, but it's hilarious. Yeah. I think there's been a couple of them. Even Grok. Yeah, actually, I think an article came out recently around Grok possibly getting a government contract, and then it ended up generating antisemitic, yeah, messages or responses, which cost them a government contract, which is worth probably billions. Billions of dollars. Billions.
Yeah. Yeah. And so just the amount of error that can still happen with that is pretty scary, but also, in that case, kind of funny. I love the one sub-headline in that article, Frankie: AI coding tool wipes out production database and lies about it. Is it an intern? Yeah. Yeah. Is this CrowdStrike? I'm just kidding. Yeah, right. Yeah. I mean, there's another one here where Air Canada had to pay damages for chatbot lies. So it was a virtual assistant, and it gave a person incorrect information around bereavement discounts. It told him that he could purchase a ticket from Vancouver to Toronto and apply for the bereavement discount within 90 days of the purchase. So after that he went ahead and he bought the ticket, and then the return flight as well. And then he submitted his refund claim, and the airline turned him down, saying that it couldn't be claimed after the tickets had been purchased. So they ended up having to pay the full price of the tickets instead of just the discount. So, yeah, I mean, it seems like a small error, right? A little bit of inaccuracy in the algorithm or in the underlying views. But it's so important, especially in AI, to have semantic views, yeah, that support all the questions that your users could be asking. And I mean, I've seen that a lot lately, 'cause I've been doing a lot of work with AI, where people don't even know what a semantic view is, but they wanna do AI. It's like, we need to take a step back and start thinking about the business terminology when you're thinking about a business chatbot, or just the human language, where, you know, every word could be said as another word, right? So like sales, revenue, your net sales, all these different terms that you can use, and you need to be able to help the AI bot understand how to translate from word to word, right? Translate what a person says into actual, yep, English almost, you know. I'm kind of working on very similar things. So, one of the main things that we look at is, hey, there's a lot of upstream stuff that needs to be right. First of all, you have to have quality data, yeah, for AI or any of that to go off of. Right. And so if you're making decisions based on an AI chatbot or whatever, an LLM telling you information, it better be referencing good information. Right. Right. The other thing is having cataloging, having descriptions, and having all that built out. All that metadata. All the metadata, very important, built out. Yeah. So that can feed into your semantic views, so that the LLM can understand how to reference this information. What is the join? One faulty join could make a huge difference. And then along with that, making sure that you're monitoring on the other side of it. Hey, let's observe these LLMs. Where are they? Are they drifting from what the truth should be? Are we verifying the queries that it's actually running? I think it's extremely important, and I know there's been a couple cases where, like, quant shops have done something like this, built out algorithms to do trades and automatically traded with them, and it cost them billions. Yeah. Yeah. And you know what? What people aren't doing, that they should be doing, is training their model to say, I don't know. Yeah. 'Cause I don't know about you guys, but I think about my human decisions, and like, sometimes I think I know the answer, I say it, find out that I was wrong, and then have to go back on what I said. Yeah. So, like, my own poor data decisions. Right, right.
And so it's really hard to go back on what you said, yeah, if you thought you knew it. It's always super challenging. And so having your AI bots be able to say, hey, I don't know the answer to this question, please talk to one of our representatives, or something like that, right? Yeah. You need to be able to train your bot to say that, because I don't think you can discount what that does to your reputation. So if you're this airline and your chatbot is giving out bad information, not only can you get sued and be out more money than just the cost of the ticket, you've now damaged your own reputation, because people have this perception that your airline is, yep, shifty, you know, like you're doing questionable things. I don't want to fly that airline. And so you could be out a lot of revenue, and it's not always so easy to see how much that could affect your company. Right. I would also say, on the flip side, using AI to help QA, yeah, quality-audit, like, your data or your information is actually a really good practice; it can help you identify issues. Obviously, always have the human oversight in that, because again, it can't think the way a person does a lot of times, so some of this stuff is not gonna work out. But make sure that you can use AI to help do it, but also have that human element to it. What do you think, like, what would you recommend? Let's say you are a company, you've got just a really simple chat interface so that people can ask the most basic questions. How do you think people, or, you know, people who work at these companies, really, I should say, what's best practice as far as how they can make sure that those people elements are involved where it's important or necessary, but also letting that chat interface do most of the work? Which is the reason we have them anyway. Yeah. What's a good balance for that? Or what do you recommend? I think a couple things. AI is not a hundred percent right. Right. So one thing is, as these things start to do agentic work, the error escalates. If each step is 95% correct, that 5% error compounds at every step, so after ten steps, for example, you're only around 60% correct end to end, which is a significant amount. So let's not have it make every decision; put human insight into different parts of that workflow. But also make sure that you're doing proper logging of the data and having proper reviews of those logs. And then, honestly, I'm a big advocate of having a small cohort of reviewers that will go in and ask questions of the system, and just making sure that there's no bias, or if there is bias, identifying that and making sure that you're tuning against it. Yeah. Yeah. I mean, and this is another interesting use case for AI: using AI to test your AI. Yeah. So I'm thinking about, like, when you have a question that you wanna ask of it, or that you think your users are gonna ask commonly, ask AI to rephrase it in eight different ways or ten different ways, and then ask it every single way and see what the answer is, and compare, because you'll find out a lot about how your system is gonna answer based upon asking it the same question rephrased. Yeah. 'Cause I mean, AI and LLMs only work on probability, so they're gonna try to pick, most likely, the highest-probability next word, and that is not always right. It's not like a human thought where they're thinking through the logic. There's no human logic there. Yeah. It's learned human logic. Yeah.
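For anyone who wants to try that "ask it ten different ways" test, here's a minimal sketch. The `ask` function is a stand-in for whatever wraps your own chatbot, the bereavement-fare phrasings are made up for illustration, and the exact-match comparison is only there to keep the sketch short; in practice a human reviewer or a judge model decides whether two answers really agree.

```python
from collections import Counter
from typing import Callable

def consistency_check(ask: Callable[[str], str], phrasings: list[str]) -> None:
    """Ask the same question several ways and flag phrasings whose answers disagree.

    `ask` is a placeholder for whatever function wraps your chatbot: it takes a
    prompt string and returns the bot's answer. The phrasings could be written
    by hand or generated by the model itself.
    """
    answers = [ask(q) for q in phrasings]

    # Exact-match comparison after normalization keeps the sketch short;
    # real reviews would judge semantic agreement, not string equality.
    tally = Counter(a.strip().lower() for a in answers)
    majority_answer, count = tally.most_common(1)[0]
    print(f"{count}/{len(answers)} phrasings agree on: {majority_answer!r}")

    for question, answer in zip(phrasings, answers):
        if answer.strip().lower() != majority_answer:
            print(f"Needs review -- Q: {question!r} -> A: {answer!r}")

if __name__ == "__main__":
    # A deliberately inconsistent toy bot so the sketch runs on its own.
    def toy_bot(prompt: str) -> str:
        return "Yes, within 90 days" if "retroactively" in prompt else "No"

    consistency_check(toy_bot, [
        "Can I get a bereavement discount after I buy the ticket?",
        "Is the bereavement fare applicable after purchase?",
        "Can I apply the bereavement discount retroactively?",
    ])
```

The value is less in the code than in the habit: any question where the rephrasings disagree is exactly the kind of question that should be routed to a human representative.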
I have to say, the CIO article, Frankie, has got some really great headlines. Yeah. Aren't they? They're juicy. Chicago Sun-Times, Philadelphia Inquirer publish summer reading list of fake books. Yeah. Basically it was like completely made up, books by authors that just do not exist. What's the other one? New York City AI chatbot encourages business owners to break the law. Yeah, that's a good one. Uh-huh. We'll put all these articles in our notes for the show, if you'd like to read them. Yeah. Part of this is, like, a lot of these AI things, okay, when you're on Instagram or TikTok, are you seeing a lot more of it and you're like, is this real or is this fake? Oh my gosh. Yeah. There's so much more that you have to, like, know how to identify. I was just talking about that with Clay, and I'm like, I mean, I see all this stuff and I'm like, whoa, something terrible happened. Yeah. And then even with Wisconsin having that flooding recently, some of the videos are definitely not real. Yeah. Like, there have been multiple videos and some of them are real, but yeah, there was one that was gushing water and, like, this whole side of the hill collapsed in. And I'm like, we don't even have hills like that, so that can't be it. But I think it's especially hard in a situation like that, because you are seeing images that are wild. Yeah, city streets where cars are underwater to the top of their hood. That was a real thing that happened. Yeah, but when you're kind of in that state of shock and you're reviewing media and all these images, I think it's harder for your brain to then pick out the ones that really are anomalies, because you're already so shocked by what's happening. I think that can be really dangerous. Yeah. I think another part of it is, like, decisions that are being made based on false images like that. Yeah. Yeah. If people are scared to invest in certain companies, because they're like, oh my God, look at this image, that happened and they're part of this, and you're like, no, that's completely fake. Really good point. Yeah. I think we're definitely in an era now where, just as consumers of media, we need to be aware of what AI is and how it can be depicted. You know, there's images, there's fake articles, there's articles where people clearly used AI to write their content. I don't think you can think of it anymore as, well, I don't work in IT, I don't need to know about that. I think every person needs to know about it, because it's already so ingrained in our culture. From a data perspective and from an AI perspective, who do you think is responsible for making sure? Is the government responsible for putting in regulations that limit AI use in certain patterns or make it illegal? Like, what do you guys think? How could you enforce it? Yeah. It could be incredibly hard, but something needs to change. Yeah. Something, right? Right now it's relatively easy enough to figure out what's fake and what's not, but when it gets really good... Yeah. I was just gonna give the example, like, you usually would know a picture of a person was AI-generated, yeah, if their hands were weird. Yeah, if they had six fingers or four fingers or three hands or something like that. But that's fixed now.
I was gonna say that's probably already resolved. And should it be up to the AI company? You know, should it be on OpenAI to put in there that this has been, like, watermarked, put together by AI in some way? Or, yeah, exactly, like how a photographer watermarks their images. Should it be that OpenAI has a disclosure that this was generated by OpenAI? Yeah. Yeah. I mean, but that might need to be put in regulation by the government. Yeah. That's where it is, sure. And I can see people, too, you know, you've got an image now that's got a watermark on it. Couldn't you just re-upload the image and say, please remove this watermark? Yep. Yeah, they'd have to build in some sort of something in the backend, like in the code: do not remove watermarks. Like, you think about it too, that photographers and stuff, if they put their images out there with a watermark, then you can just remove that, and is there a legal issue there? Oh yeah, definitely a legal issue. I do think that there needs to be some regulation to it, because I really believe that it can go too far. Yeah. When people are making decisions based on this information, yeah, it could be detrimental. Yeah, I would even say the fact that elections and different things can be manipulated based on this. Oh yeah. And people, if you're not well versed in it, could easily be manipulated into the wrong truth. Yeah. Yeah. That's a very good point. I like how you said the wrong truth. Yeah. They could be easily manipulated by the wrong truth. Yeah. Crazy stuff. Data breaches, I feel like, are a really big, yeah, way that, you know, things could be disastrous, right? We've all probably had our information hacked, you know, from Google or some of these big tech companies; they have hacks against them, and all of a sudden all your passwords are exposed. Yeah. Or all your medical information is exposed. So I think that, you know, these companies need to do their due diligence on the security side as well. And I realize that hackers are getting smarter and smarter. Yeah. So it's probably an evolving area, trying to keep your data secure. Right. Yeah. I don't think there's, like, a great answer; obviously every day hackers are trying to hack more, and then the security teams are trying to push back, right? Yeah. But from an architecture perspective, I don't think we have a solution yet other than just the normal multi-factor authentication and all that. But encryption too. Hey, how do we, like, maybe structurally set up a database where you're not having social security numbers in the same place as all the other information, or having those extra locked down, or something like that. But again, hackers are gonna hack and try to get in. Yeah. Yeah. Or even system downtime, like when one of those three big clouds goes down. Yeah. I mean, have a proper disaster recovery, yeah, structure. Yeah. So your business doesn't shut down when the cloud shuts down; like, you're still making decisions, and you're having to use maybe out-of-date data, or if your disaster recovery system is up, then you can get data. But of course you believe that it's replicated to the most, you know, accurate level that it can be, but is it? You don't know. Yeah. Unless you verify it constantly to ensure that it is. Okay. Yeah, and especially with, like, transactional data, you're getting transactions every second, every millisecond. Yeah. There's no way to keep up with that information.
Website tracking too. Just, you know, when companies keep track of what you're clicking on, what you're visiting, how much time you're spending on their website, on different pages. That's millions of rows of data. Exactly. Well, this has been a really fun conversation, actually, talking about disastrous data decisions. There's a lot of good tea out there on some of these companies, so we'll include all those articles so that you can look through them at your discretion. And until next time, just keep calculating.