CareTalk: Healthcare. Unfiltered.
CareTalk: Healthcare. Unfiltered. is a weekly podcast that provides an incisive, no B.S. view of the US healthcare industry. Join co-hosts John Driscoll (President U.S. Healthcare and EVP, Walgreens Boots Alliance) and David Williams (President, Health Business Group) as they debate the latest in US healthcare news, business and policy. Visit us at www.CareTalkPodcast.com
Modernizing Healthcare Data Security w/ Aimee Cardwell
If you don’t know where the patient’s data is at every moment, you really can’t protect it. That’s the reality many healthcare organizations are facing. Regulations can help, but legacy, siloed systems keep patients exposed.
In this episode of the HealthBiz Podcast, David Williams is joined by Aimee Cardwell, CISO-in-residence at Transcend. Aimee breaks down why compliance doesn’t equal security, how legacy architectures and vendor ecosystems create hidden vulnerabilities, and what modern, identity-centric, AI-enabled security should look like.
🎙️⚕️ABOUT AIMEE CARDWELL
Aimee is a dynamic leader with over 25 years of experience spanning technology, cybersecurity, and product strategy across healthcare, financial services, and e-commerce giants including UnitedHealth Group, American Express, eBay, and Expedia. As EVP & CISO at UnitedHealth, she orchestrated a reinvention of the organization’s cybersecurity program, systemically reducing cyber risk while enabling rapid business expansion.
Her visionary leadership has earned numerous accolades, including being named one of TAG Cyber’s “50 to Watch in 2023” and receiving the 2023 World 50 Impact Award for Courage.
🎙️⚕️ABOUT HEALTH BIZ PODCAST
HealthBiz is a CareTalk podcast that delivers in-depth interviews on healthcare business, technology, and policy with entrepreneurs and CEOs. Host David E. Williams — president of the healthcare strategy consulting boutique Health Business Group — is also a board member, investor in private healthcare companies, and author of the Health Business Blog. Known for his strategic insights and sharp humor, David offers a refreshing break from the usual healthcare industry BS.
GET IN TOUCH
Follow CareTalk on LinkedIn
Become a CareTalk sponsor
Guest appearance requests
Visit us on the web
Subscribe to the CareTalk Newsletter
⚙️CareTalk: Healthcare. Unfiltered. is produced by Grippi Media Digital Marketing Consulting.
If you don't know where the patient's data is at every moment, you really can't protect it. That's the reality many healthcare organizations are facing. Regulations can help, but legacy, siloed systems keep patients exposed. Welcome to CareTalk, America's home for incisive debate about healthcare, business, and policy. I'm David Williams, president of Health Business Group. My guest today is Aimee Cardwell. She's the former Chief Information Security Officer at UnitedHealth and Optum, and she's now CISO-in-residence at Transcend. I'm looking forward to speaking with Aimee about what risks she sees lurking, the difference between compliance and security, and what modern architectures and workflows should look like. Aimee, welcome to CareTalk.
Aimee: It's such a pleasure to be here. Thanks for having me.
David: You know, there are some new mandates coming out on healthcare cybersecurity. Will those new mandates be enough to fix it?
Aimee: Mandates set important baselines, and I think we all agree that we could have a more consistent set of mandates across the United States, where there's a bit of a state-by-state patchwork at the moment. But mandates are inherently reactive. They codify what we knew yesterday; they don't do a good job of helping us with what's threatening us tomorrow. What I think is missing is the underlying visibility and control that I wish companies had more of. If you don't know where every piece of PHI lives and who touches it, then no amount of regulation will protect it. And mandates can't force organizations to retire the decades-old legacy systems that were never designed with modern threat models in mind.
David: We talk about legacy systems, and sometimes you hear the phrase "siloed legacy systems." Paint a picture of what that actually looks like for a large payer or provider. You said decades; that already gives me a sense of it.
Aimee: It should give you a little bit of the willies. I want you to think about, like you said, a large healthcare provider. We already know there's a core EHR, where the electronic health records are stored. There's a billing system, there are pharmacy systems, there's claims processing, there are places where labs go, there are places where X-rays go. So even with the cleanest possible single system, there are multiple places where a patient's records need to be. But, and this is where I lived for a little while, I want you to think about what happens every time you acquire a new company, whether that company does billing for suppliers, or does podiatry in some new area, no matter what the new acquisition is. Now you've got a second EHR system, and a second billing system, and a second everything. At UnitedHealth Group, when I was the CISO, we were acquiring pretty much two companies per month, every month. So imagine your job as a CISO. We used to think, well, my perimeter is strong and therefore we're good. But as you keep tossing in all of these new companies, the number of systems multiplies. All of these systems speak different languages. They have different security models. They don't have a complete audit trail. Frequently, data gets copied back and forth. And the most interesting thing for me is that the pressure is always to maintain business continuity, so these systems very rarely get consolidated quickly. That would be the great model, right? Get a new acquisition, consolidate; get a new acquisition, consolidate. But they don't want to disrupt patient service, naturally, so it's very rarely done quickly. And that means we end up with this big patchwork of systems that has grown organically through M&A. So when we go back to "mandates are gonna help," sure. But you have to really understand the deep complexity of the problem.
David: Alright, so we started off this episode saying that if you don't know where every patient's data is at every point in time, you're in trouble. I assume if you go in and just ask, "Hey, can you do that?" the answer is generally going to be no. So how do you start? What's the first question you would ask to assess data visibility? Because I assume it's not all-or-nothing. How does that conversation usually go?
Aimee: Yeah, and I do ask the question, and I know it feels a little bit obvious. But what you see start to happen is they go, well, we can check the EHR logs, and then we'll know who touched that data. And then they start listing out all of the systems: we can check this system, we can check that system. And already I'm watching the wheels turn. What many companies try to do is create a central data warehouse where they'll store that data. But if you've been in the industry as long as I have, you'll know what often happens. They say, I've got an idea: let's build one data storage place, and then get rid of all the other ones. And you start off really well. You move a bunch of them into the data warehouse, and most of the time you can decommission a few of them. But after a year or two the drive to do that fades, and now you actually end up with more data systems than you started with, because the effort of decommissioning the old ones falls off a cliff somewhere. So for me, figuring out how to get a handle on that is the most important thing. I do like the idea of creating centralized data models, or data warehouses, but if you don't do the decommissioning, like I said, you're making the problem worse. So one of the things I try to do is start from the bottom and say, what are the things that are being used the least? How do we start to clean those up? Then we take the next chunk of ten, and the next chunk of ten. If we can start at the bottom and generate a little momentum, and we reward the teams and the developers for decommissioning a system, then other people try to climb on board as well. I'll tell you, though, the real problem is the data you don't know about.
I once worked with a client that had 14 years of patient data stored in the invoices folder of the finance team. And you're going to say, what was the patient data doing in the invoices folder? But it makes total sense when you ask, because they say, well, we bill our medical customers, and we have to say, Aimee has this condition and this is what we did for her, and David has this condition and this is what we did for him. So they had essentially all of the PII, in condensed format, on every invoice, so that they could demonstrate what work they'd done and why they should get paid. None of that ever got cleaned out. So for me, the problem is less the data whose location you know, and even more the data whose location you don't.
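Aimee's "start from the bottom" triage can be sketched in a few lines. This is purely illustrative (the field names and batch size are made up, not from any real inventory tool): rank data stores by how lightly they are used, then queue them for decommission review in small batches.

```python
def decommission_batches(systems, batch_size=10):
    """systems: list of (name, monthly_queries) pairs.

    Returns the least-used systems first, grouped into batches,
    so teams can earn quick decommissioning wins early.
    """
    ranked = sorted(systems, key=lambda s: s[1])  # least-used first
    return [ranked[i:i + batch_size]
            for i in range(0, len(ranked), batch_size)]
```

The point is the ordering, not the code: quiet, low-risk systems surface first, which is where the early momentum she describes comes from.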
David: Got it. Alright, it's sounding complicated. You were making it sound a little easier, in the sense that, like any big, overwhelming thing you're dealing with, you could break it into more manageable tasks. In your example, start at the bottom, pick off some of these things, and decommission the ones that should have been decommissioned. But now I'm hearing about risk coming in from a few different directions. So I guess it's not as easy as it sounds.
Aimee: Not as easy as it sounds, and you don't know what you don't know. That's the part that always bites you.
David: So where do you find the risk most often originating? I don't even know if this question makes complete sense, because you already mentioned the acquisitions coming in, and the data that you have but don't know where it is, which a hacker might find before you do. But are there things like endpoints people should be concerned about? Do we talk about identity and privilege management? There are third parties, APIs, misconfigurations. Is it even logical to consider what the sources of risk are, and how do you prioritize them?
Aimee: Yeah. One of the things that always keeps me up at night is vendor risk, because we know that we put all this risk language in the contract and obligate our vendors to keep certain standards, and we look at their SOC 2 Type 2 or their HITRUST ratings. But we also know that that's a point in time. It frequently happens once a year, and all of the questionnaires that we send out, all of the information we ask for, all of the documents they give us, are only as trustworthy as the people who filled them out. I think you and I both know that if you really want to, you can snow an auditor. So there's really no way to guarantee your vendor is going to uphold the same security principles as you do. And if that's what it's like inside a major healthcare system, what's it like inside a major vendor that services 50 healthcare systems? Is it any cleaner? How many acquisitions have they done? It's probably not any cleaner. And what happens is the vendor has an incident, and then they have to contact all of your patients. I'm sure you've been a target, as I have, where you open up an email or a letter and learn that your information has been compromised, and you've never even heard of the company that compromised it. So that's part of the problem as well: you still get blamed as a company if your vendor gets compromised, and there's really very little control that you have. If a company really wanted to clean up its internal systems, it could. If it really wanted to clean up a vendor's systems, what are you gonna do about that?
David: Yeah, you'd have to, I guess, buy them, and then have the same problem you had with all these other acquisitions.
Aimee: Let's add some more data sources to our data sources.
David: Yeah. Alright, so how about not just adding data sources, but adding things on top of, say, the electronic medical record? You start with that, but it doesn't do everything you want it to do. So there will be add-ons, there will be file sharing, and then almost inevitably people are going to create some shadow IT, because the big EHR is not doing what it needs to do. How does that complicate things?
Aimee: Well, you can't blame a physician, or someone who's running a healthcare practice or an office, for wanting to use a new scheduling system or a calendaring system. But of course, when they do that, they have now exposed patient data to another interface that you may or may not know about. And you won't think they're bad people if they're trying to find new ways to bill, or better economies of scale, or trying to understand their patient community better. To do that, they might download a spreadsheet to Excel, or now to an AI, so they can do some analysis. It all sounds perfectly reasonable and normal when you put it that way, but that is exactly what you're talking about. It's how data gets out of the ostensibly controlled systems we think it's in and onto somebody's desktop, somebody's laptop, some third-party AI, some scheduling tool. And that's where more of the things that bite you from the side come from. There have been times as a CISO when I've seen patient data come back to us, and it took us a really long time to figure out where that patient data came from. You essentially have to look at that set of patients and say, what do these patients have in common? And it could be, oh, this dentist over here has that set of patients, and therefore it must have come from that facility. That's really tough forensic work to do. So I can't blame the poor dentist's office for trying to do the right thing, and yet they created another data source that then causes damage to the larger company.
David: So let's talk about AI for a second. Are these LLMs, like ChatGPT or Claude, fundamentally different from one of the examples you've given of copying data off to some other shadow system? Is it different to say, hey, I can't figure out this patient, I'm just going to put this in here and maybe it'll come up with a diagnosis or a suggestion for me? Is that different from what's been done to date?
Aimee: I think it's fundamentally different, and this might be a little bit of a controversial viewpoint, so take it with a grain of salt. But in my opinion it is actually a little bit harder to get data, patient data in particular, out of an AI than it is to get it out of a database. If I've downloaded your data into an Excel spreadsheet and that spreadsheet gets out into the world, it's much easier to go, okay, I have this format of data, I know what to do with it. If I've uploaded that data into an AI and said, make an analysis, tell me what's interesting about this data, it's very difficult for me to even know the data was in there. And if you start asking the AI, can you show me some patient data, it's not that the AI is going to return exactly what you put in there. It might be some patient data, but half of it could be hallucinated; some of it could be fake. We don't really know. It's almost impossible to get it back out. So in a way that's better. If you're going to ask me, should I put this in a file share or should I put it in an AI, I'll say, put it in the AI all day long, because getting it back out of the AI is going to be harder. That said, I can delete it from the file share. I can't delete it from the AI. Once it's in there, it's in there, and there's no real way to get it not in there.
David: Great. Well, that sounds like a good follow-up podcast for next year, once we see some of these things and what happens.
Aimee: You can have me and somebody else on, and we can debate the pros and cons of getting data out of an AI. Exactly.
David: No, that'll be fun. Okay, I don't want to take us off on too much of a tangent, but I think that's a good one, because it's really happening today. It's the most tempting sort of thing for people to try, and it comes back with a cool result. But then, what happened to your input? Hard to say.
Aimee: Totally hard to say.
David: So there's a thing called the Healthcare Cybersecurity Act. What is it? How helpful is it?
Aimee: Well, it was passed recently as part of the 2024 Consolidated Appropriations Act, and it requires HHS, Health and Human Services, to establish cybersecurity performance goals for the healthcare sector. I think it's a step in the right direction. As we said at the beginning of this podcast, the sector needs specific guidance and specific rules, and I think it should be done at the federal level, not the state level, because the patchwork is just really hard to keep up with. But the challenge is implementation. While I think the performance goals are helpful, they need to be paired with resources and realistic timelines. You know as well as I do that most healthcare organizations, especially rural hospitals and small practices, are already struggling to meet existing requirements. Sometimes the IT guy is also the janitor and the receptionist. Without funding or technical assistance, I think it's going to be really tough for a lot of practices to comply, and it ends up being an additional burden without meaningfully improving our security posture.
David: So after 9/11, there was this concept of security theater, where you'd see all these guys shaking down grandma, and "take the shoe off" because this one guy, Richard Reid, had the shoe bomb. I'm thinking there are all these things to make it look good: a lot of spending, a lot of notification. But if I'm an actual bad guy, I'm probably not going to do the thing the previous guys did. Is there something analogous in security or privacy, a sort of privacy theater, where you can check the box and have all this audit, and that's great for the auditor, but the actual bad guy doesn't care what's on the audit and just goes around it some other way?
Aimee: First of all, I'm impressed that you were able to bring up the shoe bomber's name. Nice memory. But I think you're absolutely right. Let's use the vendor example from before. We send out long questionnaires to vendors, and they give us big answers back. We ask for their SOC 2 Type 2, and they give us their SOC 2 Type 2 back. These are all checkboxes, essentially: yes, we do this; yes, we do this; yes, we do this. Does that mean all vendors are safe? No. And I think it's also very clear to anyone in the privacy and governance space that there are auditors who will really dig in and tell you the places where you need to improve, and there are auditors that you call when you want to get your SOC 2 Type 2 approved. Both of them have a place. The one who tells you what you need to do differently, you generally don't publish that report. The one who does the great checkbox exercise and the SOC 2 Type 2, that's the one you publish. So I think you need both. Ideally we wouldn't have the one that just does the checkboxes, but just because a company hired an auditor that's happy to pass the audit with no findings doesn't mean it couldn't also have an auditor doing the deeper analysis; it just doesn't have to publish those results. So I completely agree with you. Leadership sees the compliance check marks and the SOC 2 Type 2 and says, great, our company's protected, we've done all this work. And that's just not really true. One of them is governance and one of them is security, and they're not exactly the same thing.
David: So I've seen this phrase, "security that disappears into workflow." What does that look like?
Aimee: I like to call it having the secure path be the path of least resistance. I want the user experience to be seamless, and I want the security architecture to operate underneath. Let's use multifactor authentication as an example. If you've got a passkey set up really well on your Mac, it does the Face ID and you're logged in, and you didn't have to do anything or remember a password. That's a beautiful example of security that disappears into workflow. But take the example of two medical offices merging. They use different EHRs, but you're a patient at both of them. So now your doctor, in order to get a full view of your medical history, has to log into system A to see what happened over there and log into system B to see what happened over there. That is not security disappearing into the workflow. That is a doctor with two separate logins to two separate systems who has to mentally pull that data together. That's just not as safe, and many clinicians are going to try to work around it. And the workaround is what causes the shadow IT, which then causes the data to be in weird places. Does that make sense?
David: It does. And I was thinking, tell me if this analogy works. In physical safety, you want to set things up so that the simplest thing to do, whatever you're going to do, is the safe one. You just do it; you don't have to remind somebody. If you set it up otherwise, they're going to do something dangerous and you're going to have an accident. But you don't have to constantly train them, because the safe thing is the easiest thing to do anyway.
Aimee: That's a great example. Think about seat belts: we had to learn to put them on. I'm old enough to remember a time before that was something we all did. When I was a kid, there was a huge campaign, "Click It or Ticket," and everybody started putting their seat belts on. But now we have airbags and anti-lock brakes. You don't have to do anything; you crash, and they do the thing. And that's what we really want: protection and safety and authentication that happen without me actually having to think about doing it.
David: So let's talk about what a modern healthcare data and security architecture looks like. You talked about the legacy, the silos, the decades-old systems. But if we're starting now, or if we're just going to catch up, what should it look like?
Aimee: Oh, let's make a healthcare startup. That sounds fun. The first thing I want is unified data visibility. I want a single source of truth that shows where all of the sensitive data exists, how it flows, and who accesses it. And when it goes outside of my org, I want to tokenize it so that I can see what happened to it after it left my organization as well. Then, there's a lot of talk about Zero Trust, but I want identity-centric security. The same security principles that have always applied continue to apply: only the people who are authorized to see the data should be able to see it. The secretary can't see the boss's data. One doctor can't see another doctor's data, another patient's data, unless the patient has authorized that. So, identity-centric security with Zero Trust principles. And lastly, and I think this is a place where we are only now getting the tool sets that make it possible: real-time monitoring with automated response. Not just logging after the fact for forensics, but actual real-time monitoring. Then you can use the good AIs for a great methodology, which is anomaly detection. If you already know what patient-record access generally looks like, and then you get one that looks different, the AI raises its hand and says, hey, something weird is happening here. Enabling that requires really good real-time monitoring of data. But you also need data encryption throughout the lifecycle, same principles as always, and API security. As we look into more and more interoperability, making sure your data is encrypted both at rest and in transit is really important.
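The anomaly detection Aimee describes can be as simple as comparing today's access volume to a user's own baseline. A minimal sketch, with an illustrative threshold and made-up numbers (production systems use much richer behavioral models):

```python
from statistics import mean, stdev

def access_looks_anomalous(daily_counts, today, z_threshold=3.0):
    """Flag when today's record-access count deviates sharply
    from this user's historical baseline (a simple z-score)."""
    mu, sigma = mean(daily_counts), stdev(daily_counts)
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > z_threshold

# A clinician who normally opens ~20 charts a day suddenly opens 400.
baseline = [18, 22, 19, 21, 20, 23, 17]
```

A real deployment would segment baselines by role and time of day, but the shape of the check, learn normal and flag deviation, is the same.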
David: Great. And well enough encrypted that you can't break it so easily.
Aimee: And that's gonna get harder and harder with quantum computing coming, of course. But we haven't crossed that bridge yet.
David: Fair enough. Good. I just want to dig in a little on what you said about AI. First we were talking about AI and the danger of someone copying something into an LLM, and now we're talking about AI for anomaly detection. Compared with, let's say, having to maintain a profile of different viruses and specific things you're looking for, you could just say, that looks different from everything else, without having to specify upfront what it is, and then you can analyze it. Is that right?
Aimee: Yep. There are risks and opportunities, as with any technology, and AI is a technology. I think organizations are rushing to implement AI tools without necessarily fully understanding the data governance implications, and AI-powered attacks are becoming more sophisticated. Those are the risks. But on the opportunity side, AI can dramatically improve our ability to discover and classify sensitive data. It can detect anomalous patterns, like we just talked about, and it can start to help us respond to threats in real time, which is a thing we struggle with a little right now. It can help us actually achieve the visibility and control that we're lacking, but only if we get the governance model right first. I've even started to see tools now that I'm super impressed by. Just like whatever you use to patrol all of your endpoints, there are now separate endpoint control mechanisms where, if a user logs into any AI and enters information the system thinks is sensitive or company-confidential, it will automatically redact that from the AI prompt. So it's, "Let me just give you these Social Security numbers of these patients, and you can tell me what states they were born in." Nope, sorry. It auto-redacts that information, which is a great security-by-design methodology, and I've started to see those kinds of tools come up.
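The auto-redaction Aimee describes can be approximated with a pre-prompt filter. A toy sketch; real endpoint tools use far richer classifiers than a single regex, and the pattern below only catches SSN-shaped strings:

```python
import re

# Hypothetical prompt filter: scrub SSN-shaped tokens before text
# leaves the endpoint for an external AI service.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact_prompt(prompt: str) -> str:
    """Replace anything shaped like an SSN with a redaction marker."""
    return SSN_PATTERN.sub("[REDACTED-SSN]", prompt)
```

The same idea extends to member IDs, dates of birth, and so on; the key design choice is that the filter sits in the workflow, so the user never has to remember to redact anything.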
David: Yeah, that sounds quite cutting-edge and very much needed. So there is a lot of excitement to embrace AI, as you said. And one thing I've seen in larger organizations, including healthcare, where there's a lot of strain on the budget, is they'll say, we need to free up funds to pay for AI, so let's look at IT, and cybersecurity is often within IT. Just in the past couple of months I've seen a number of organizations, actually doing decently well financially, cut cybersecurity staff, supposedly in order to invest in AI. Are you seeing that? Does it have an impact?
Aimee: So I haven't seen it, but since I've been in charge of a pretty massive cyber budget, I'll give you a couple of observations. First, cutting staff isn't necessarily a signal that they're less invested. There is a thing that happens when you cut a certain percentage of staff: you force the team to get more efficient. As long as you just keep growing and growing and there's never any downward pressure, you don't actually bother to get more efficient. So as a leader, I have noticed that sometimes cutting actually makes you better, not worse. That may or may not be the case in these situations. I would also say, and this is the opposite side: you're going to pay for a cybersecurity incident. You can either pay a smaller amount upfront, or a larger amount on the backend. But if the CISO is doing her job, she is elevating the risk, financial risk, brand risk, and time out of commission, up to the executive team, and they're using that information to make business decisions. And there are some companies, depending on what their data looks like, for which it's actually smarter to spend less money making the product harder to use, and just let it go and deal with the risk after it happens. I'm not going to bring up names, but you can name a lot of big companies that have had incidents in, let's say, the past two years. Go back and look at the stocks of all of those companies before the incident and about six to nine months after, and I think in every case you'll discover that the stock is still considerably up. It went whoop, and then it went right back up again. So we have to remember that these are business risks; we can't only see them as cybersecurity risks. The cybersecurity risk is elevated in order to allow the business to make its own decisions about where it wants to spend its money.
David: So if I were running a large healthcare system and looking to invest in cybersecurity, to what extent should those capabilities be built inside versus outsourced? What's the right balance?
Aimee:I love this question. Um, obviously it's not binary. I think the best approach is a strategic hybrid approach. There are gonna be core capabilities that you're probably wanna keep gonna wanna keep in-house, like security, architecture, risk management. And sort of how you want to deal with incident response. It doesn't mean you have to do incident response in-house, but you have to decide what your incident response program is gonna look like in-house because for those things, you need people who really understand your environment, your clinical workflows, and your business' regulatory requirements. But you are also not always the best person to do all of the services. And so just like we don't rewrite, uh, the Microsoft Office suite for ourselves, every time we wanna go to a new company, because it's not our core competency, we hire third party companies to give us the services to those tools. So I would say 24 7 security operations for most companies don't need to be in-house. I would say threat intelligence. I would say deep advanced forensics. I mean, as a fractional ciso, I've got lots of deep forensics companies that I'll call because nobody does that really well in house. Uh, so it kind of depends on the size of your organization, but I would say stick to your knitting and the places where you know, you only you have the information that's gonna make you most successful at that task. Great. Call that an internal task, but don't try to reinvent the wheel for a third party service that does it better. One more thing I'll add to this because I see this with a lot of younger CISOs. All of the costs of all of those vendors have a huge amount of negotiation possibility. And so I think I've seen a, well, I know I've seen a lot of younger CISOs get a cost from a third party and be like a hundred thousand dollars or $500,000. There's no way we can do that. 
The reason they give you that big number is that about 10% of people just say, "Okay, great, thanks," and pay it. So negotiate, negotiate, negotiate. Sometimes you can get 50% off that amount, sometimes more. Talk to your peers and find out what other people are paying, because you don't want to skip a third-party service just because you think it will be too expensive. Ideally it's the best of both worlds: you get a cheaper service than you could provide in-house, because they have economies of scale, and a better service than you could provide in-house, because they're the experts at providing it.
David: Great. So let's say someone listening to this podcast is a new CISO. She just got hired, she goes in and says, "You know what, Aimee's right. I don't really have visibility into the PHI. Even the data I know must be there, I can't see, never mind the data I don't even know exists." How should she go about getting visibility quickly?
Aimee: Yeah. There are lots of great data discovery tools, so use one of them. It doesn't really matter which one; you may already have one in-house by virtue of other software applications you run. Don't wait for a perfect inventory; get started as soon as you have something to work with. As you heard me say at the top of the show, try to pick off the noise-level items first, the easy stuff at the bottom, so you don't start with big negotiations. Then talk about those wins and give people credit for them. And while you're landing those quick wins, you can also be implementing multifactor auth for high-privilege accounts, disabling unnecessary data exports wherever possible, and putting in some basic API monitoring. So the question is: how can you shrink the amount of data that's out there, and at the same time put closer protections around the data that remains?
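One of the quick wins Aimee mentions, enforcing multifactor auth on high-privilege accounts, can be sketched in a few lines. This is a hypothetical illustration, not anything from the episode: the account fields, role names, and inventory shape are all assumptions.

```python
# Hypothetical account inventory; in practice this would come from an
# identity provider or directory export.
accounts = [
    {"user": "alice", "role": "admin", "mfa_enabled": True},
    {"user": "bob", "role": "admin", "mfa_enabled": False},
    {"user": "carol", "role": "analyst", "mfa_enabled": False},
]

# Roles treated as high-privilege for this sketch (an assumption).
HIGH_PRIVILEGE_ROLES = {"admin", "dba", "security"}

def accounts_missing_mfa(accounts):
    """Return high-privilege accounts that still lack multifactor auth."""
    return [
        a["user"]
        for a in accounts
        if a["role"] in HIGH_PRIVILEGE_ROLES and not a["mfa_enabled"]
    ]

print(accounts_missing_mfa(accounts))  # ['bob']
```

A report like this gives the new CISO a concrete, finite remediation list rather than an open-ended project, which fits the "easy stuff first" approach.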
David: What's the right timeframe? Let's say you get those quick wins; what's the right timeframe to get from a critical problem to saying, "Okay, I'm at a good level"? Is that six months, a year, two years? What's the journey like?
Aimee: It's kind of an impossible question, because it depends on the size of the company and the size of the systems. But I'd say after you do your comprehensive data mapping, you can start to create some baseline metrics. For a company the size of UnitedHealthcare, it's like painting the Golden Gate Bridge: it's an ongoing task, and you're never finished. For a smaller company, I think once you start to migrate or retire the highest-risk legacy systems, you can establish a continuous compliance and improvement cycle that keeps providing benefits for the company. We run continuous scans to find where PII shows up in places we don't expect it, and I think that's really a best practice, because you don't want to find out that somebody made that spreadsheet, or dumped that data into an AI, months after it happens; you want to find out right away. Even if you say, "Well, we do those scans quarterly," is quarterly really enough? For me, the biggest thing is continuous progress, not hurry up and do it, then let it fall off, then hurry up and do it, then let it fall off. This is just what we do. Getting that flywheel moving and making continuous progress is the most important thing.
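The continuous PII scan Aimee describes can be reduced to a toy sketch: look for data that matches a sensitive pattern in places it shouldn't be. Real data-discovery tools use far richer detectors, classifiers, and context checks; the regex and sample text below are purely illustrative assumptions.

```python
import re

# Toy detector for US Social Security number shapes (###-##-####).
# A naive pattern like this has a high false-positive rate; real tools
# validate context, checksums, and data lineage.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def find_possible_ssns(text):
    """Return SSN-shaped strings found in the text."""
    return SSN_PATTERN.findall(text)

# Hypothetical snippet from an unexpected export, e.g. a spreadsheet dump.
sample = "Exported rows: name=Jane Doe ssn=123-45-6789 dob=1980-01-01"
print(find_possible_ssns(sample))  # ['123-45-6789']
```

Run on a schedule against file shares, logs, and outbound AI prompts, even a crude scan like this surfaces the "somebody made that spreadsheet" problem days rather than months after it happens.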
David: You brought us back to the big picture a little earlier, talking about how this is ultimately a business decision, and you talked about communications all the way up to the board. If someone's a board member at a large, complex organization, maybe not as large as United but still pretty big, what should they be tracking? The same way a board member might look at the financials in a certain way, or look at the legal risks, how should they be looking at these kinds of issues? What should they track?
Aimee: It would be great if we could look at data visibility coverage, but the problem is you can't see what you can't see. If you tell me that 100% of your data is visible and covered, I'm going to be really skeptical; frankly, I don't believe you. The one I really love is how long it takes to remediate critical vulnerabilities. I know that feels disconnected from what we're talking about here, but a team that can hit their SLAs for remediating critical vulnerabilities has a process and is running that process, and that gives me more confidence that there's a process around other things as well. So that's one I always track really closely. Another is the time to detect and respond to incidents: how long does it take to find something, and how long does it take to get to it once we've found it? What's the mean time to recovery? Do we actually perform business continuity and disaster recovery tests, not just tabletops: do you switch over from AWS East to another region and keep the business moving without a break? I want to see the actual times and trends around that. And lastly, I want to know whether our security debt is improving or degrading over time. I'm looking at a company right now that wants to put multifactor auth everywhere. I've had a series of meetings with them over the past year, and at the very beginning of the year it was, "Here's everything we're doing to put multifactor auth everywhere." Two months later, that wasn't in the presentation anymore, and I asked, "What happened to that?" They said, "Well, some things haven't gone as well as we wanted." And I said, "That doesn't mean you get to stop reporting on it." Those metrics tell a story about whether security teams are doing what they should be doing and how process-oriented they are, and that's what I care about most.
I can't know whether every team member is doing what she needs to be doing, but I can know how good the processes in place are, and if they look pretty good, my confidence level goes up.
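The board metric Aimee highlights, mean time to remediate critical vulnerabilities against an SLA, is straightforward to compute from scan records. The record shape, dates, and 10-day SLA below are assumptions for illustration only.

```python
from datetime import datetime

# Hypothetical vulnerability findings, as might be exported from a scanner.
findings = [
    {"opened": "2024-01-02", "closed": "2024-01-09", "severity": "critical"},
    {"opened": "2024-01-05", "closed": "2024-01-08", "severity": "critical"},
    {"opened": "2024-01-07", "closed": "2024-02-20", "severity": "low"},
]

def mean_days_to_remediate(findings, severity="critical"):
    """Average days from open to close for findings of the given severity."""
    days = [
        (datetime.fromisoformat(f["closed"])
         - datetime.fromisoformat(f["opened"])).days
        for f in findings
        if f["severity"] == severity
    ]
    return sum(days) / len(days)

SLA_DAYS = 10  # assumed SLA for the sketch
mttr = mean_days_to_remediate(findings)
print(mttr, "within SLA:", mttr <= SLA_DAYS)  # 5.0 within SLA: True
```

Because the inputs come straight from scan data, the trend line this produces is, as David notes next, hard to fake.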
David: Let's talk for another second about the first one you mentioned, the patch cadence for critical vulnerabilities in particular. One of the things I like about that is that you can measure it with technical means. Going back to what you said about how frequently you're scanning, quarterly is not enough; but if you're scanning, let's say, almost continuously, you can actually measure how fast remediation is happening. There's no room to blow smoke and say, "Oh, we didn't count this or that." You can see it: it's there, and then it's gone, because somebody patched it.
Aimee: Yep, and it's really tough to fake. So, back to the checkbox conversation we had earlier: nope, this is the data, here it is, so how did you do? I love that measurement too.
David: So, last question for you. I don't know if you can answer this, but do you have an example of what success looks like? Anything you can point to and say, "That was really good"?
Aimee: Well, I know what success would look like, and I know we're all trying to work toward it. You asked at the top of the hour what the fundamental question is, and we said: do you know where all your data is and who's accessing it? If I ask that and don't get a 15-second pause before they start diagnosing each system, that's something like what success looks like. Another example: when I was responsible for managing all of those acquisitions, I set up an acquisition program, and we did this in a really interesting way. We asked, what are the 25 most important controls we want any company to have? Then we narrowed it to the 10 most important: no RDP, multifactor auth, the most basic stuff. We gave that list of 10 to the acquisition target before close. Instead of saying, "Let me go review what your company is like inside" (I'm not going to keep the business from buying the company anyway, so that's kind of pointless), I said, "These are the things I want you to demonstrate to me before close, and if you can't do that before close, we'll delay close until you can do just these 10 things." Because that's the moment when the acquisition target is most interested in being bought, they'll work really hard to get those things done. So at least the moment they become part of your company, they're this tall. Now you need to get them up much taller, but you've already started a trend, because it happened before they got acquired, and then you just grab onto that trend and keep pulling to get them there. That's another symbol of what good looks like for me: let's stop telling the business who they can and can't acquire, and start thinking about how we're going to make these companies as safe and robust as they can possibly be once they get in.
But then it's also, like the program I talked about that can redact information out of AI prompts: what other tools are you using to try to keep the unknown unknowns out of the world?
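Aimee's pre-close gate, close only when every must-have control is attested, is essentially a checklist evaluation. A minimal sketch, with entirely hypothetical control names (the episode names only "no RDP" and multifactor auth among the 10):

```python
# Assumed must-have controls for the sketch; Aimee's real list had 10.
MUST_HAVE_CONTROLS = [
    "no_external_rdp",
    "mfa_everywhere",
    "edr_deployed",
]

def ready_to_close(attestations):
    """Given a dict of control -> bool, return (ok, missing controls)."""
    missing = [c for c in MUST_HAVE_CONTROLS if not attestations.get(c)]
    return (not missing, missing)

# An acquisition target that has closed RDP but not finished MFA rollout:
print(ready_to_close({"no_external_rdp": True, "mfa_everywhere": False}))
# (False, ['mfa_everywhere', 'edr_deployed'])
```

The value of the gate is less in the code than in the incentive: the list is finite and public, and the target works hardest on it exactly when it most wants the deal to close.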
David: Great. Well, that's it for another episode of CareTalk. My guest today has been Aimee Cardwell, CISO-in-residence at Transcend. I'm David Williams, president of Health Business Group. If you like what you heard, please subscribe on your favorite podcast platform. Thank you, Aimee.
Aimee: Thank you. It was a pleasure.