Decipher Security Podcast

Idan Plotnik

April 05, 2021 Decipher Episode 74

Idan Plotnik, CEO of Apiiro, joins Dennis Fisher to talk about taking a risk-based approach to code and securing the software development lifecycle.

Speaker 1:

[inaudible]

Speaker 2:

Welcome back everyone to the Decipher podcast. My guest today is Idan Plotnik from Apiiro, joining me from Tel Aviv, Israel, which I'm very jealous about. You can't see it, but he's got a beautiful view of the entire city behind him. I'm kind of glad we're not recording video, because everybody would be upset about this. How are you, Idan? I'm great, I'm great. Thank you for having me. Oh yeah, it's my pleasure. I'm glad we got to do this. So there's a whole bunch of things I wanted to ask you, because I think the idea you have with Apiiro is super interesting, and it's one of those areas of the security industry that hasn't really gotten a lot of attention, maybe because it's not super sexy 0-day research type stuff. But I wanted to ask you first off where the idea for this technology came from. Was it a problem you had seen and thought, okay, I have an idea for solving this? How did it come about? So first I will say that we call it a code risk platform. It's more than just, as I define it, modeling developer behavior or things like that. It's looking at your entire software development lifecycle and providing inventory and risk visibility across all your applications, based on how developers behave and their security maturity, based on the business impact, and based on other things that we will talk about today. But going back to your original question: I've been in the industry for 19-plus years. My last startup was acquired by Microsoft, and I was a general manager for engineering at Microsoft. And I just felt the pain. On one side, you want to move as fast as you can and make sure the business is growing faster and faster.
You want to move from waterfall methodologies to agile development methodologies, but on the other hand, you don't want to release code with risks. You want to be able to assess the risk based on a multi-dimensional approach, and

Speaker 3:

for that you need manual processes. You need risk assessment questionnaires. You need to make sure the people who answer these questionnaires really understand, are really into the details, or else it's all based on self-attestation and all the work you are doing is for nothing. Because if you ask them, hey, do you have PII in this workflow, and they tell you yes but it's not true, or the other way around, it's a pain. To make a long story short, it's a challenge that I felt. And then I went to other large organizations and asked them if they have the same challenge when releasing code to production. And then we said, hey, this is going to be a huge problem. And I'm talking maybe not quite two years, more like 16 months back. And this is where we said, hey, this is a huge problem, because it's not just yet another solution that sells FUD, fear, uncertainty and doubt, and isn't saying, hey, I will attack you from the cloud, or I will attack you through identity. You have a mutual problem between the CEO, the CIO, the CISO, the application security practitioners, and the developers. You have a real problem that touches the whole chain of command, all of the managers.

Speaker 2:

So you mentioned that your startup had been acquired by Microsoft a few years ago, and you had been in startup land before that. What was the difference, when you got to Microsoft, in the way that code was handled? Microsoft obviously has a pretty mature SDLC, so they understand how code should be developed and assessed and, hopefully, scanned for vulnerabilities, and you're looking for all these known problems. How different is the way a large organization such as Microsoft handles it from a smaller company, an SMB, or even a startup?

Speaker 3:

It's a very good and complex question. Okay, so even in a mature secure development lifecycle process, you start from a threat model, then you go to a security design review, then you go through compliance reviews in some cases, then you go through a security code review, then you go through penetration testing, and then you go through vulnerability scanning throughout the CI/CD pipeline. And then you're overwhelmed. You're saying, what's going on here? I have one person responsible for security across 100 developers. That's the best case, by the way: the ratio is one to 100 in the best-case scenario. And then this guy says, hey, what's going on here? I went through all the process, I did all the phases, but now I have a thousand vulnerabilities, I have 2,000 tasks to do to remediate. What's first? And I'm telling you from experience, you need to decide whether you block the product from getting into your customers' hands, or you release with risks. Yeah. And this is the fundamental problem that Apiiro solves. We said, stop chasing. Basically Gartner said it too: stop chasing vulnerabilities throughout the development process. What we said to the industry is, hey, a vulnerability is too late in the process. You need to focus on risky code changes. Okay, what are the material changes you are introducing into your application? If you're changing the layout of your login page, who cares, versus you changed the logic of an API that is responsible for money transfers. That is a risky change. Now, I'm not talking about vulnerabilities or not. I'm talking about the fundamental problem of taking all these changes and passing them through the same vulnerability scanning pipeline. We are saying something else. We are saying, let's differentiate between changes. And we will not only differentiate between changes based on their technical aspects.
We will differentiate between changes across their attack surface and technical impact. What's the business impact of the change? What's the business impact of the application? What's the knowledge and the behavior of the developers who made these material changes? And only then will we decide which changes go through vulnerability scanning. You see? And then you say, okay, I have a thousand SQL injections, but only five of these SQL injections are relevant for an internet-facing API that is part of an end-user-facing application that serves up 70% of the revenue of my organization, and the change is in an internet-facing API that holds the SQL injection. Now go and remediate what really matters, what really matters to your business, and not what matters to your static analysis tools, which are generating so much noise without any context, without any deep understanding of who the developer that made this change is. So this goes back to your question. I can tell you, and it's not unique to Microsoft or any other place: they follow a process, and you must go through the process, and the same goes for the large banks and other large corporations. They need to go through all the gates of their secure SDLC or their application security program. And none of them are risk-related. What we're saying is: stop your application security programs as they are today. Stop them. Transform your application security program into an application risk program, an application risk management program. Speak the same language as your CEOs and CIOs, and even as a developer, even as an AppSec engineer, speak the same language. This is our premise.
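The contextual triage Idan describes here can be sketched in a few lines. Everything below is hypothetical and for illustration only: the field names, the revenue threshold, and the rule that only internet-facing, business-critical findings are urgent are invented, not Apiiro's actual logic.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    rule: str              # e.g. "sql-injection"
    api: str               # the API the finding lives in
    internet_facing: bool  # reachable from the internet?
    revenue_share: float   # fraction of revenue the app serves (0..1)

def prioritize(findings):
    """Keep only findings on internet-facing APIs of business-critical
    apps; everything else goes to a low-priority backlog."""
    critical, backlog = [], []
    for f in findings:
        if f.internet_facing and f.revenue_share >= 0.5:
            critical.append(f)
        else:
            backlog.append(f)
    return critical, backlog

findings = [
    Finding("sql-injection", "/transfer", True, 0.7),
    Finding("sql-injection", "/internal/report", False, 0.7),
    Finding("sql-injection", "/theme/preview", True, 0.01),
]
critical, backlog = prioritize(findings)
print(len(critical), len(backlog))  # 1 critical, 2 backlog
```

Three identical scanner findings, but only the one on a revenue-critical, internet-facing API surfaces as urgent; the other two go to the backlog instead of the top of the queue.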

Speaker 2:

Well, it makes a ton of sense to me as someone who has literally not written a line of code since eighth grade. But in your experience, let's take three different roles: how quickly do developers understand what you're trying to accomplish? How quickly do the AppSec engineers get it? And then how quickly do, say, the product managers, who have to sign off on, okay, we're going to push this out to the real world, get it? Are they comfortable with it?

Speaker 3:

It's a great question. And I will say: let's go to the basics. Okay, what are the basics? First, do not introduce anything to your developers if you don't understand what you have. How many assets do you have? How many applications do you have? How many APIs do you have? What's the business impact per application? Do not bother to introduce anything to your developers until you know. First, you as an AppSec engineer go through the basics, and that's the first thing we do with our customers: plug Apiiro into your source control manager via a read-only API, plug Apiiro into your ticketing systems, and Apiiro will automatically scan all the history and all the text in your ticketing systems and build that inventory automatically. The developers don't know we are there. Okay, they don't know that Apiiro is there. At this point the basic step is to scan, to learn automatically, all your code bases and all their history, and enrich this data from all the tickets. Your ticketing systems hold features, bugs, user stories, high-level designs, and things like that, and Apiiro enriches this data while scanning the code. And then you as an AppSec engineer know: I have 50,000 applications and only 10% of them are business critical, and I have 10,000 APIs that are internet-facing that expose PII. Now, when I as an AppSec engineer have a risk-based inventory, I can decide what to do. I have exposed secrets in my code? Great. Every time a developer introduces a secret into our code bases, automatically raise a flag in his or her pull request. Do not introduce new tools to developers; they always hate it when you introduce new tools.
So Apiiro integrates into their existing environment and raises a flag and says: you introduced a new exposed secret. Let's add as a reviewer someone who remediated this kind of risk a month ago, so he can help you. Or give you a recommendation and say: you're using HashiCorp Vault in this application, go to this line of code and see how, I don't know, Jane Doe added her secrets into HashiCorp Vault. And this is how you help the developer remediate the risk early in the development process. The same goes, by the way, and this is a very innovative approach that we introduced into the market, for before you even start coding. The product manager is writing a new user story in Jira, Apiiro analyzes the content and raises a flag to the AppSec guy and says: hey, this might be a risky user story, go and talk to the product manager and have a contextual security review or a high-level threat model. Build on that, and you don't need to rely on people to come and say, hey, can you help me with this user story, because it might be risky. You can't rely on self-attestation anymore. And if I can go back, with your permission, to your first question: what I learned at Microsoft is that we had risk assessment questionnaires of between 50 and 100 questions. What I found out, without offending anyone, is that developers were lying in these questionnaires, not intentionally, but because they didn't know the answer. And at the end, I found myself investing a lot of money and resources on the wrong changes, on the wrong applications, on the wrong risks. And it goes back to the same problem: you cannot rely on humans to tell you where you have risks. You need to do the heavy lifting for them and focus your resources, your AppSec engineers, on the right risks.
If it's in the design phase, great. If it's in the coding phase, you need to do it before the CI/CD pipeline, and as early as you can.
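The pull-request secret check described above can be illustrated with a toy scanner over a unified diff. Real secret scanners use entropy checks and many provider-specific patterns; this single regex and the sample diff are invented purely to show the shape of the idea.

```python
import re

# Flag an added line in a diff when it looks like a hard-coded credential.
SECRET_PATTERN = re.compile(
    r"(api[_-]?key|secret|password|token)\s*[:=]\s*['\"][^'\"]{8,}['\"]",
    re.IGNORECASE,
)

def flag_added_secrets(diff: str):
    """Return the added lines ('+' lines) of a unified diff that match
    the secret pattern; removed lines are ignored."""
    hits = []
    for line in diff.splitlines():
        if line.startswith("+") and SECRET_PATTERN.search(line):
            hits.append(line)
    return hits

diff = """\
+DB_PASSWORD = "hunter2hunter2"
+color = "blue"
-API_KEY = "old-value-removed"
"""
print(flag_added_secrets(diff))
```

Run against a pull request's diff, a hit would raise the review flag Idan describes, rather than introducing a separate tool the developer has to learn.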

Speaker 2:

Yeah. Humans are terrible at assessing risk. I think that's one of the things that I've learned over the last 20 years. We're not good at that.

Speaker 3:

It's not only that we are not good at that. It's the simple fact that our mind cannot hold so many risk factors. And when I say risk factors, I mean, can you hear me? Yeah. So: from where do you deploy this application, is it on-prem, is it in the cloud? What's the test coverage that you have on the application? Which data does this application hold, PII, payment data? What's the business impact? What's the knowledge of your contributors? What's the result of the open source scan? What's the risk of the infrastructure-as-code? What is the risk of the application code? What's the attack surface? What's the outcome from your third-party scanning tools? What are the security controls? So many risk factors that a human being cannot calculate it all in a human mind. You need a machine to do that for you, and then just point you in the right direction and say: go here, to this specific change, this is the most risky change in your application. Go and have a meaningful conversation with the developer, or automatically bring in the compliance officer, because you added PII to the application.
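One simple way a machine can combine the risk factors listed above is a weighted score that ranks changes. The factor names and weights below are hypothetical, hand-set for illustration; a real system would derive them from data rather than hard-code them.

```python
# Hypothetical weights over a subset of the risk factors Idan lists.
WEIGHTS = {
    "internet_facing": 0.25,
    "handles_pii": 0.20,
    "business_impact": 0.20,
    "contributor_inexperience": 0.15,
    "open_source_risk": 0.10,
    "iac_risk": 0.10,
}

def risk_score(factors: dict) -> float:
    """Weighted sum of factor values in [0, 1]; unknown keys ignored."""
    return sum(WEIGHTS[k] * v for k, v in factors.items() if k in WEIGHTS)

change = {
    "internet_facing": 1.0,
    "handles_pii": 1.0,
    "business_impact": 0.9,
    "contributor_inexperience": 0.6,
}
print(round(risk_score(change), 3))
```

The point is not the arithmetic, which any person could do for one change, but that a machine can score thousands of changes across a dozen dimensions and surface only the top of the list.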

Speaker 2:

Right. And, you know, software security experts for years have been saying we need to build secure software and get the security people involved as early in the process as possible. It's a lot more efficient than trying to secure something after the fact: the classic built-in versus bolted-on debate. And this to me seems like something that is completely built to work in that process, the way software is built and delivered now, as opposed to the way it was delivered 20 years ago, when we'd build a point release and push that out every six months. Now everything is, as you said, CI/CD, and it's literally delivered hundreds of times a day. So if you're not paying attention to security at the very beginning of the process, you're going to have so many more problems later on, it seems.

Speaker 3:

I totally agree. And I think everyone has bought into it. For the last, I don't know, 16 months, I was talking with more than 250 companies, from 50-developer shops to 20,000 developers, and everyone has bought the idea that they need to integrate security as early as they can. Now, the problem, from our point of view, is that getting security into the CI/CD pipeline is good in some cases, but it's too late. You want security to be in the design phase, to prioritize across all the feature requests, all the user stories: what are the most risky features that are going to be developed in the next release? Then handle them at the design phase and run the contextual threat modeling or security design reviews as early as you can. The second phase is before the CI/CD pipeline. It depends on your development processes, but you can run the security assessment or risk assessment on your develop branch or your feature branch, before the main branch, before you trigger the vulnerability scanning processes. And at this point you can do a few things. One, as I told you, you can trigger automatic workflows and say: if I have these types of changes, I need to bring in a pen tester before I even release this code through the pipeline, because a pen tester might find vulnerabilities that you can't find with automated tools. And if he finds these vulnerabilities on these critical, material changes, then when the code goes through the pipeline, it will reduce the noise I get at the end of the process. The same goes for a compliance review. Or the same goes for: you know what, I don't have many people who can run manual processes.
And I want to define what a material change is in my development process, and then send only these risky material changes through the pipeline. Do a fork in the road and say: if it's not a risky material change, pass it and deploy directly to production. If it is a risky material change, go through my code analysis tools and software composition analysis system. And this is how you streamline security early in the development process. And in some cases, as I mentioned earlier, you can integrate the process into the pull request, when the developer is the right person to remediate the risk. For example, if the developer introduced PII to the application, it's not efficient to just tell the developer, hey, you introduced a material change, sorry, you introduced PII, because the compliance officer needs to approve the PII before releasing to production. So you can bring him in, in an automated manner.
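The "fork in the road" just described can be sketched as CI gating logic: non-material changes deploy directly, material changes go through extra scanning, and the riskiest ones also pull a human in. The stage names and the 0.8 threshold are invented for the example.

```python
def route_change(material: bool, risk: float):
    """Return the list of pipeline stages a change should pass through."""
    if not material:
        return ["deploy"]                 # fast path: ship it
    stages = ["sast", "sca"]              # code analysis + composition analysis
    if risk >= 0.8:
        stages.append("pen-test")         # bring a human in before release
    stages.append("deploy")
    return stages

print(route_change(material=False, risk=0.9))  # ['deploy']
print(route_change(material=True, risk=0.5))   # ['sast', 'sca', 'deploy']
print(route_change(material=True, risk=0.9))   # ['sast', 'sca', 'pen-test', 'deploy']
```

The payoff Idan points to is in the first line of output: most changes skip the scanners entirely, so the findings that do come back are ones somebody will actually look at.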

Speaker 2:

Is there a way, and I have no idea if this is possible or not, but is there a way to introduce this kind of technology and mindset to developers as they're learning to code, at university or in their initial jobs out of school, where they're trying to figure out how software is actually built?

Speaker 3:

So there are things you can do and there are things you can't do. You can learn the basics: you need to validate the input when you get data from an untrusted entity into your application, you need to sanitize the data when you store it in a database, and things like that. You can learn the basics of writing secure software, of course. But there are risks that you can't teach at school. Again, I'm going back to the example: I'm a developer, and I accidentally added your, I don't know, home address and the amount of money you have in the bank to an internet-facing API. It's not a vulnerability, but it's a risk. I can't teach you this. This is based on the context of the application, based on the context of the company you are working in. And so the answer is yes for the basics, but no for the other risks, which come from the essence of the application, the business, the industry that you are in. That is what we are trying to automate. And just to give you another example: I can't expose the name of the vendor right now, but we are working with a large security training vendor. Because we have a knowledge and experience profile on each one of the developers, and we know that you are a top-notch backend developer who deals with sensitive data, we can trigger contextual training while you are developing the code.
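The teachable "basics" mentioned above, validate untrusted input and never build SQL by string concatenation, look like this in runnable form, using Python's sqlite3 parameterized queries as the example. The table and values are made up for the demo.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, balance REAL)")
conn.execute("INSERT INTO users VALUES ('alice', 100.0)")

def get_balance(conn, name: str):
    # The "?" placeholder lets the driver handle quoting, so a value
    # like "alice' OR '1'='1" is treated as data, not as SQL.
    row = conn.execute(
        "SELECT balance FROM users WHERE name = ?", (name,)
    ).fetchone()
    return row[0] if row else None

print(get_balance(conn, "alice"))             # 100.0
print(get_balance(conn, "alice' OR '1'='1"))  # None: injection attempt fails
```

This is exactly the kind of rule a school can teach; what it cannot teach is whether exposing that balance on a given API is an acceptable risk for a given business, which is the context Idan is talking about.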

Speaker 2:

See, to me, that part, the context piece of it, is what really makes the big difference. Because, as you mentioned, there's vuln scanning tools, there's static analysis, there's all these different ways to find bugs in code. But there's a big difference between a developer who has a lot of experience with this application, knows what the risks are, knows he or she made this change in this way, and has been working on this app since its inception, and a completely inexperienced developer who is new to this project and probably shouldn't have made this change. People love to talk about security as a series of trade-offs, but the context matters in those kinds of decisions.

Speaker 3:

Spot on. This is the fundamental thing; this is exactly what Apiiro is doing. We're trying to put context into the code changes that you are making. And not, as I mentioned, only the technical context, but who are you, what's your knowledge across history. Let's say you've been working in this organization for five years, and for the last four years you worked as a backend developer, and now you're working as a front-end developer. Even though you've worked for five years at the same company, we will look at you, depending on the changes, as a risk factor in the code change itself, because you've only been writing front-end code for, I don't know, a few months. So you always need to look at the developer, and at who reviewed the code of this developer and what their knowledge is. Because from my experience, I saw scenarios where, you know, you and I are friends sitting together in the same room, you're a front-end developer, I'm a backend developer, and I ask you, Dennis, can you approve my pull request? I want to tell my boss that I'm moving faster. And this is a risk factor, you know? But it's only one. As I mentioned, we have so many. As we're always saying, code risk is multi-dimensional: it's the human factor, it's the technical factor, it's the business impact, and more and more and more.

Speaker 2:

How does the learning process for the tool itself work? Once it's deployed in my environment, how does it learn that context about the developers, what they've been working on, and what their roles and experience are?

Speaker 3:

So I can't expose, you know, the nitty-gritty

Speaker 2:

The secret sauce, the trade secrets

Speaker 3:

behind it, but at a high level, it's very simple. You connect us to the source control manager, like GitHub, GitLab, Bitbucket, Azure DevOps, we support all of them, via a read-only API. You provide us read-only access, and then you connect our platform to your ticketing systems, as I mentioned, where you manage your epics, bugs, features, user stories, and so on and so forth. Apiiro will start scanning the entire history of all your code bases, and Apiiro will enrich the code analysis with what you are writing in the Jira tickets, for example, if we are talking about Jira. Apiiro enriches this data from other places, running NLP algorithms on the text itself. And then we build a kind of simple graph of who is interacting with whom, and when. And the secret sauce is not in understanding the developer knowledge and experience. The secret sauce is eventually how to identify what a material change is. As I mentioned earlier, to give you a very simplistic example: if you changed the color coding of the login page, versus you changed the logic of a module in the application responsible for transferring money, billions of dollars a day or an hour for a business, that is a material change. And now, once you understand that this is a material change, let's look at who the developer is and what the context is: from where did you change this code, from which device, what's your location, who reviewed your code, and more and more data that we can enrich from different places. Again, I just told you about two data sources, but we can connect to your APIs and to other production systems to enrich the data of the code change itself. And there are other factors for learning who you are.
And one of the things that may be worth mentioning here is that you can scan the code as a snapshot, a point in time. And this is the patent behind Apiiro: you can scan snapshots, points in time, and say, hey, I understand the code, and then trace the developer as an identity across the history, and also across different repositories and, of course, different applications. And then we can learn your knowledge. As I mentioned, in one application you can be a top-notch backend developer, and in another one you can be a newbie front-end developer, or you moved on to be a product manager, I don't know, you know? Right.
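The per-application experience profile described above can be approximated from commit history alone. The sketch below is a toy: it classifies each commit by the directory it touches and tallies areas per developer per application, with the path-to-area mapping and the sample history invented for the example.

```python
from collections import Counter

def area_of(path: str) -> str:
    """Crude mapping from a changed file path to a code area."""
    if path.startswith("frontend/"):
        return "frontend"
    if path.startswith("backend/"):
        return "backend"
    return "other"

def profile(commits):
    """commits: list of (app, [changed paths]).
    Returns {app: Counter of areas touched}."""
    profiles = {}
    for app, paths in commits:
        counter = profiles.setdefault(app, Counter())
        for p in paths:
            counter[area_of(p)] += 1
    return profiles

history = [
    ("payments", ["backend/api.py", "backend/db.py"]),
    ("payments", ["backend/api.py"]),
    ("website", ["frontend/login.css"]),
]
p = profile(history)
print(p["payments"]["backend"], p["website"]["frontend"])  # 3 1
```

From such tallies you can see that a developer who is seasoned in one application's backend is a newcomer to another application's front end, which is the asymmetry Idan says the risk model must account for.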

Speaker 2:

Yeah, it all makes sense. I mean, it's a little bit like if somebody asks you, hey, do you know this person, and you're like, oh yeah, we went to college together twenty-five years ago, he was great back then. But if you haven't seen him since, you don't really know what kind of person that guy is now. You can vouch for what they were like back then, but you don't know what they've done since. You know, I really, really like this analogy. It's yours, you can have it, feel free to use it. It's recorded. Yeah, that's right, it's on the record. The other thing I wanted to ask you about: obviously the last few months there's been lots of news about supply chain attacks, and this is not a new problem, it's a very old problem. Getting backdoors or malicious code into a code base is one of the older techniques, and a very useful one. And I wanted to see what your thoughts are on how the techniques you guys have developed would help any software vendor protect against that kind of thing. Because it's not just big software vendors that have to worry about it. We've seen supply chain attacks on much smaller pieces of software than SolarWinds or Microsoft, for example, tons of smaller tools that don't make huge headlines.

Speaker 3:

So a lot of customers approached us when SolarWinds happened and was published in the news, and we developed a unique technology; we wrote three patents on this technology. And what we said is the following. We already know the knowledge and the behavior of the developers. We already deeply understand the code: the code components, the security controls, the data, and the business impact of every application. The missing part is how we can go and analyze the binary before you ship it to your customers. In the case of SolarWinds, what happened there? I'm sure our audience knows that the attackers breached the CI/CD build server and injected code into the binary before it was signed. So what we did is develop an API where every build server can send the binary to Apiiro. We analyze it, basically doing reverse engineering on the binary itself. We extract all the possible logic flows and symbols, and we filter out all the auto-generated compiler logic. And then we say: hey, we look at the binary and we see code changes that we can't see in the source control manager. We can't see this code in GitHub. How come? The developer triggered the build process through GitHub; he wrote the code, he committed the code, that initiated the build process that eventually generated this DLL file. How come the code that we see inside the DLL, the binary file, can't be found in the source control manager? And this is where we raise a flag. We break the build, and we say: you have a suspicious build, and these are the methods and the code changes that we see in the binary file and cannot see in the source code repository. And we can alert the SOC team.
We can alert the developer, we can alert the application security owner. And this is what we did to be able to prevent, not just detect, the SolarWinds kind of build-time attack that might affect other customers. Another attack vector that I think will be out there in the wild, and for sure we will hear about it in the next few months, maybe because of the Microsoft Exchange zero-day exploits, which allow attackers to get into the network very easily, is this: the attackers will compromise a developer's account and, on behalf of the legitimate account, commit malicious code, which eventually gets through the supply chain process and is delivered to customers. So to approach this kind of attack vector, we used our knowledge about every developer and built a behavioral model. By the way, this goes back to my previous startup. At my previous startup, Aorato, we were pioneers in the UEBA space, the user and entity behavior analytics space. So we took some of the concepts of UEBA and implemented them on developers and code changes. And now we can raise a flag and say: hey, Dennis made an abnormal commit, because he did it at abnormal hours, or from a different location, or a front-end developer committed backend code, and so on and so forth.
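A toy version of the UEBA-on-commits idea just described might look like the check below. The features (hour of day, code area) and the simple set-membership baseline are invented for illustration; a real behavioral model would be statistical, not a hard threshold.

```python
def is_anomalous(commit: dict, baseline: dict) -> bool:
    """baseline: {'hours': set of the developer's usual commit hours,
    'areas': set of the code areas they usually touch}, built from
    their history. Flag commits outside either baseline."""
    odd_hour = commit["hour"] not in baseline["hours"]
    odd_area = commit["area"] not in baseline["areas"]
    return odd_hour or odd_area

# A front-end developer who normally commits during working hours.
baseline = {"hours": set(range(9, 19)), "areas": {"frontend"}}

print(is_anomalous({"hour": 3, "area": "frontend"}, baseline))   # True: 3am commit
print(is_anomalous({"hour": 11, "area": "backend"}, baseline))   # True: backend code from a front-end dev
print(is_anomalous({"hour": 11, "area": "frontend"}, baseline))  # False
```

An anomalous commit wouldn't be blocked automatically here; as in the account-takeover scenario Idan describes, it would raise a flag for a human to verify whether the legitimate developer really made it.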
So I think we will see these two key attack vectors in the wild more often: attacking the build process and injecting code before the binary is signed and shipped, in this case to Microsoft, FireEye, and government agencies; and the newer attack vector, which the audience needs to pay attention to and may be hearing about for the first time, which is simply compromising a legitimate account and committing the code from there. And that is my view on the supply chain attack. For every customer I talk to, I tell them: do not install new updates from vendors without scanning them. I don't know if it needs to be regulation or whatever, but I think that for large enterprises relying on vendors, the risk assessment questionnaires of the vendor assessment processes are not enough. Again, it's self-attestation. You need to scan their code, with Apiiro or with other vendors, and make sure that the binaries they shipped to you are aligned with the source code they have in their source control manager. I'm not talking about vulnerabilities. I'm not talking about secret injection. I'm not talking about CSRF and cross-site scripting. I'm talking about something simple: just making sure that the binaries I'm deploying, as a bank, are aligned with the source code of my vendors.
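The binary-versus-source alignment check described in this answer can be reduced, for illustration, to comparing sets of symbol names. This is a drastic simplification: the symbol names, the compiler-generated allow-list, and the sets themselves are all invented, and the hard parts (reverse engineering the artifact, deriving symbols from source, filtering compiler output) are assumed away.

```python
# Symbols the compiler itself emits, to be ignored in the comparison
# (hypothetical examples).
COMPILER_GENERATED = {"<Module>::.cctor", "main::init"}

def suspicious_symbols(binary_symbols: set, source_symbols: set) -> set:
    """Symbols present in the built artifact that neither come from the
    committed source nor from the compiler: candidates for injected code."""
    return binary_symbols - source_symbols - COMPILER_GENERATED

# What we can derive from the source repo vs. what the artifact contains.
source = {"OrionImprovement::Collect", "Transfer::Execute"}
binary = {"OrionImprovement::Collect", "Transfer::Execute",
          "main::init", "Backdoor::Beacon"}

extra = suspicious_symbols(binary, source)
if extra:
    print("suspicious build, unexpected symbols:", sorted(extra))
```

When the set is non-empty, the build is broken and flagged, which is the prevent-before-signing behavior Idan contrasts with after-the-fact detection.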

Speaker 2:

Yeah. The problem comes in when, for the last 15 or 20 years, everybody's been saying patch, patch, patch as quickly as you can, and just deploy these things so that you're protected against the Exchange 0-days or whatever it happens to be. So that mentality of updating stuff as soon as possible is definitely out there. Not everybody can do it, but I understand what you're saying.

Speaker 3:

And I understand what you're saying, because I want to patch as fast as I can. But you can automate the validation, the verification of the binary versus the source code. You can do it automatically; you don't need to wait for a person to come and grab the binary and do it in a manual manner. Right? So it shouldn't affect the way you implement patches throughout your current processes. You just need another automated process.

Speaker 2:

I think the last thing I wanted to ask you, and I know I've taken up a lot of your time, is this, and I'm sure you've thought about it and had a ton of discussions about it in terms of the competition: is this something that the big software houses, like Microsoft, like Google, whoever it happens to be, can build into their own processes? Can they build their own version of what you guys are doing? Or do they even want to invest that money, I guess?

Speaker 3:

We want competition. We need competition to build a big market, absolutely. And you know what, I don't know, maybe we'll open up our patents to others to expand the market. We introduced a lot of new concepts to the risk management stacks of the application security and even cloud security domains. We introduced a new concept of focusing on material changes before they become vulnerabilities. We introduced a new concept of understanding the developer's knowledge and experience when assessing the risk of applications. We introduced the concept of automatically identifying the business impact of applications. So honestly, I want them to do that, and I want to have competition, and I want to have a big market. Because I'm a big believer in a risk-based approach, and I think that if we automate it, our delivery processes will be much faster and much more accurate, and we will finally be able to deliver on the promise of DevSecOps, which is to release faster. Right now, what's happening in the DevSecOps industry is cacophony: too many tools and processes, too many vulnerabilities, too many of everything. And we want to bring some order there and automate these processes based on our risk-based approach. And hopefully, I don't know, maybe we will partner with the big ones and integrate our technology into their software development lifecycle, or secure software development lifecycle, or they will build their own.

Speaker 2:

All right. Cool, man. Well, this was a lot of fun. I really appreciate you doing this, Idan. This was a super fun conversation, and I really look forward to seeing how you guys do. It's a great idea.

Speaker 3:

Thank you very much. I enjoyed it. Invite me back more often.

Speaker 2:

I will. You're in the rotation, absolutely. Great. All right. Thanks. Take care.

Speaker 3:

Bye. Bye.

Speaker 1:

[inaudible].