From Startup to Exit

Gen AI Series: Gen AI in Security, Sumedh Barde, Chief Product Officer, Simbian

TiE Seattle Season 1 Episode 21


Learn about Simbian, an AI security company. Simbian is the first security company to accelerate security by empowering every member of a security team, from the C-suite to frontline practitioners, to craft tailored insights and workflows for their unique security needs, ranging from complex investigation and response to governance and reporting. Learn how they pivoted from providing security for Gen AI to leveraging Gen AI to create agents that speed up security workflows and investigations.

Sumedh is the Chief Product Officer at Simbian, a cybersecurity startup that builds AI agents to autonomously handle security operations such as investigating security alerts, hunting for threats, and filling out questionnaires. Previously, Sumedh was Director of Security Programs at Meta, where his favorite project was measuring the business outcomes of security. Earlier he was a Partner GPM at Microsoft, where he led the product teams for data security products for cloud customers. Sumedh did his B.Tech in Computer Science at IIT Bombay and his MS in Computer Science at the University of North Carolina. Sumedh lives near Seattle and enjoys the beautiful Pacific Northwest outdoors.

Brought to you by TiE Seattle
Hosts: Shirish Nadkarni and Gowri Shankar
Producers: Minee Verma and Eesha Jain
YouTube Channel: https://www.youtube.com/@fromstartuptoexitpodcast

SPEAKER_02:

There are models floating around the dark web that only they have access to. And phishing attacks now are way more realistic than they were. You probably heard about the pretty famous incident of the finance guy in Hong Kong who got invited to a video conference where he saw his CFO and his colleagues all on the call, and he was told to wire money over, and it turned out they were all deepfakes. So, with attackers using AI, this inflation in security is going up like a hockey stick.

SPEAKER_09:

Welcome to the From Startup to Exit podcast, where we bring you world-class entrepreneurs and VCs to share their hard-earned success stories and secrets. This podcast is brought to you by TiE Seattle. TiE is a global non-profit that focuses on entrepreneurship.

SPEAKER_01:

My name is Gowri Shankar. I am here with my co-host Shirish Nadkarni. First of all, thanks to everybody who has supported us on this journey. We have completed many, many episodes now. And thank you all for recommending us and following us on all platforms. Shirish and I serve on the board of TiE Seattle. It's a not-for-profit based all over the globe, and our chapter produces this podcast. Shirish is an author and he's written two books. We borrowed the name of his first book, From Startup to Exit, for our podcast. The second one is Winner Take All. It is my pleasure to host this episode again with my friend Shirish Nadkarni. Shirish?

SPEAKER_03:

Thank you, Gowri. Very pleased to have with us today Sumedh Barde, who is the chief product officer at Simbian, an AI security company. Sumedh brings over 15 years of product and engineering experience in security and digital rights management. Prior to Simbian, Sumedh was head of product at Microsoft for Azure's data security products and director of security programs at Meta.

SPEAKER_05:

And now he's chief product officer at Simbian, spelled S-I-M-B-I-A-N. Sumedh. Hi there. Thank you for having me on the show. Great.

SPEAKER_03:

So let's start with your prior background and experience. You were at Microsoft for a long time, and then you joined Facebook for some time, and then you ended up at Simbian. So tell us a little bit about your journey, what you worked on, and how you decided to join Simbian.

SPEAKER_02:

Yeah, so at Microsoft I joined initially in the engineering team, did multimedia and graphics for a long time, and by a quirk of fate I ended up in the security team. Even in the multimedia team we were starting to do a little bit of security, and I got interested in it. So from about 2012 to 2022, 10 years, I was in the Azure security team. I was in product management over there, I started the data security team, and then the team grew. By the time I left in 2022, I was heading up all of the encryption, PKI, key management, all of the data security side of Azure. It became fairly mature. It was an awesome journey, starting from zero and winning customers one by one to the point where pretty much most of the Fortune 500 was using the software that we built. It became mature, so it was time to do something different. I always wanted to see how some of the other big techs were handling security, because I'd only seen one side of it. That's what made me jump over to Meta, and I learned a lot through it. Meta is a very different company from Microsoft; the way they approach problems is very, very different. There too I was in the security team running security programs, and along came ChatGPT at some point. As we all know, it just took off like a hockey stick: within one month of releasing there were already, I think, tens of millions of users on it. And then one guy I had worked with — he was not at Microsoft; when I was at Microsoft he had his own company doing identical things, not exactly a competitor, but it's a small world in security and we kept in touch. So we buddied up saying, hey, you know what, anytime a new computing paradigm picks up — which was true with mobile, it was true with cloud — security is never far behind. Whenever a new paradigm picks up, there's usually a big opportunity for security. So do you want to buddy up on that? That's how I ended up on a startup journey, because it seemed like the right time, a great opportunity, something was going to happen. To be honest, we had no idea what the opportunity was. But the general rule of thumb that anytime a new paradigm picks up, security is never far behind, has always been true and still is true. So that's where we jumped in. And we pivoted many times after we started, because we had no idea what we were getting into.

SPEAKER_05:

Okay. Great.

SPEAKER_03:

So tell us a little bit about Simbian. What space is it in? We always talk about security, but be a bit more specific. What is the offering that you offer?

SPEAKER_02:

Simbian — the name doesn't really mean much, it's just a unique name — is a cybersecurity startup applying AI to make security operations autonomous. What is security operations? Pretty much every large enterprise, or actually every company for that matter, not just large enterprises, buys a bunch of security products, and they may build their own software also. But either way, between all the products and the software, that's never sufficient by itself to keep the company secure. You always need a team of people, 24-7, operating all of that. As an example, Shirish, the identity software might detect that Shirish is logging in from Seattle and logging in from Tokyo within five minutes of each other, and will throw an alert saying that doesn't seem possible. How can he travel from Seattle to Tokyo within five minutes? It's called an impossible travel alert. Most identity software will throw that, and it has no idea whether it's legitimate or not. So then some person, some human, has to actually look at it and say, is this legitimate? Maybe he uses a VPN, maybe he has a virtual machine sitting in Tokyo that logs in as him, maybe he has shared his password with his spouse or secretary or someone who is logging in as him. Who knows? It could be perfectly legitimate, or it could be that, no, his password is hacked and someone else is logging in as him from Tokyo. Distinguishing all of that requires a human, because that is business context. So the thing with security has been that while people are spending billions and billions of dollars — almost $125 billion on security software today — they're spending about as much on labor and services. And guess what? As the world moves on with AI, as adversaries start using AI to create more security headaches, companies are going to have to ramp up that spend. It's easy to ramp up the infrastructure itself: you just throw more machines at it and it scales. What do you do about the people? There just aren't enough trained people in the world. So it became imperative, not only because it's expensive today, but at the rate at which this is going up, that we find some way to make the human side of it more scalable. And that's what we do. We take specific problems in security operations and ask, if we were to make this completely autonomous, what would it take? We are a long way out from making all of security operations autonomous, but we have narrowed down a few use cases where we are able to say, with pretty good results, that in these kinds of situations we can let the AI make decisions using business context just like humans do. And that gives you a few things. Number one, I don't need a team that large. Let's say I have five people today, but realistically I need about 30 people to really run this, so I'm short 25 people. I'm not going to get that much budget, but I can buy AI from Simbian that does the job of those 25 people for a fraction of the cost. So, cost savings. Second is the ability to go much faster: whatever we do goes at machine speed, and the faster you go, the more secure you are. And the third thing it gives you is the ability to adapt as the situation changes — humans don't learn as fast either. So you get better security from that perspective also. In short, making security operations autonomous is the bigger picture, and there are some specific use cases we address.
Right now — the scenario I talked about is called the security operations center use case, where there is a team of people looking at alerts and trying to triage and investigate and see what's happening. That is one thing we have automated. The other thing we automated is something called threat hunting, where every day Google, Microsoft, Palo Alto, CrowdStrike are releasing research saying, hey, here's what we are observing with this threat gang, they are attacking the banking sector in East Asia. If you are one of those companies, then you're incentivized to go search through your company's environment to see if that threat is active in your environment. We are automating that scenario as well: instead of having people do it, AI now searches through your environment for patterns of abuse. The third scenario, and this is a very simple scenario: security teams are swimming in documents all the time — my customers asking me something, and then me looking up my own specifications to answer. Or maybe I'm reviewing my vendors, saying I am dependent on so-and-so vendor — maybe I use Atlassian, maybe I use Amazon — and I need to look at their documents and see, do these guys meet my requirements? So it's comparing lots and lots of documents. Answering those kinds of questions from a corpus of documents — we have a questionnaire agent for that. So each of these use cases is packaged up as an agent: a questionnaire agent to answer questionnaires from a corpus of documents, a threat hunting agent which takes a threat report and hunts through your environment, and a SOC agent that takes alerts and then investigates them autonomously. Those are our three main use cases.
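
To make the impossible travel alert Sumedh describes concrete, here is a minimal sketch of the kind of rule an identity system might run; the function names and thresholds are illustrative assumptions, not Simbian's implementation.

```python
from math import radians, sin, cos, asin, sqrt
from datetime import datetime

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def impossible_travel(login_a, login_b, max_speed_kmh=900):
    """Flag a pair of logins whose implied travel speed exceeds roughly
    airliner speed. Each login is a (timestamp, latitude, longitude) tuple."""
    (t1, lat1, lon1), (t2, lat2, lon2) = sorted([login_a, login_b])
    hours = max((t2 - t1).total_seconds() / 3600, 1e-6)  # avoid division by zero
    return haversine_km(lat1, lon1, lat2, lon2) / hours > max_speed_kmh

# Seattle login, then a Tokyo login five minutes later -> the alert fires.
seattle = (datetime(2024, 5, 1, 9, 0), 47.61, -122.33)
tokyo = (datetime(2024, 5, 1, 9, 5), 35.68, 139.69)
print(impossible_travel(seattle, tokyo))  # True
```

The detection itself is cheap; the expensive part is what follows — checking for VPNs, shared credentials, or automation — which is the business-context judgment the SOC agent is meant to take over from the human analyst.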

SPEAKER_04:

Okay, got it.

SPEAKER_03:

So I understand that your company is venture funded. Who are some of your investors, and how much funding have you raised?

SPEAKER_02:

So our lead investor is Cota Capital in the Bay Area, and then there are three other investors: Icon Ventures, Rain Capital, and Firebolt. Together this is a little north of $10 million. This is a seed round. A seed round of $10 million — that's pretty interesting. It is, yeah, to some degree. If you are in security and you are in AI, the seed rounds tend to be on the larger side right now, because AI is expensive, so you either have to go all in or you might as well fold up. So that is part of it. But definitely the team we have assembled has a very strong track record. Our founder is a second-time entrepreneur; his previous company had raised about $135 million and became the second largest data security company. We also have the CTO of IBM Security with us, and our CTO comes from a deep data background from Twitter and a few other startups — a very, very strong team. So that was part of it also.

SPEAKER_03:

That's true. So you said that when you started off, you didn't really have a specific product idea. So what did you guys pitch to your VCs? Did you pitch a specific product, or did you pitch a high-level kind of rationale or strategy?

SPEAKER_02:

It's interesting. So remember I said anytime a new computing paradigm comes up, security is never far behind. Cloud becomes popular, then people want to know, how do I secure my cloud assets? Mobile becomes popular, people want to know, how do I secure my mobile assets? Similarly, the initial interest when ChatGPT picked up, and Gen AI in general picked up, was people wondering, wow, my employees are using it whether I like it or not. How do I secure their use? How do I make sure that people use ChatGPT without leaking their organization's information to ChatGPT? That's part one, data leakage. Part two is prompt injection. Let's say I'm using Microsoft Copilot. If you and I are both part of the same company and I don't like you, I can potentially craft content in our shared repository that hijacks the prompts that you're giving, essentially, so it leads to behavior that you didn't intend. Hallucination is a third one, where I might depend on ChatGPT, or these Gen AI models in general, so much that I take the output as gospel, use the answers, and then do things which can actually cause big harm. So all these attack vectors were big. That's where we started originally. We said, let's go figure out a way to secure the use of Gen AI in companies. We talked to a lot of companies. Everybody found it interesting, but we were not really getting any real bites in terms of, let's go to a proof of concept or a proof of value. So we sat down with our advisors, and they and we together came to the conclusion that this entire market of Gen AI adoption at the time was very frothy. As in, everybody was interested in it, everybody said they wanted to do something about it, but nobody really had a structured way of approaching it. On top of that, the market was changing very fast. At the end of the day, you realize that security is always the tail to the dog, not the dog. So first you have to be convinced that it's a dog, not a horse, not a lion. First it was ChatGPT and equivalents like Gemini and Bing. Then people said, okay, no, I want to use the model directly and build my own chatbots. People then built language frameworks on top of it. Then the second order of SDKs came around, like Microsoft Copilot and so on. So, first of all, which model is going to stick? Each of them has different security problems. Which layer of the stack is the one that's going to be popular? All of that was evolving really fast and still is evolving. So then, in terms of security, what is the thing that we are securing? That itself was becoming very fuzzy. We could invest a lot, and in a week something else comes up, and then we have to throw out that investment for something else. Around that same time — we knew that, look, this was a bet, so we were floating multiple hypotheses all in parallel, depending on who you talked to. So we also started floating the converse problem: what about applying Gen AI to security? And we started getting way more bites on that. People were saying, you know, I have this real problem today, I have this pain, can you solve that? And they were all over the place. We threw some hypotheses out about what we could solve, but the asks were really all over the place. And it was initially hard — as a CPO, it was hard for me to even know what to bet on.
Then we just followed the general dollars and said, look, the SOC space generally has a heavy amount of investment, so let's at least invest there. That may not be where we stop, but let's at least invest there. So we started doubling down, started double-clicking, and definitely there were a lot of bites there. People were interested. So we built a copilot. At the time we thought AI was not ready to be fully autonomous — there were too many issues — so let's build a copilot. Around the same time, Microsoft released their own copilot as well, so everybody was kind of gravitating toward the same idea that this is it. Interestingly, we built it out, we talked to lots of people, but then the thing we kept getting back was, I don't want to spend on you and still spend on the person operating you. I just want the problem taken away. Okay, so then we evolved from there to building something called autonomous agents — agents that behave like humans, that just take one problem at a time from you, solve it, and you're done. That is really landing well. Since then, the amount of interest we've been getting from large companies as well as service providers has been really high. It's hard to build those — hard to build them functionally, correctly, as well as reliably — so it's taken a lot of hard work. Anybody looking at it as just a wrapper over Gen AI is not going to cut it. You have to bring in security knowledge, you have to build lots of other things around the core Gen AI piece. But finally we are there, and I think we have found our sweet spot at this point.

SPEAKER_03:

So, just to be clear, when you first raised your venture funding, it was around the hypothesis of securing ChatGPT usage. Right. And then, when that didn't quite get the bites you were looking for, you evolved into a copilot and then eventually into agents.

unknown:

Okay.

SPEAKER_03:

So how was that journey with the investors? Were they helpful in the process of figuring out what the right strategy for the company is?

SPEAKER_02:

Yeah, our investors have been very supportive all along, in different ways. Some in terms of our marketing. One of our investors has been really awesome in terms of helping us actually get into enterprises. So yeah, they know that this is part of a startup journey — to go figure it out. As long as we are doing all the right things to fail fast, discover, and then actually land wins, that's what matters, right? Not being attached to any particular thing. I think even early on, in the early days of Gen AI, I don't think anybody really knew where the money was. So everybody was approaching it with an open mind to some degree — that we'll have to keep pivoting a little bit to find where the opportunity is.

SPEAKER_03:

So in total, how many customers did you talk to? I'm just curious how much time and energy your initial research took.

SPEAKER_05:

It was a gradual journey.

SPEAKER_02:

I would say that between me and our founders, we probably easily talked to 100 plus. Both, you know, some customers and some friendly people who are not necessarily buyers but knowledgeable enough about what they would do if they were in the shoes of customers, and so on.

SPEAKER_03:

And did you actually build out the ChatGPT solution first, or did you just do the research first? You did, okay. You did, and you tried to sell that and didn't get the bites that you were looking for.

unknown:

That's right.

SPEAKER_03:

Oh wow, okay. So in hindsight, do you think you should have done it differently? Meaning first do the research and then build the product?

SPEAKER_02:

There is certainly value to doing that. There are two problems with it that we found in practice. Number one is that it's hard to get an in unless you have something working. Many people won't even give you an audience if you don't have something working. So your ability to execute itself puts you in probably the 30% of people who can execute versus just talk about an idea. That's part one. But part two is something I learned early in my Microsoft career. I used to be the kind of person who designed a lot before I wrote any code. And I had a peer who did exactly the opposite — he would just write code first and figure out what went wrong. By the time I got to my first line of code, he had implemented the product like five times over. The interesting thing I learned from him is that the act of doing actually teaches you lessons about questions you would never have cared to ask. Apart from the five questions you think you want to research, there are probably 15 other questions lurking that you never thought about. Only when you actually do it do you care to ask, oh, I'm facing a fork here, let me find out whether that matters to someone or not. So it's a mix of both; I don't think you can really do one or the other. We had parallelism. We had some people on the engineering side just building out stuff. There's that old diagram, right, where you invest where the technical uncertainty is high but the business certainty is low, and vice versa. So we basically ran in parallel: there were a couple of us just reaching out to our network trying to understand where the opportunities might be, regardless of whether it was possible to build, and we had a small team of engineers just building out whatever was possible, without knowing whether there was a business behind it or not. And then we were exchanging ideas constantly, trying to find the commonality.

SPEAKER_05:

So in some ways.

SPEAKER_02:

So we almost ran it like a research lab in some ways in the early days.

SPEAKER_03:

So how long did the whole process take you? Like when did you start and how many months has it been since you started?

SPEAKER_02:

It's been a little over a year now — a year, year and a half, I think. The first pivot took us, I think, a good five months or so, when we realized that with security of AI itself there was a lot of interest, but nobody willing to put their money where their mouth was. Yeah. And then after that, it took us probably another three to four months to realize that, okay, the copilot is interesting, but people want more. That copilot is still there — we still have it, it's still part of the product — but it's not sufficient.

SPEAKER_06:

Right.

SPEAKER_05:

Right.

SPEAKER_03:

And did you hire salespeople to go sell your initial ChatGPT solution, or was it founder-led sales at that point?

SPEAKER_02:

No, initially the first couple of pivots were completely founder-led. It's only after we actually started seeing — I wouldn't say we are at PMF yet, but we are seeing, okay, there is a certain message that's landing quite well, there's definitely interest, and it's a buildable technology — and now that we actually have paying customers, that's when we started doubling down on our GTM and hiring a proper sales team.

SPEAKER_05:

All right, let me turn it over to Gowri to continue the conversation.

SPEAKER_01:

Hi. So this is fascinating — I'll get to the product itself, but the journey of iteration and discovery is fascinating, because you had to find at least a pseudo market fit to figure out whether you should continue to build something or not, right? While you were going through that, was there a consideration that either incumbents or the LLM providers could actually build something out quickly, because they already had the enterprises as customers? Was that a consideration in how you picked the pivots — okay, there's some green space, there's some time, or no, these guys want to do it themselves? That pivot seems very fascinating, and at the same time you were very intentional about it, yeah?

SPEAKER_02:

I mean, there's always that risk, right? That somebody big is going to creep into it. But given that nobody really knew where this was going, and given that the bigger companies tend to want to see evidence of a clear, large market before they invest big, it gave us some runway, I guess, in that sense — that, okay, if we run fast, we'll have something working before the big guys even bother to invest a whole lot. So we were conscious of making sure that we are not colliding directly with some of the biggest companies for whom this is their bread and butter. As far as anybody other than the biggest companies — whoever else you'd call incumbents there — it's a completely new space, so there wasn't really an incumbent from that perspective. Yes, in cybersecurity, in terms of SOC automation, there are a lot of incumbents, pre-AI incumbents, who have tried automating this whole SecOps space with just general code. And could they compete with us? Yes. Each of them is trying to evolve their software to incorporate parts of AI. In some ways, the innovator's dilemma is holding them back: they are pretty successful at what they're doing, and they are dipping their toes into AI in very, very cautious ways. So again, that gives us some runway. Will they get there someday? Yes. But at the same time, if we execute fast, the learnings we are having are very valuable. I mean, every startup has a risk from that perspective.

SPEAKER_01:

Yeah, right. There's no question about that, right? It's just the question of how you decided what to stay away from. Seems like you made some choices there. That's extremely interesting. Going back to the product: structurally, you are built on top of the LLMs, right? Did you first master one?

SPEAKER_02:

It's not just that we are not competing — we actually have partnerships with some of the biggest players there. From their perspective also, rather than investing in-house, sometimes it's better for them to partner with a startup that is willing to take the risk in this area, see where that goes, and then take it from there.

SPEAKER_01:

Yeah, yeah. So it seems like your go-to-market strategy is also evolving, because you're able to get there through these partnerships, right? Getting back to the product: as a stack, you have the LLMs at the base. Did you start with one and expand to others, or did you start with one and go deep with it? How was your thinking — did you have any unique expertise in one over the other?

SPEAKER_02:

We have tried to keep it generic. That said, the amount of investment you put into one LLM certainly doesn't directly translate to another; you always have to do some custom work. But to the degree we can, we've tried to be agnostic, in the sense that there are some cases where it doesn't really matter — the same prompts will work across both. We keep experimenting with which LLM is good at what, and then we use a mix of LLMs throughout; for every single use case it's a different one. Did we build expertise in one versus another? Certainly there are a few — the big ones — that we are more familiar with than some of the very niche ones. But other than that, we don't want to bet on just one. Each of them definitely has strengths and weaknesses.
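
As a sketch of what staying model-agnostic and mixing LLMs per use case can look like in practice — a hypothetical illustration, not Simbian's actual code — a thin common interface in front of several providers lets each use case route to a different backend:

```python
from typing import Protocol

class LLMClient(Protocol):
    def complete(self, prompt: str) -> str: ...

class OpenAIClient:
    def complete(self, prompt: str) -> str:
        # vendor SDK call would go here; stubbed out for the sketch
        raise NotImplementedError

class AnthropicClient:
    def complete(self, prompt: str) -> str:
        raise NotImplementedError

class LocalModelClient:
    def complete(self, prompt: str) -> str:
        raise NotImplementedError

# Each use case is routed to whichever model experiments showed works best for it
# (the mapping below is purely illustrative).
ROUTES: dict[str, LLMClient] = {
    "alert_triage":   OpenAIClient(),
    "threat_hunting": AnthropicClient(),
    "questionnaire":  LocalModelClient(),  # e.g. a customer-hosted model for data-residency reasons
}

def run(use_case: str, prompt: str) -> str:
    """Dispatch a prompt to the model configured for this use case."""
    return ROUTES[use_case].complete(prompt)
```

With a routing table like this, changing which model handles threat hunting versus questionnaires is a configuration change rather than a rewrite, which is what keeps the experimentation Sumedh describes cheap.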

SPEAKER_01:

So if you look at the SecOps space, especially when you're solving the human problem, there's the cost as well as the response agility, let's call it, right? Even with 25 people, in the example you gave, I may not be able to respond fast enough, because they're only reacting, not being proactive — which is sort of what you're trying to solve. It seems like, so far, SecOps has been sort of the bread and butter of the global SIs, right? They have people spread over the world — labor arbitrage, cost arbitrage, coverage over 24 hours, etc. So do they see you as a tool that they could use? Or are they saying, we could build this on our own? Where is the cooperation, in some sense, with these global SIs, especially in SecOps?

SPEAKER_02:

So many of our customers actually are what are called MSSPs — managed security service providers. And there are both models: some of these MSSPs sell through — they're also a reseller, they sell us to their enterprises and then they operate the software — and some of them use it internally to deliver their entire service, and the end customer doesn't really know. Do they see us as competition? Not for the most part. I would say there are a handful who have built their own software stack and are pretty good; they have good software engineering muscle. But for the rest, their primary strength is their service offering, not AI per se. Yes, they are building out their own AI, and we have arrangements with some where we might end up actually using their LLM rather than the ones we bring, and so on. So there are some customizations here and there; we have been fairly flexible in how we engage with MSSPs. We want to make sure that together we solve the problem for our mutual customers. So there are some white-label cases and others where they sell through directly — the whole range exists. They actually see us as an opportunity way more than as competition overall. For most of these MSSPs, there is a talent shortage in the world. When you look at these SecOps analysts, there are tiers — tier one, tier two, tier three. The tier ones are the ones you can hire right out of college. You can train them, you can give them a punch list of do-these-things and have them follow those instructions directly. But even then, it takes a few months before they can truly interpret, because no two situations are identical — even with a punch list, the next alert might be slightly different from the previous one, and so on. So it takes a few months for them to be truly good at their jobs. Tier two and tier three require years of experience. You cannot hire them out of college, and you cannot just train other people overnight. So as demand grows, you cannot find enough people, and that talent shortage in some ways is holding back business for these MSSPs. If they could get more people, they would be able to handle more clients.

SPEAKER_05:

That is part one.

SPEAKER_02:

Part two is that even for the clients they have, if their service is primarily manual, there is a limit to just how much they can cover at those clients. And that leads to quality issues, that leads to dissatisfaction issues, and so on. So if they can scale out — imagine that I am providing a service to a client. There are always questions, like the example I was giving of Shirish logging in from two different places. So many questions come up, there are so many possibilities to explore, and I have only limited time. If I could run at machine speed, I could provide a much higher quality answer than if I were doing this manually. So they are extremely incentivized toward any automation that helps them scale out both their quality and their business. From that perspective, there is definitely, I would say, way more incentive to partner with us than to compete. And that's exactly what we're seeing. Even with the partnerships we are getting right now, it takes time to set each one up, and we are actually constrained in how fast we can go on that.

SPEAKER_01:

Right, right, right. Because they each have their own scenarios, and now you have to build those out, exactly. So as the product head of Simbian — I'm sure you're keeping an extremely close eye on closed models, you know, ChatGPT or Anthropic, and then open models, right, Llama, etc. And then some are in between, and there's all this talk about agentic AI and small language models, right? Do you make conscious choices on open versus closed source — philosophical, or cost, or otherwise? Or do you say, hey, I've got to get the best product? As the product head you're making a very important decision in how your product interacts with your customers, but there's a base understanding of the LLMs that you're absorbing. How are you deciding on open versus closed, or somewhere in between?

SPEAKER_02:

I think the quality of the final deliverable is the guiding factor here. Our customers have a scenario or a use case to solve, and we do whatever it takes; the technology we use is secondary at the end of the day. Our goal is to make sure that as our customers get alerts, for instance, we can resolve them speedily at very high quality. If a certain model works better than another, we'll use it. If in certain cases we have to bypass LLMs completely and use our own code, we'll do it. So the technology in that sense is secondary. We do make heavy use of LLMs under the hood, and without that we wouldn't have been able to build what we are building. But my point is that we don't really have a religion around open source versus closed from that perspective. We're just trying to solve the problem itself.

SPEAKER_01:

Right. So you're coming from, hey, does it help our customers, and therefore how should I use any of these, right? Absolutely. Absolutely. So the topic du jour is obviously agentic AI. Everybody's talking about it. Everybody thinks 2025 is the year the agentic architecture takes hold within the enterprise. And then there's this constant conversation around small language models, right? You clearly are building agents and they're autonomous — that's the first thing where you're getting traction. Are you also seeing that you're building a SecOps language model, so to speak? Because you're seeing scenarios so often through your partners and customers, you are building expertise and your agents are getting better. The LLMs may be the foundation layer, but not the delivery layer, so to speak, for lack of a better term. Are you seeing that as your natural progression?

SPEAKER_02:

Yes. I think it's inevitable, both for performance reasons and privacy reasons. The big models are just not trained with all of the right data — they're general-purpose models. So first, bring in all the right data to train, not just generically but also in the context of specific customers, along with the realities of cost and performance. I really don't care that my model is amazing at writing poetry — that's a waste of training data and model weights, right? I really don't care that it can do awesome math and solve all those problems. So from that perspective, yes, absolutely.

SPEAKER_01:

Yeah, yeah.

SPEAKER_02:

So is that the most burning question right now? No.

SPEAKER_01:

Yeah. You also talked about how you noticed where the adoption curve within the enterprise shifted and where the leaning was. And every CISO — I'm sure you talk to the chief security officer of every enterprise — wants this balance between protection and AI implementation, right? At the end of the day, they're measured on how well their enterprise is protected. And then you have all these various actors who happen to have the same access to the LLMs that you have, right? So now you have this sort of chess game that you're playing in SecOps, which has always been the case, but it's now heightened, so to speak, because they have access to similar tools. Where do you think the pulse of the CISOs is — where were they pre-2024, pre-2025, versus what you see as the chief product officer? Is this where they're heading, or are you navigating them toward the best path to protect their assets?

SPEAKER_02:

So here's an analogy, right? An investment analogy: if I keep my money as cash, I'm actually falling behind, because inflation is just natural. I have to make sure that my money keeps growing just to keep pace with inflation. Security automation is the equivalent in cybersecurity. If I use yesterday's approach — if I'm not actively thinking, okay, my software and I are doing ten people's worth of work, how can I automate some of what those ten people are doing — then I am falling behind. Because tomorrow's load will be more than today's, and the day after tomorrow's load will be more than tomorrow's. That's just the natural law of cybersecurity: the load will keep increasing. So every day you have to keep asking, what am I going to automate next? And that's not new — that's always been the case. What's new now is what you just observed, which is that adversaries have access to the same models that you have. In fact, they actually have more: there are models floating around the dark web that only they have access to. And phishing attacks are now way more realistic than they were. You probably heard about the pretty famous incident of the finance guy in Hong Kong who got invited to a video conference where he saw his CFO and his colleagues all on the call, and he was told to wire money over, and it turned out they were all deepfakes.

SPEAKER_05:

So with attackers using AI, this inflation in security is going up like a hockey stick.

SPEAKER_02:

If you're not actively thinking about how to automate and help your people scale, you are falling way behind. And so every CISO we have talked to is actually incentivized from that perspective, saying, how do I stay ahead of the curve? I don't want to be sitting here in six months regretting that I did not invest in AI. Many of the customers we talk to come with very specific problems, saying that particular thing is painful for me, I need to automate that. But there are many we are seeing who are just being forward-looking, saying, look, I don't know whether it's this problem or that one, but I know I need a good automation framework and to automate more and more. So what do you have?

unknown:

Right.

SPEAKER_01:

Right, right. That's it. Let's shift the conversation a bit to your personal journey, right? Our audience has a number of folks much like you, who have spent a ton of time at a hyperscaler or the big seven, so to speak, and are either contemplating or have gone into startups. In your personal journey, you took a fairly big leap from Microsoft and Meta to a complete startup, which, you know, may have attracted funding, but was flying at a risk level that was unclear. What advice would you give to founders who are going through the journey you were on maybe some months ago, maybe a year ago, coming from a large company to a startup that was still figuring out what to sell and to whom? You seem to have made that transition quite well. What advice would you give to others listening to this, looking at you and saying, I want to make that jump?

SPEAKER_02:

So, I mean, first of all, the decision to be in a startup these days, with big tech paying so well — I would say the decision to go to a startup is often just emotion, passion, fundamentally irrational many times, right? The big risk, big reward kind of thing. That being said, I definitely learned a lot in my journey, just in the year and a half or so, less than a year and a half, that I've been here. I was fortunate that our founders both came from a startup journey, and so I learned a lot from them.

SPEAKER_05:

Um in big tech, in big companies, we are almost trained to optimize for success.

SPEAKER_02:

What I mean by that is that many times, when we take on a bet, it has already been vetted by many layers before it even comes to you. And the entire machinery around you almost makes sure that you succeed more often than not, one way or the other. It may not work out exactly the way you want, but you're not going to be out on the street. And many times what happens is that we prematurely start thinking in terms of, how do I design this right so that I can build it for the long term, scale it, have great quality and all of that. That tendency was second nature to me from big companies. Coming here, I was initially doing the same thing, until I was reminded: hey, we don't even know if this idea has legs, why are we doing that? Let's go test the hypothesis first before we double down, right? I had to unlearn that habit a lot initially.

SPEAKER_04:

Yeah.

SPEAKER_02:

Now it's like, okay, here's a hypothesis — what's the cheapest way I can test this before I actually double down on it? That, really, I think is the biggest thing. If anybody wants to go from big companies to startups, that's the biggest thing I would say. And then the other thing was, I think, something Shirish had asked about: did you research first or did you build first? It's not one or the other. In big companies, we had the luxury, I would say, to do a fair amount of research, because our previous product was still paying our salaries; while it's doing that, we can spend 10%, 20% of the time researching the next one. Whereas here, you're starting at zero, you don't have anything, and you don't have the opportunity to just do research for a long time. So you build something — it may not be what you're building for the long term, but you have to do something. Bootstrapping that just requires a different mindset and a different willingness to take risks and go out on a limb sometimes.

SPEAKER_01:

Right, right. So it seems like: unlearn a bit from your past, and implement the learning of build fast, fail quick. Yes. It seems like those are two tenets that you got out of this journey. That's great to hear.

SPEAKER_02:

I think the most important thing also is having the right team and team relationships and keeping the bar super high. In the early days especially, if the entire team is not all in — hey, I want to make it big — if there is even one person kind of dragging it down, it can spread. So you really have to think differently, as compared to big companies, where people are optimizing for a marathon rather than a sprint.

SPEAKER_04:

Yeah.

SPEAKER_02:

The kind of people you attract around you, the way HR works, what everybody wants to do — it's a different philosophy compared to big-company hiring and such.

SPEAKER_01:

Uh how big is the company in terms of employees now?

SPEAKER_05:

Right now it's a little over 30.

SPEAKER_01:

Still on the smaller side. So, a lot of founders out there are either thinking about or going through this: hey, we can build something using LLMs, right? You went through that journey. What advice would you give them, given that the LLMs are moving at breakneck speed — every day something new comes out, it's getting better, the number of parameters is increasing or decreasing — because you have a codependent relationship with them, and you have discovered that codependency in a fashion that is useful for your customers. If I'm out there thinking, hey, I can build this on an LLM, why should I not do it, on any LLM, right? How would you characterize your journey and advise somebody saying, yes, there is the LLM, but what are you building? You seem to have gone through that over the last year and a half to get to this point.

SPEAKER_05:

I mean, there's a lot.

SPEAKER_02:

I would say almost every day I see dozens of little projects around the web where people have taken an LLM and built a little bit around it that goes in the direction of solving a problem. I've seen, as an example, somebody who said, oh, here is a threat modeling tool that lets you threat model your software. Here is something that creates fuzzing tests for your code.

SPEAKER_05:

So tons and tons of stuff.

SPEAKER_02:

They're almost always technology upwards, right? You start with a model and you add a little bit of stuff on top, and yes, it goes in the direction of helping. But it leaves so much unsolved in terms of the fit and finish of the overall product that they're not really usable, and they're very easily copyable as well. The traditional advice — stay focused on the customer and how they're using it, truly understand the customer, their mindset, what's hard for them, what's easy for them, what a delightful user interface looks like for them, what the end-to-end experience is, the little things they care about — all of that, just like with traditional products, continues to be the bulk of what makes a product successful. In some ways, the LLM is an enabler, but not necessarily the bulk of a successful product. Technology-wise, maybe, but for all those little things — the LLM is the thing that gets you about 80% of the way with 20% of the work, but the other 20% is critical, and that takes the other 80% of the work to make it happen. People have to recognize that. You cannot be lazy building something around LLMs and succeed. And when you think about it that way — when you own the final surface and you're conscious that the way you're using LLMs is just an enabler — you automatically start thinking in terms of how not to marry yourself to a particular LLM, so you can keep switching. It just becomes another dependency that you have.

SPEAKER_01:

Yeah, yeah. And it looks like even in your journey, you started somewhere, listened to the customers, and figured out that it wasn't going to cut it in the near term — maybe it will in the long term — so in the near term you abandoned it for something better, something different, that works for your customer. And there's this emotional part, right? If you work in big tech or big companies, you don't have to abandon anything. You can stretch it out for a while.

unknown:

Yeah.

SPEAKER_01:

Right? But you had to abandon it, because otherwise, next month, somebody else is building it, not you. The need isn't going away; somebody else is building it. Seems like that's an important lesson for founders out there. The unlearning part of it was knowing when to stop and when to shift. Seems like you made the jump, especially as the product head — that's the hardest thing, because you built something, it's your baby. So thank you very much. This is fascinating, the journey that you've been through. Shirish and I appreciate you coming on our podcast. I think our audience will really enjoy listening to your journey. I wish you the very best, and I'll hand it back to Shirish for a final question. Go ahead, Shirish.

SPEAKER_03:

Thank you, Gowri. I did have a final question, which is around hallucinations and building agents. That's one of the downsides of using LLMs. So how do you make sure that the agents using the LLMs don't end up in a situation where the LLM is hallucinating and not correctly performing its task?

SPEAKER_02:

That also, in some ways, falls into that bucket of the 80% and 20% I mentioned, right? There are a few pieces to it. There is generic software that people have built — in fact, that's what we started with as well — generic software that enforces this, and we in fact have a patent on it. That layer is called Trusted LLM, and it's a component of our larger software. It makes sure that as we use LLMs, there are certain guardrails around them.

SPEAKER_05:

That's one aspect of it.

SPEAKER_02:

But the other aspect of it, interestingly, is this: if you were to open up a car and give the driver access to every single part, who knows what they'd do with it. Instead, a car manufacturer has taken the chassis, put it in a nice body, and at the end of the day your driver only has access to a few on-off buttons, a gear shift, maybe a steering wheel and a brake, and that's all they can really do. That makes it a guided journey, because the car manufacturer has decided, under the hood, what all happens when you press this. Think about the shift from providing LLMs directly to providing agents in the same way. Instead of letting your end users deal directly with LLMs, with the full glory of the good and bad that comes with them, what we're selling now is agents written by people who understand the domain. We all came from a security background, we understand that domain, and we are trying to solve a problem in that context, using LLMs as one of many tools. It's a curated use of the LLMs, in some ways. Through experience, we know what is okay to pass in and what is not. We do tons of experimentation to find out what common mistakes they make, and then start looking out explicitly for those. So yes, there is the generic layer, but then we do very domain-specific checks on top of it, because we've seen that these kinds of errors happen. That is one part, and it actually helps make the quality of the LLM output better, because you're not using it wrong. The other thing is that hallucination is not a bug, it's a feature. The entire purpose of an LLM is to hallucinate.

SPEAKER_05:

And as in create stuff.

SPEAKER_02:

When it works in your favor, you don't call it hallucination. When it works against you, you call it hallucination. That's all. When you ask it to create a poem for you, what is it doing? It's hallucinating, in some ways: it is creating a sequence of things that didn't exist out there. That's a form of hallucination — a good hallucination. So, given that fact, hallucination is natural. Let's embrace it. What is more important is that whatever output comes out of it doesn't have downstream impact. I think that is the more important thing. As an example — cars again, coming back to the analogy — they emit a ton of poisonous gases. How do you contain that? There is an exhaust pipe that points in a certain direction and carries the gases out, and the driver doesn't get poisoned. So as long as you make sure that the output is channeled in the right way to avoid harm, the fact that models hallucinate becomes less important. So it's a multi-layered problem: the generic layer, the domain-specific checks, and making sure that the outputs are correctly channeled to minimize downstream harm. There isn't one answer as such. Okay.
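
A minimal sketch of the "guided journey" and output-channeling ideas described above — hypothetical, and not the patented Trusted LLM layer itself: the agent only lets the model choose among a fixed set of vetted actions, and anything malformed, unknown, or hallucinated falls back to a human escalation instead of reaching the execution layer.

```python
import json

# The only "controls" the LLM is allowed to touch, like the pedals and
# steering wheel of a car. Action names here are purely illustrative.
ALLOWED_ACTIONS = {
    "escalate_to_analyst": {"alert_id"},
    "close_as_benign":     {"alert_id", "justification"},
    "request_more_logs":   {"alert_id", "source"},
}

def apply_guardrails(llm_output: str) -> dict:
    """Validate an LLM's proposed action before anything downstream happens.
    Malformed JSON, unknown actions, or unexpected arguments are all rejected."""
    try:
        proposal = json.loads(llm_output)
    except json.JSONDecodeError:
        return {"action": "escalate_to_analyst", "reason": "unparseable model output"}

    action = proposal.get("action")
    if action not in ALLOWED_ACTIONS:
        return {"action": "escalate_to_analyst", "reason": f"unknown action {action!r}"}
    if set(proposal.get("args", {})) - ALLOWED_ACTIONS[action]:
        return {"action": "escalate_to_analyst", "reason": "unexpected arguments"}
    return proposal  # safe to hand to the execution layer

# A hallucinated or injected instruction never reaches the execution layer;
# it falls back to a human escalation instead.
print(apply_guardrails('{"action": "delete_all_firewall_rules", "args": {}}'))
```

The point of the sketch is the channeling: the raw model output is treated like exhaust that must exit through one pipe, so a bad generation degrades into an escalation rather than an autonomous harmful action.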

SPEAKER_07:

Very good. Yeah.

SPEAKER_03:

Again, a fascinating conversation. Really interesting to hear about your journey and the different pivots. So, all the best. Thank you. And we hope to have you back in a year's time to see where you are. Thank you.

SPEAKER_02:

Yeah, I mean, we have a strong team. Our founders came from a pretty good startup background. I think, together, where we are is because of the team.

SPEAKER_03:

Thank you very much.

SPEAKER_00:

Thank you for listening to our podcast, From Startup to Exit, brought to you by TiE Seattle. Assisting in production today are Eesha Jain and Minee Verma. Please subscribe to our podcast and rate it wherever you listen. Hope you enjoyed it.