floating questions

Clarence Chio: Hacking, Risk x AI, Building with Pragmatism, and the Fine Line Between Delusion and Persistence

Rui Episode 9

From a hacker's perspective, what are some of the most overlooked - yet critical - problems in tech?

In this episode, Clarence Chio shares his journey from giving DEF CON talks on adversarial AI before it was cool, to building two startups at the intersection of risk management and AI. We dive into everything from his experiences hacking alongside white, red, and black hats in an abandoned warehouse near the Kremlin, to the quiet evolution of vendor management, and how the meaning of "work" is evolving as tech modularizes knowledge and labor.

Clarence also reflects on what it takes to scale companies without letting ego lead, why a little delusion might be necessary for persistence, and why letting go can be the hardest - and most essential - leadership lesson.

If you've ever wondered where hacking, humility, and the future of work collide, this conversation is for you.

[00:00:00] Rui: Welcome to Floating Questions, the podcast where curiosity leads, we follow, and stories unfold. My name's Rui, simply asking questions. Shall we begin?

Hi everyone. Today we're going to chat with Clarence Chio, co-founder of the DEF CON AI Village, a community of hackers working to educate the world on the use and abuse of artificial intelligence (AI) in security and privacy. Clarence is also a co-founder of two startups: Unit 21, a no-code tooling company for risk operations backed by Google, and Coverbase, centering around vendor management.

On top of that, Clarence is also the author of the book Machine Learning and Security, and a lecturer at UC Berkeley, with bachelor's and [00:01:00] master's degrees in computer science from Stanford. Welcome, Clarence. Thank you very much for joining me today.

[00:01:07] Clarence: Of course. Thanks for having me, Rui. Good to have a conversation.

[00:01:10] Rui: Yeah. Um, so I first stumbled upon a YouTube video of you talking about machine duping at DEF CON, I think from almost 10 years ago. So naturally, as a modern stalker, I just searched you on LinkedIn and noticed that you have two startups centered around risk management. I just cannot wait to dive into this discussion with you and, in general, your life journey.

Would you like to give a self-introduction, perhaps? Any fun story about you to complement the impressive journey that you have taken on?

[00:01:45] Clarence: Thank you. Well, it's not really an impressive journey. You know, that talk from DEF CON 10 years ago, I was kind of doing that research for fun, and back then it just seemed like a pure research topic. 'Cause first of all, AI was somewhat experimental.

People were [00:02:00] using it a lot as a buzzword to sell products. But in reality, there was nothing generative in the market like today. And there was not really a lot of seriousness around adversarial AI, nowhere near the same level of criticality and reliance on systems built with AI like today.

So that was a pure fun DEF CON talk to share with people, to kind of build that fear maybe ahead of time. Um, and that was also roughly the time that OpenAI started to really sound the alarm bells around adversarial AI, how AI could be really dangerous. And they were positioning themselves as this AI lab that was trying to prevent the bad effects of AI.

And everyone was thinking, that's crazy. Like, solve the adoption problem first, and then solve the security and risk problems. But I think, you know, what happened over the last 10 years, people can see, and this has already become a very big topic.

Um, and I'd say my startup journey has not been [00:03:00] as cool as that kind of adversarial AI stuff. It's been a lot more pragmatic. The last company was all about anti-money laundering and fraud, and this company is a lot about vendor management and risk. A lot of it is driven by what we saw in the market to be interesting products in areas that were somewhat boring and uninteresting to most people.

I don't think anyone can claim that they find AML or fraud super, super interesting, or, you know, maybe even more so vendor management. But those are precisely the types of areas that I think are the most ripe for doing interesting things.

[00:03:32] Rui: I'm a hundred percent with you on this part. Gosh, I don't even know where to begin, because I have so many different questions for you at the moment. Um, maybe let's talk a little bit about how you even got into risk and security in the first place. How did you really discover this is the place that you want to be?

[00:03:52] Clarence: Yeah. So I was in school, and then I went to a bunch of startup career fairs, and there was this one company [00:04:00] that was a Series B company without a product, and I went up to their booth. It was a company called Shape Security, my first job. Um, I asked them about their product, and they said they couldn't tell me more details.

It's all under wraps, it's all secret, and I had to sign an NDA before they could even give me a pitch about what the product does, because they sold a lot to the defense sector and security services. And that fascinated me. They were a pretty typical Silicon Valley company of that time that raised a ton of money.

I was there for four years, through it selling to F5 for $4 billion. It was a pretty interesting fraud and credential stuffing protection company. Through that journey I was so fascinated by the attacker journey, and there were a lot of real problems with interactions with attackers that we faced there.

So that company was solving credential stuffing problems, where attackers would buy a bunch of credentials on the dark net and then use those [00:05:00] credentials to test them against different services, products, and online platforms. And a lot of them would work, because people would use the same username and password across all these different services.

And so this was causing a ton of account breaches. So if, let's say, LinkedIn gets breached, then people would buy these credentials off the dark web for pennies on the dollar and use them on Bank of America and American Airlines to steal miles and Starbucks gift card dollars. And all this was happening in really interesting ways.
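The credential-stuffing pattern Clarence describes has a telltale shape that defenders look for: one source cycling through many distinct usernames in a short window. A minimal sketch of such a heuristic follows; the thresholds, field names, and class are illustrative, not Shape Security's actual logic.

```python
from collections import defaultdict, deque

# Illustrative thresholds, not production values
WINDOW_SECONDS = 60
MAX_DISTINCT_USERS = 5

class StuffingDetector:
    """Flag source IPs attempting logins for many distinct usernames quickly."""

    def __init__(self):
        self.attempts = defaultdict(deque)  # ip -> deque of (ts, username)

    def record(self, ip, username, ts):
        """Record a login attempt; return True if the IP looks like stuffing."""
        q = self.attempts[ip]
        q.append((ts, username))
        # Drop attempts that have aged out of the sliding window
        while q and ts - q[0][0] > WINDOW_SECONDS:
            q.popleft()
        distinct_users = {u for _, u in q}
        return len(distinct_users) > MAX_DISTINCT_USERS

detector = StuffingDetector()
flagged = False
# One IP cycling through a breached credential list, one attempt per second
for i in range(10):
    flagged = detector.record("203.0.113.7", f"user{i}@example.com", ts=float(i))
print(flagged)  # True: 10 distinct usernames within the window
```

Real defenses layer many such signals (device fingerprints, header anomalies, behavioral data), but velocity of distinct identities per source is one of the classic ones.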

Then we would go into real-time battles with actual attackers that had teams of people trying to automate the stuffing of credentials into, let's say, the American Airlines website. We would deploy some defense, and then they would change their attacking tools to get around those defenses.

And you could see that happening in real time, and sometimes they would even leave messages for us in different languages in the request body. It was a fascinating thing. And [00:06:00] through that process I got so interested in the attacker defense space. I started this meetup group in the Bay Area called Data Mining for Cybersecurity, as an excuse to get large companies, who are usually pretty cautious about sharing what they do for security, to give talks. And so I went to Facebook and Netflix and Google, and, you know, I found a bunch of people leading abuse, security, and fraud teams there. And I said, hey, you wanna give a talk? I'm gonna gather like 200 people.

And that caught on. So eventually it was a community of like 2,500 people. We had an event once every month or two. And that was fascinating, because I didn't really know what different companies were doing. And I always thought that, hey, these large companies must be super sophisticated; they must be doing lots of things with data.

But then I realized at the end of the day that everyone's still trying to figure things out. You know, like, no one really has everything figured out. That journey led me further down the data science and machine learning for security route. That's also actually [00:07:00] where I met my co-author, who led anti-abuse at LinkedIn. And we hit it off and decided to write a book on that together.

[00:07:07] Rui: Wow. Well, first of all, I think you misled the audience a little bit in the beginning, like, oh, you know, it's a little bit boring and nobody really thinks about that. But the story that you just shared is extremely exciting.

I really relate, because the company that I'm working for needs to handle risk management really well. Since our product is about moving money cross-border, what would be a better target than that for fraudsters to, you know, try to get through and then cash out on the other side of the world?

Fraud management is a very humbling space, in the sense that, as you shared with the audience already, whatever you do, they find a way to go around it. And whatever you think you have solved, maybe just a few days or a few months or a few years later, [00:08:00] you realize, oh no, this doesn't work anymore.

So I think for the people who really stick in this space for years, it takes a lot of humility.

[00:08:10] Clarence: It's actually pretty interesting, 'cause you bring up what the attackers have been doing. So there was a period of time in which I was just really into going to security conferences, and I liked that community because, unlike a lot of conferences where it's about self-promotion or promoting a product or a company, a lot of security and hacker conferences like DEF CON are more about trying to break systems. And it didn't really matter what system you were breaking; as long as you broke some system, it was cool. And so I would go to a bunch of security conferences around the world.

Some of the most interesting ones that I went to were in Russia. And I remember going to DEF CON Russia, which is a regional iteration of DEF CON, in this abandoned warehouse close to the Kremlin. And why it was so [00:09:00] memorable was because I had a bunch of conversations with people there, as difficult as those conversations were because of language.

But I realized that a lot of them were probably not working for the same side that we were working for. You know, a lot of people that attend hacking conferences are on the defending side, but a lot of them are also on the attacker side, 'cause there's a huge bustling economy in breaking systems, and it's super lucrative, right?

You could have white hat hackers that are looking for, let's say, vulnerabilities in Chrome, but if you could find a vulnerability in Google Chrome, you could probably make a lot more money selling it than from bug bounties. So it's a pretty fascinating experience interacting with people that are on the other side, because they're super smart and enterprising and, you know, chose a different path that maybe we don't agree with.

But you can't disagree that they are technically super strong, and there's no doubt that they can get around a lot of the defenses that people put in place.

[00:09:55] Rui: Your story reminds me of a time when I was in South [00:10:00] Africa, just finishing this hike, and I met someone who owns a cybersecurity investigation firm.

And he would talk about how they try to negotiate with hackers over the keyboard, and also how fraud has evolved into fraud-as-a-service, where you also have tech guys in the illegal circles trying to license out their tools for a fee, enabling many other actors, whether it's causing damage in e-commerce or in financial services, you name it.

It's organized crime, and they also have a very lucrative business model and operating model behind the scenes. And you mentioned that a lot of these hackers are super smart and they just chose a different route.

I think in general these hackers are extremely creative as well, and I think people who are on the defense line also need to be creative. So before we dive into your [00:11:00] startups, as a digression, I'm actually curious whether you are engaging in any other creative endeavors in your life.

[00:11:08] Clarence: Creative endeavors. I like music, actually. That was kind of my dream when I was younger, to be a musician. I had been playing the trumpet for a long time, and that's actually the main reason I got into Stanford. Like, I'm not super smart, but I had a great arts application because I played in the youth orchestra and I'm good at playing the trumpet, and that's the only reason I got into Stanford.

People think I'm joking or being humble when I say that, but no, it is the only reason I got into Stanford. So, uh, that's something that I've always wanted to do, and I still do it, but I'm nowhere near as good as I should be.

[00:11:46] Rui: Um, do you have a band? 

[00:11:48] Clarence: I don't. I actually play classical and jazz trumpet, and then I just moved to Seattle. I don't have my trumpet with me, because I'm trying to force myself to learn other instruments. So I got an [00:12:00] electric guitar and a tenor saxophone, and I've been learning them by playing along to YouTube videos, and I'm really bad right now.

But I like learning new stuff like that, 'cause it's just a great way of dispelling some of the stress that builds up.

[00:12:13] Rui: Yeah, for sure. 

Okay. Well, maybe let's chat a little bit more about Unit 21, the company focusing on no-code tooling for risk operations, backed by Google.

Actually, let's start with how you even chose the name Unit 21.

[00:12:30] Clarence: The first customer that we worked with at Unit 21 was Coinbase. It was a pretty interesting situation. Like, we were just emailing a bunch of people, trying to find people roughly in the area of fraud and security and financial services.

That was the intersection of my and my co-founder's interests at that time. And so we got in touch with this fascinating guy from Coinbase. He leads the Coinbase global [00:13:00] investigations team, and, uh, I didn't know Coinbase had a global investigations team. I didn't know what global investigations meant at a company. But it turns out, when you're a large crypto exchange, there's lots of law enforcement around the world that comes to you for information, because of course a lot of illicit activity happens in crypto. And they have to do a bunch of investigations, respond to subpoenas, respond to grand jury inquiries about things associated with an address that had intersected with the platform. And that interaction and that customer experience was so impactful to us, not because of how much money they were paying us or the kind of product feedback that they gave us.

But because the energy of everyone on that team was not really about making more money for the company, or even about reducing fraud loss. It was all about: we wanna put bad people behind bars. And everyone on that team was ex-law enforcement, ex-military intelligence, [00:14:00] ex-CIA, you know, ex-Scotland Yard.

They were all people that have spent their whole lives building cases around bad actors. And that was so fascinating, because we thought, how cool would it be to build a company that was channeling this kind of energy? Um, so they called themselves "the unit," like, you know, a police unit.

And we wanted to channel that energy too. So we tried to find the domain name "unit," and it was really expensive. Uh, so we found a good number to put behind it. And 21 turns out to also be the California penal code section for attempted but unsuccessful financial crimes.

[00:14:35] Rui: Oh, fascinating. This is awesome. So maybe now we can get into it a little bit: what is your fundamental hypothesis behind this company? Like, what's the value?

[00:14:49] Clarence: So around Unit 21, the thing that was really fascinating to us, to be honest, wasn't some realization from the very beginning. I think a lot of the time, [00:15:00] when I hear founders tell their story, it always sounds like, okay, I had this vision in the very beginning and I followed through on this vision and it all worked out.

But no, we were figuring these things out as we went along. And eventually we found something that made sense and worked. And that was that, when talking to people on fraud teams or anti-money laundering compliance teams, we found there was this primal pain point that every one of them faced, which is that they were always staffed by non-technical people, and they had to react to these fast-changing adversarial attacker patterns, right?

So whenever they had defenses in place, whenever they were trying to block some activity, or trying to find some bad activity and report on it, they couldn't do so in a very long-lasting way, because whatever they put in place would be irrelevant in a week or two. And because of that, many of them needed to rely on engineering teams to get their job done.

They needed to go to the engineering team and say, hey, you know, we need a new rule [00:16:00] set implemented. Sometimes it was an internal engineering system; sometimes it was going to a vendor they were using and telling them, hey, I need to implement a new rule, I need to change this condition.

Can you do it for us? And that was such a primal pain point, because it must be very frustrating to have KPIs in the job, to have your objectives in the job, but to have to rely on some other team that was always very busy. So our realization was: hey, what if we could lean into this whole no-code idea for fraud?

And what if we could give them the power to do most of what they needed to do without relying on an engineer? The first iteration of this was actually really simple. It was like, can we give them a series of dropdowns to create their own rules? They're not writing code, but essentially it compiles into code, and that gives them all the tools they need to fight fraud.

And that caught on. So we got a bunch of people that really liked it, not [00:17:00] because of how powerful and sophisticated the fraud-fighting engine was, but really because they could do a bunch of things they used to have to rely on engineers for. And a lot of the time it's not really just about the algorithms and the findings, but also about the deployment, the control, and the testing of these rules, which is its own can of worms too.
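The "dropdowns compile into code" idea can be sketched roughly like this: an analyst picks a field, an operator, and a threshold from dropdowns, and the selections compile into an executable check. The rule schema, field names, and action label here are hypothetical illustrations, not Unit 21's actual product API.

```python
import operator

# Map dropdown operator choices to real comparison functions
OPS = {">": operator.gt, ">=": operator.ge, "<": operator.lt, "==": operator.eq}

def compile_rule(field, op, threshold, action="review"):
    """Turn dropdown selections into a callable rule.

    Returns a function that takes an event dict and yields the
    action string when the condition fires, or None otherwise.
    """
    cmp = OPS[op]
    def rule(event):
        return action if cmp(event.get(field, 0), threshold) else None
    return rule

# Analyst-built rule: review any account with > 10 transactions per day
rule = compile_rule("txns_per_day", ">", 10)
print(rule({"txns_per_day": 37}))  # review
print(rule({"txns_per_day": 3}))   # None
```

The appeal of this design is exactly what Clarence describes: the analyst never touches code, but the output is a real predicate that the platform can deploy, version, and test like any other code.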

So yeah, that's what the thesis of Unit 21 is around. And the idea has been taken further since then, and is still being taken further by the team today.

[00:17:28] Rui: Um, if I unpack the product a little bit: you almost have to integrate with a company's database so that the data can flow into the Unit 21 product, so that you can potentially engineer features, a feature being a risk signal, like: what if a user created 10 transactions within one minute? That's not normal.

Let's say the feature is number of transactions per minute. And then you pipe that into a rule, which is a condition like: if [00:18:00] an account creates more than X transactions in a given day, then we would want to review this account. So what you are providing is this end-to-end integration.
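The pipeline Rui describes has two stages: raw events come in, and a velocity feature ("transactions per minute per user") is derived before any rule sees it. A tiny sketch of the feature stage follows; the event schema and function name are illustrative, not Unit 21's actual data model.

```python
from collections import defaultdict

def txns_per_minute(events):
    """Derive a velocity feature from raw transaction events.

    events: list of {'user_id': str, 'ts': float} with ts in seconds.
    Returns, per user, the peak number of transactions seen in any
    single one-minute bucket.
    """
    buckets = defaultdict(int)  # (user_id, minute index) -> count
    for e in events:
        buckets[(e["user_id"], int(e["ts"] // 60))] += 1
    peak = defaultdict(int)
    for (user, _minute), n in buckets.items():
        peak[user] = max(peak[user], n)
    return dict(peak)

events = [{"user_id": "u1", "ts": float(t)} for t in range(10)]      # 10 txns in 10 s
events += [{"user_id": "u2", "ts": 60.0 * i} for i in range(3)]     # 1 txn per minute
features = txns_per_minute(events)
print(features)  # {'u1': 10, 'u2': 1}
```

A downstream rule then only needs to compare `features["u1"]` against a threshold; the feature engineering and the rule condition stay cleanly separated, which is what makes the no-code rule layer possible.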

[00:18:10] Clarence: Yeah, that's right.

the whole idea around fighting fraud is that the ability to be agile in deploying defenses is more than half the battle. 

What really mattered was giving people at the edge, people that are on the front lines fighting fraud and responding to these events, all the tools they need to react.

And you didn't really need to have super sophisticated defenses because attackers would go after the lowest hanging fruit. And if you made this a little bit difficult for them and you know, ate into their economies of scale, then they would essentially go elsewhere. And so that's the thesis around it.

[00:18:48] Rui: Interesting. And what about the risk signal engineering portion of it? Is that a capability you are giving customers, to write the risk signals themselves right away, even [00:19:00] if such a risk signal wasn't computed before?

[00:19:03] Clarence: Yeah, the risk signals are pretty interesting, because in order to add more dimensions to the problem of fraud fighting, you needed to have a lot more ways to express how to capture activity. I had an ex-colleague at Shape Security who described this problem as basically trying to balance diversity of signal against stability of signal.

So say a web request is made to a server, and you try to characterize that request and tie it to a single person's identity. You know, traditionally, maybe 10 years ago, you would think about this as the canonical Google ad fraud detection model, which is: let's look at the HTTP header order, and we would try to understand if it was the same person or the same browser making the request.

Because different [00:20:00] specific versions of Safari or Firefox or Chrome or WebKit would have a specific order of headers that the HTTP request was made in. And if it deviated a little bit, or it didn't match the user agent that the request had, then you knew that maybe they were impersonating a browser using some type of tool, or making a crawl request, trying to fake their identity. And then you would get deeper and deeper into this area, right?

You could even go down to fingerprinting the GPU of the device, because every computer was made with different manufacturing defects. And you had different types of things like HSTS super cookies that bordered on privacy problems. But at the end of the day, it's all about how rich a signal you can get to try to understand who's on the other side of the screen, and whether the person on the other side of the screen is actually making a legitimate request or not.

The goal is to group the behavior in a stable way, so it wouldn't change from request to request if it's the same person.
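The header-order check Clarence describes might look like this in miniature: each browser family tends to emit headers in a characteristic order, so a mismatch with the claimed User-Agent hints at an automation tool impersonating a browser. The browser names and expected orders below are made up for illustration; real fingerprinting uses far richer per-version profiles.

```python
# Hypothetical per-browser header-order profiles (illustrative only)
EXPECTED_ORDER = {
    "ExampleBrowser/1.0": ["Host", "Connection", "User-Agent", "Accept"],
    "OtherBrowser/2.0":   ["Host", "User-Agent", "Accept", "Connection"],
}

def header_order_suspicious(user_agent, observed_headers):
    """Return True if the header order deviates from the claimed browser's profile."""
    expected = EXPECTED_ORDER.get(user_agent)
    if expected is None:
        return True  # no profile for this User-Agent: treat as suspicious
    # Compare only the relative order of headers we have a profile for
    observed = [h for h in observed_headers if h in expected]
    return observed != expected

# A tool claiming to be ExampleBrowser but sending headers in the wrong order
print(header_order_suspicious(
    "ExampleBrowser/1.0",
    ["Host", "User-Agent", "Connection", "Accept"],
))  # True
```

This is also a good example of the diversity/stability trade-off from earlier in the conversation: header order is diverse across browsers but stable per browser version, which is exactly what makes it usable as a signal.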

[00:20:53] Rui: And I imagine maybe part of the challenging journey is that you [00:21:00] inherently need to observe a lot of data across the company so that you can actually have that data at hand and engineer the signals. Um, was that the challenging portion, or are there other parts that are actually harder to wrestle with?

[00:21:17] Clarence: That was definitely a challenge. I think the other challenge, especially in the last five to eight years, is how to balance this with privacy and access. Because, you know, getting more signals from the browser, from web requests, from transactions frequently also means encroaching a little on user privacy, on how much data you are collecting about every single person's activity on the web, right?

And I think this is a very natural conflict point, right? How much privacy do we sacrifice for security and safety? Um, so that was interesting as well, because there are some very effective fraud signals that browsers would just block. Because, you know, anything that could effectively track a person, [00:22:00] and so be an effective fraud signal to identify them, would be instantly used by the real-time ad marketing agencies to serve ads.

And that's not something that most people want.

[00:22:09] Rui: I wonder, have you thought about working with regulators to talk about the specific use cases in, you know, the security and risk management space, and separate that out from, um, the marketing effort? Or is there really no way to separate those two?

[00:22:25] Clarence: That could be interesting. I think, like a lot of compliance and regulatory companies, we try to go to regulators, and usually I think it's technically a little bit hard for them to draw the distinction. But also, practically, the thing that we're fighting against is not necessarily just regulation and the law, but also consumer perception of the importance of privacy.

And more and more over the years, people are starting to value this as a first-party concern, such that the mainstream browsers blocked persistent cookies, you know, a few years ago. And that [00:23:00] essentially killed a bunch of technology in this area.

I'm not saying this is a bad thing. I'm just saying it means it's something that, you know, people fighting fraud and security issues have to work around.

[00:23:13] Rui: Let's talk about Coverbase. It's about changing how companies handle vendor management. Actually, this is a topic that I've been thinking about so much lately, because in risk management it's inevitable to shop for vendor data, unless you actually stand up an intelligence group that develops very, very deep understanding around device, IP, name, and email validations, all the identity pieces.

And the difficulty I find is that there's no single vendor that will realistically excel in every dimension of the data all the time. So how you integrate with a vendor, easily test their efficacy, switch if needed, and onboard a new vendor with backward data compatibility if [00:24:00] you choose to move on becomes critical to think through as part of the strategy.

And I think vendor management is also rooted in management philosophy. Last episode I was talking to Ashish, who built the ML system for Revolut. Revolut has this philosophy that the effectiveness of risk management is only as strong as the weakest point in the entire chain.

And so if you choose to have control internally, you would want to minimize vendor integrations, right? But for most companies, that might not really be feasible. I cannot wait to hear a little bit more about what you think about vendor management.

[00:24:46] Clarence: Yeah, I think vendor management is always seen as this backwards, not-super-interesting space. It's definitely more boring and sleepy than fraud, and a lot of people actually ask [00:25:00] me why I'm working on vendor management after Unit 21. And it's actually a bit of a philosophical thing.

I think that vendor management is one of those areas that companies need to get better at to excel in the next couple of decades. And over the last year, I think we've seen more and more signal that vendors, the tools and different software services that companies use, are going to generate way more value than the full-time employees that companies hire, if they don't already, right? Most people think of the vast majority of their company's work product as what their employees do, but in reality, I think the vast majority of what most companies generate is actually a result of the selection of their vendors. So the types of software that you [00:26:00] pick, the types of relationships you have with your vendors, what you do to manage the performance and the contracts and the relationships and the risk and the security of your vendors: the importance of all that is only going to increase as time goes on. Which is why, even though vendor management is not seen today as a core competency that differentiates companies from one another, if in 10 years it's not seen that way, I would be very surprised.

And even today, I think we're starting to see that in the early days of AI and agentic workflows and all that stuff. I know there's a lot of heightened noise around that, but I think there's this company, I forgot which company it is, where if someone wanted to hire for a new position on their team, they needed to prove to the CEO that you can't find an AI agent or software vendor that did 60% of the job at a fraction of the cost. And I think that's a fascinating idea, because to me that represents the first steps of this, right? The progression of what the [00:27:00] employee work product looks like is gonna be less about what a person does day to day. And a lot of what is going to help employees excel and differentiate themselves is how they can manage and select vendors and find good tools out there that do the job they need done.

So that's why we built Coverbase, and we're starting off with risk and security, because it's the most hair-on-fire problem in this area. But of course it goes a lot deeper than risk and security. It also goes into: how can we manage the performance of and relationships with vendors? How can we know the scope of what the vendor is doing for us? And how do we know how to select the best vendors out of all the different vendors out there?

And it's just such a broad problem that it's a hard thing not to work on right now.

[00:27:47] Rui: Yeah. Let me start with a basic question: why do you hypothesize that companies will need to integrate with more and more vendors in the future? You touched on that a little bit [00:28:00] in your answer, but I really want to pull on that thread a little.

[00:28:03] Clarence: Yeah. Yeah. I think companies have already been using more and more vendors and products, but I think the definition of vendors is important here, 'cause most companies today already do. Even the software that you build is made with tons of libraries, some paid, some open source, and I would consider all of them vendors, tools, and products as well. And more and more we're starting to see, through the advent of SaaS over the last 15 years, of course, and through the advent of AI, which we're currently in the very early stages of, that this is just going to accelerate.

I call it a problem, but it's actually a big opportunity. It's a huge progression of the nature of work, right? Most people think of selecting a vendor as much less important than hiring a member of their team. [00:29:00] And actually the effects are at least the same, frequently if not more important. Because if you pick a great vendor, it could triple your team's productivity and work output.

If you pick a bad vendor, it could grind everything to a halt. And so it's a big multiplier for productivity alone. And of course it's not just productivity; it's also security, risk, and compliance.

[00:29:26] Rui: And more and more tools are becoming more sophisticated, for example, a model as a product being used to support fraud detection, right?

Companies like Sift or Stripe Radar. Well, first of all, talent concentration is important, and there will be a pretty big shortage in the literacy needed to manage these very complex things and build very complex tools to support that type of use case.

So it just makes sense. If you can get a group of people who can really think this problem [00:30:00] through very well, for example, how do you really build a fraud management tool, right? It requires experience; it requires complex thinking.

Then a small group of people obsessing over the problem will do this really well. And if you try to start an internal group, you are gonna have to justify, either strategically or in terms of economic value, why you would do better than a product that has decades of experience behind it.

So that's my interpretation of what you have been talking about.

[00:30:33] Clarence: Yeah, that's exactly right. I think the skill set of an operational person, or even someone on the front lines of fraud, has to change, right? Because there's a bunch of tools that can help roles like that do a lot more.

At one of the earliest customers that we worked with, there used to be different roles for risk operations, risk strategy, and risk data engineering.

But now [00:31:00] it's one role, because the toolsets have made it easier for people to approach this without, maybe, a formal engineering education and things like that.

So I think this is a great thing, 'cause it increases productivity and revenue per employee, and for the world, I think.

[00:31:16] Rui: Yeah. Um, maybe let me take this to a very far extreme: suppose a lot of the things that you want to do internally have an abstraction layer built as a product by a different company, right? In that world, are we basically just picking and choosing building blocks and then trying to output something as a company?

[00:31:44] Clarence: Yeah. Yeah. I feel like that's a lot of how things already are. There's been so much specialization and unbundling of the different things a company does that it looks so different from what starting a company looked like 15 years ago, right?

[00:32:00] I think from month two of starting a company, we were already using like 30 vendors, because there was one for payroll, one for note-taking, three different conferencing tools. There are all these different things that are essentially building the back office function, even if you're building a product, right?

And that's the reason why people can build so quickly today. That's the reason why product differentiation is less of a company differentiation today, 'cause the cost of building products to a functional level has gone down so much. There's so much unbundling of services and specialization that runs so well that it no longer makes sense to build something from scratch if a good option that does 70 or 80% of what you need exists.

And I think this is a pattern that I don't see reversing anytime soon, and eventually fully bespoke custom-built software might be a premium. Just like buying artisanal things off Etsy, it might be that kind of world. I'm sure a lot of young [00:33:00] startups are using Copilot, v0, a bunch of code tools, and this allows us to move a lot faster. So it may not be as simple and straightforward as bundling a bunch of tools together to produce a company or a product, but it may be that you're using a bunch of tools that give you the power to be creative.

That's why this AI agent stuff, this whole wave, is so interesting to me. It fundamentally changes the meaning of work.

[00:33:30] Rui: Yeah. First of all, I really like how you summarized everything, and I really like the angle that you're looking at this from. If you're a software engineer, or someone who has written some code before, one of the most important things is making the code base very modular so that you can easily attach or detach something without impacting the rest of the stack. It sounds like we're heading into a future where we're basically modularizing labor forces [00:34:00] and knowledge in a way that will increase overall system efficiency, if we're talking about the overall system meaning an entire society, where specialization of knowledge is just baked into a product and we're just calling each other's APIs.

It's making the Lego pieces a lot more refined and easy to use, and then collectively you can build an even grander product on a global scale, pushing human society to a different level. Um, it's actually really interesting to think about.

[00:34:32] Clarence: Yeah, I think that could be the future. I tend to have this idea that, hey, maybe things are just going to be unified and the most efficient thing is going to happen. But you know, in reality there's always this ugly long tail of things that persist for a long time.

It may be that companies will start interacting with one another and using each other's services purely by API.

But there might still be a lot of complex, old-school relationships that last for decades, centuries. [00:35:00] So it's an interesting world that we're heading into.

[00:35:03] Rui: On top of that, you and I have spent a lot of time in the US, at the edge of observing a lot of technology improvement. The truth is, in the rest of the world, so many places have barely caught up.

[00:35:17] Clarence: Yeah. It's pretty interesting the way that technology advancements propagate throughout the world, right? How technology has leapfrogged in places because the distribution channels are so effective, or because the technology's usability becomes so accessible.

There's mobile banking leapfrogging traditional banking, uh, even wired internet and Starlink. I'm less worried about that, but I think it's the centralization of power over who creates and owns technology that is going to be pretty interesting.

There has been more centralization of technology ownership and creation [00:36:00] than decentralization over the past 10 years. Even with so many "Silicon Valley of X" countries being created, it still feels like the owners of equity in the companies generating the majority of the exciting, cutting-edge value in the world are still in a small handful of places.

And that, I think, is kind of a worrying concept, because it's always tough to see how this creates a more equal distribution of wealth or a fairer place. I don't know the answer to this, but I agree with you that it's interesting.

[00:36:40] Rui: Yeah, I think it's not just about concentration of technology, but concentration of data, right? Like OpenAI probably sits on a massive amount of data, and it just keeps getting more and more, because people from all over the world interact with it.

So you have this [00:37:00] input that basically just keeps compounding the product's value with people all around the world, and they're charging everyone around the world. But the data that we're putting in is what makes the product more and more valuable.

How do we think about data equity? Meaning getting money or incentives for giving your data away, or even just for interacting with something so that they understand your behavior a little bit more. I think that would be a very interesting thing to think about.

[00:37:29] Clarence: Yeah, that's so true, right? If you look at all these AI powerhouses, maybe before the age of the OpenAIs, these companies all provided such viral value that was independent of data in the beginning, but put themselves in the position of being at the choke point of collecting all this data. All the Googles and Facebooks and that class of companies over the past couple [00:38:00] of decades, their real value was created when they became the choke points of data.

And with AI companies coming up today, models are never going to be the differentiation. If anything, they're becoming so commoditized today. You want to be the choke point of data, 'cause that's where the value is going to accumulate.

But yeah, it's a fascinating and worrying problem. I don't know how to solve it.

[00:38:27] Rui: Yeah. Um, well, we're not going to solve the world's problems today, so maybe let's switch gears and talk a little bit about your journey as a founder, an entrepreneur. You mentioned that you transitioned from Unit21 to this new startup, Coverbase. Uh, you showed up on a different podcast before, and I listened to the conversation there. You mentioned letting go of control and trusting your team to be able to steer the direction themselves.

I'm curious about how you even got to that point. What were the [00:39:00] tipping points, or the combination of things, that made you say, you know what, it's time for me to let go of control? Like, I'm causing more problems in the room than if I'm out of there.

[00:39:13] Clarence: Yeah, this is, I think, one of the difficult lessons that most founders, and maybe not even just founders, I think managers in general, face, right? Maybe you're coming up from an individual contributor position, good at your job, you know how to do the job, and then you're hiring someone to do it, and then you're seeing that, man, that's not how I would do it.

And the first instinct is to say, no, no, let me go do it instead. Or maybe, let me tell them how I would do it instead. And I feel like that was how my thinking was, because most people who take pride in their work would want to own that.

And they would want to see everything that their team is doing perceived to [00:40:00] be as high quality in their minds as what they would do. But over time I started to realize that is both not scalable and also not correct. The not-correct part is probably the more important thing to focus on here, because someone not doing the thing the way you would do it doesn't mean it's not correct. Observing and letting go of that very natural ego that most people have in the pride of their work is probably the toughest thing, and I'm still learning. There are still times when you see something being done and you're like, man, I would do it so differently.

But on the other hand, having someone make mistakes and learn is probably a much more sustainable and effective lesson than stepping in to take over.

[00:40:49] Rui: Right. It's all about ego, isn't it?

[00:40:52] Clarence: yeah, at the end of the day, it seems like it's all about ego and pride and all that stuff, all the Freudian, um, concepts [00:41:00] that seem to persist throughout centuries. I.

[00:41:03] Rui: Yeah. At least for me, I have to actively manage my ego.

I wonder, how do you detect your own ego showing up, and how do you work with it?

[00:41:14] Clarence: Yeah. I don't know. I wouldn't say I'm the best at it, but I think having the self-awareness that you just mentioned is the first and maybe most important step, right? Recognizing that ego plays a part in the decisions that you make. And I think a lot of people don't think like that. A lot of people are not thinking that ego is going to affect their decisions.

And I'm going to have to identify it, and at least know when ego is playing a part in this, even if I'm not going to change my actions. For me, the thing I try to look for is decisions that, 24 hours later, I would [00:42:00] think to myself are not the thing I would do. A lot of things, when you give yourself some time to calm down, take a step back, and disconnect from the presence of the problem and the moment, you realize were driven by the ego and emotion of the problem rather than the logic of it, which is a very human thing to do. And I actually do think there are some decisions that ego can make that are good, but in a lot of cases, the lack of rationality around decisions that people make when driven by ego can result in pretty subpar decisions.

[00:42:43] Rui: I feel like, as you were talking about it, you had so many stories floating through your mind. I'm curious what you had in mind when you made the statement that sometimes a decision out of ego is actually a good one.

[00:42:56] Clarence: Yeah, I've seen how a lot of [00:43:00] founders, especially those visionary founders, when they're trying to do something that defies rationality, sometimes do things that feel and look delusional from the outside.

And I think there's such a fine line between delusion and persistence, right? When you look at that long 20-year chart of Amazon's revenue over time, you're looking at that chart for the first 18 years and you're wondering, how can this guy continue persisting through this and keep at it, right?

That's a very delusional thing to do. And I don't know whether what they had in mind was that it was going to eventually go up into this crazy hockey-stick growth, but I think a lot of that persistence is ego-driven. You just don't want to give up, because giving up means you have lost. The rational thing to do, a lot of the time, is to look at the data over the past 18 years, realize that you're not growing, and do something [00:44:00] else. So I think ego does drive people to do some incredible things sometimes, but it's really hard to know whether the incredible thing is going to result in a good outcome or not.

[00:44:12] Rui: Do you feel like the line between delusion and persistence is almost drawn in hindsight, based on whether something worked out or not?

[00:44:21] Clarence: Definitely. I think there's so much of that.

I remember going to this talk by one of the founders of Instagram, and they were talking about how, you know, they launched this app, this whiskey photo-sharing app, and two weeks later there were a hundred thousand users. It was like, cool. And then they kept at it, and two weeks later, a million users.

I'm not sure if that was the reality of their mindset at the time. There are so many revisionist stories, so I'm not sure that's how it actually happened. But I think a lot of people look at the history of things and think, this was such an obvious, straightforward thing.

I had this in mind all along. But in reality, I don't think most [00:45:00] people have that mindset when they're in it, in the moment.

[00:45:03] Rui: We're so used to the school system, or even the professional world, where there's at least a pretty obvious relationship between effort and reward, and here that dopamine hit isn't there. And just like you said, for 18 years. If you just keep going, that is a really, really tough space to be in.

You have to really believe in your vision and how this world is going to pan out, and you just keep doubling down on that bet.

[00:45:33] Clarence: Yeah. Yeah. It is very fascinating. I fully agree with you. Today, it almost seems like everyone's operations, the way you live life, the way you work in startups and companies, is so structured around data, evidence, and learning from mistakes in such a short, condensed time span that it [00:46:00] doesn't really engineer people for non-data-driven decisions.

Uh, which is, I think, a very rational thing to do, but it also prevents people from exploring paths that could lead them to that kind of extraordinary outcome. I think the thing that you're doing, like, you just randomly reached out to me on LinkedIn and said, let's chat.

I think it's really cool, because you don't really care if there are 10 people listening, or two people listening, or a thousand people listening. At the end of the day, if you're getting gratification from the conversation and it's interesting to you and me, then, you know, who cares what dollar value of creation this generated in two weeks, right?

It has a much bigger impact beyond that. So, yeah, I think it's an interesting way to live life.

[00:46:57] Rui: I'm curious, on [00:47:00] your startup journey, how did you manage those doubting moments?

[00:47:07] Clarence: It's so tough to approach this from a non-philosophical lens, because of the delusion-and-persistence thing. I really respect people who can be delusional. I think it's a true differentiating skill. It is such a skill to disconnect from the reality of data, to be able to persist through a mental state where things are not growing as you expect and things are more difficult than you think.

That optimism, that persistence, that blindness to what's happening around you, it is a superpower, no matter what people say. And sometimes that superpower results in everyone around you thinking that you should be giving up. Uh, and sometimes they're right in retrospect, sometimes they're not. [00:48:00] But for me, it's been really tough.

For me, and I think for a lot of startups as well, you're being pressured to play this game of getting to proof points and revenue, your first million dollars, your first two million dollars, as quickly as possible. And I think that results in companies making subpar decisions around the kind of impact that they can and want to have.

It's probably quite uncontroversial that just because you're doing something that someone is willing to pay you a million dollars for in one year doesn't mean that's the thing that can help you create the company with the biggest impact it could possibly have. In fact, a lot of the time it may be the opposite.

It may be the precise thing not to do, because then the timescale of the types of projects you can take on and problems you can solve are so [00:49:00] small. So that's why, when you look at entrepreneurs going on to their next companies, once they don't have that pressure of raising rounds of financing, maybe they're self-financing, maybe they have enough leverage over investors and financial systems to not be held to that kind of short-term

series fundraising pressure, then you see them doing a lot bigger things. But the reality is, that's a tiny minority of companies. And I think that should be the type of thing that people invest in more.

I just don't know how that can be a reality given the state of the world we're in right now, and maybe it never will be. That's the kind of problem I struggle with, which is why, at all the companies I've been working on, we try as much as possible to have a realistic, data-driven approach.

And if things are not going well for two, three, four, five years, then, you know, most people would consider that [00:50:00] they need to pivot, they need to change, they're not going to persist. And I'm probably in the same mindset.

[00:50:05] Rui: I think maybe, statistically speaking, I mean, I have no data to back this up, but that's probably the right thing to do. It's beautiful to hear about defying that rationality and then scoring a huge win down the road, but maybe actually, for most people, it's best to stop.

The story that I have in mind is about Slack. It actually started as a video game company. The founder really wanted to do a video game company, and it was just miserable. The user engagement level was just not picking up. So he had a kill criterion: okay, in three months, if user engagement doesn't reach this level, I'm going to kill this company, return whatever capital I still have on hand back to the investors, and then start something new.

And he did, and then he pivoted into the internal communication tool that became Slack today.

[00:50:59] Clarence: [00:51:00] Yeah.

[00:51:00] Rui: I don't know. I feel like this is like art. There's no right answer, and you can't really tell what the right choice is, because you already made that choice, reality collapsed, and you only have this one path to keep going.

You don't know the alternative universe. So I guess, maybe at the end of the day, it's just making the decision that you feel you would regret the least, and being able to say, I made the right decision at that point in time. And that's it.

[00:51:28] Clarence: Yeah. Maybe that is the saner way to think about it, right?

And I always wonder whether companies that try more things, that spend more time on experimentation, actually end up creating bigger value. 'Cause at my companies so far, I've always tried to optimize a lot more for experimentation early on, because I find that the thing that eventually defines the size of the company is not really the number of things you try that work, but [00:52:00] whether you can try enough things to find one thing that works really, really well. The percentage success rate of projects is not important. And the way to get there, empirically, is by trying a bunch of different things.

And most companies are not built to optimize for trying a bunch of different things, because of the cost of time, capital efficiency, and the need for proof-of-concept results. Especially for public and large companies. So, um, yeah, I wonder how to create an environment like that, where you could actually incentivize companies to do more of it.

[00:52:36] Rui: This could be the last episode of Floating Questions, or it may not be. Either way, I hope you enjoyed floating along with us today. If you liked our journey, please consider subscribing. Thank you for listening, and may the questions always be with [00:53:00] you.