Infinite Machine Learning: Artificial Intelligence | Startups | Technology

Evolution of intelligence, Digital life, AGI | Flo Crivello, founder and CEO of Lindy

November 13, 2023 Prateek Joshi

Flo Crivello is the founder and CEO of Lindy, where they are building a personal AI assistant. He was previously the founder and CEO of Teamflow. Prior to that, he was at Uber where he led the development of Uber Works and JUMP Starter.

In this episode, we cover a range of topics including:
- Evolution of intelligence
- The dawn of digital life
- Artificial General Intelligence
- White House's Executive Order on AI
- The founding of Lindy AI
- Large foundation models vs smaller specialist models
- Nvidia's position in the AI ecosystem, its strengths and weaknesses
- Building AI agents

Flo's favorite book: Atlas Shrugged (Author: Ayn Rand)

--------
Where to find Prateek Joshi:

Newsletter: https://prateekjoshi.substack.com 
Website: https://prateekj.com 
LinkedIn: https://www.linkedin.com/in/prateek-joshi-91047b19 
Twitter: https://twitter.com/prateekvjoshi 

Prateek Joshi (00:02.495)
Thank you so much for joining me today.

Flo Crivello (00:05.25)
Thanks for having me.

Prateek Joshi (00:07.92)
Let's start with your view on the nature of intelligence. And again, it's a broad topic, and because AI is so front and center, I just wanted to talk about how intelligence has evolved over time, and how that led to where we are today.

Flo Crivello (00:32.594)
Wow, that's one hell of a way to kick it off. Let me think about this one for a second. You know, the definition of intelligence that I've liked most is the ability to leverage your environment in order to pursue your aims.

Prateek Joshi (00:35.28)
haha

Flo Crivello (00:52.146)
I think there are other really good definitions of intelligence, which is the ability to compress, the ability to predict. At the end of the day, that's what it's optimizing for. I think at the lowest physical level, intelligence is really about producing negative entropy.

It's about somehow, you're always fighting the second law of thermodynamics and over the long term you're going to lose, but it's about delaying that defeat by as much as possible. And so, somehow turn the disorder that surrounds you into order. And so you turn it into internal order and you spit out that disorder. That is my favorite definition of intelligence.

Prateek Joshi (01:41.172)
I love that. I love how you used the concept of entropy to describe it. And I think that makes so much sense, in that, as you said, entropy in the world is always increasing, and here, at least in the case of intelligence, what we want to do is take that and convert it into something less chaotic, less disorderly, so that it makes sense. That's fantastic. Okay.

Also, across your podcast appearances and articles, you've mentioned that we are at a point in history where we are witnessing the birth of a new form of life, digital life, because of AI and all the developments around it. So first of all, can you define digital life? How would you characterize it? And also, where do you see this going from here?

maybe in the next five years.

Flo Crivello (02:42.91)
Yeah, I think that the technical definition of life is something that can grow, reproduce, and evolve or something like that.

Certainly, AI seems to check every single one of these boxes. The image I always use, or rather the framing, is: if you zoom all the way out from the birth of the universe, the evolution of the universe has been toward greater and greater degrees of self-organization of matter. We started with just pure energy, and then subatomic particles, and then atoms, and then molecules, and then cells, and then multicellular organisms, and then brains,

well, silicon-based intelligence seems like it might be the next step on that march. Regarding predictions, I think it was Yogi Berra who said predictions are hard, especially when they're about the future.

Prateek Joshi (03:40.109)
Yeah

Flo Crivello (03:40.438)
But I will try anyway. My timelines for AGI are quite short. I'd say there's a more-than-even chance of AGI arriving within the next eight years or so. So I think in the next five years, things are going to...

I think there's a lot of impedance in the world, and so the deployment of AGI may take longer than we think. But at first, it's going to look very much like it's looking right now, which is: oh, snap, this is cool. There's a new product that came out. There's a new version, GPT-4, GPT-5, GPT-6. There's vision, there's multimodal. There are these really good autonomous agents that are starting to work. Oh, it's starting to be really convenient, right? I think at first it's going to look like this. And so I think in the next five years,

we're going to see a dramatic leap in the capabilities of the models. I think we haven't even scratched the surface. I think the models that we're going to have in five years are going to make GPT-4 look

the way an Xbox One makes an Amiga look. It's just going to be a very, very different sort of model and set of capabilities. I'm a lot more interested in what happens over the even longer term than that. When I think over a 10- or 20-year time horizon, I think that's when things start to become exciting, or worrying.

Prateek Joshi (05:06.444)
I loved what you said about how the world is moving towards greater and greater self-organization of matter. And I think that there's a certain beauty to that. And obviously, humans have played a role, but also, I think the world, in fact, moves in that direction. Okay, so you mentioned AGI, the timeline. We'll come to the timeline in a second. But just for our listeners, how do you define AGI?

Flo Crivello (05:36.758)
Yeah, I think the technical definition of AGI is you are as good as the average human at roughly everything. I actually think that it is slightly too high of a bar, and I think that AI can become civilization altering in very, very profound ways, much before AGI.

You know, the reason people are really interested in AGI, the reason they consider it an important milestone, is that if you have an AI that is as good as or better than humans at doing everything, then since building that AI was one of the things that humans did, it follows that the AI is going to be better at self-improving than humans would be.

And so by that logic, since it can self-improve, it could take it, call it, a day to make its next version. And then, since the next version is better, it would take it 20 hours to make the version after that. And the version after that would take perhaps 12 hours. And in the limit, you could imagine this AI self-improving at a rate of milliseconds per self-improvement. And so you get to that thing that's called the intelligence explosion.
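The timeline Flo sketches here, where each version builds its successor faster than it was built itself, is a converging geometric series. A toy illustration (the numbers and the function are made up for this sketch, not from the episode):

```python
# Toy model of recursive self-improvement timelines: each version
# builds its successor faster than it was itself built.
def total_time(first_build_hours, speedup, versions):
    """Total hours spent building `versions` successive versions."""
    total = 0.0
    build_time = first_build_hours
    for _ in range(versions):
        total += build_time
        build_time *= speedup  # the next version is built this much faster
    return total

# If v1 takes 24 hours and each generation halves the build time,
# even arbitrarily many versions fit inside ~48 hours total:
print(total_time(24.0, 0.5, 10))    # ~47.95 hours
print(total_time(24.0, 0.5, 1000))  # converges toward 48 hours
```

The point of the toy model is that with any constant per-generation speedup below 1, the total wall-clock time to "infinitely many" versions is finite, which is the compressed-timeline intuition behind the intelligence-explosion argument.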

I think, again, AGI is too high a bar, because in order to enter that recursive loop of self-improvement, you don't need to be as good as humans at everything. You only need to be good enough at a certain limited set of things, which are chip design, AI research, and software engineering.

If you are good enough, and again, that doesn't mean better than every human, that doesn't even need to mean better than the average human, it just means good enough to make a positive contribution. If you're good enough at any of these three things, then you can enter a recursive loop of self-improvement.

Prateek Joshi (07:35.872)
Right, that's an interesting point. And if you think about it, can a machine create another machine that is more intelligent than itself? And if that is in fact true, then you can imagine a situation where that machine quickly creates the next one, and the next one, and it just explodes from there. Also, you said eight years, right? So for that timeline,

what needs to be true? Like, what needs to happen in those eight years so that at the end of the eighth year we'll say, hey, we've hit the mark, AGI, we've achieved that?

Flo Crivello (08:12.946)
Yeah, I think you'll know it when you see it. Actually, sorry, I take that back, because the speed at which people can move the goalposts is pretty remarkable. So I think from a technological standpoint, what we...

Prateek Joshi (08:15.777)
Hahaha

Flo Crivello (08:32.918)
what needs to happen in order to get there is, there are multiple compounding loops here. One, there's just an economic loop, where we're pouring more and more money, billions of dollars, into this. Then there's Moore's law, which keeps up the pace, and GPUs are in fact getting better and better. By the way, these two loops are also interwoven, because as volumes increase, you climb up the experience curve and get better at producing better, cheaper GPUs, and there's more R&D going into GPUs, and so forth.

Then there are architectural improvements to the models. I don't think the transformer is the final architecture, and there are extremely exciting competing architectures coming out that may be orders and orders of magnitude better than transformers.

Then there are inference optimizations. I think, even if we keep everything else constant, quantity has a quality of its own. And so if we could make inference 10x faster, that in and of itself could produce a pretty big qualitative jump at the end of the day.

Flo Crivello (09:36.318)
I think quantization is certainly interesting. I think distillation is certainly interesting. I think training optimizations are certainly interesting. I think we're learning a lot about the kind of data that goes into this training. And again, going back to the architectural optimizations, if you look at RLHF, I think the kind of RLHF that gave rise to the models that we use and love today, the GPT-3.5s

and GPT-4s of the world, is the worst that RLHF is ever going to be. RLHF is improving very, very rapidly, and we're already starting to see versions of RLHF using DPO instead of PPO that are extremely promising. RLHF as a whole is becoming more and more efficient. I could go on and on. Basically, there are all of these curves that stack on top of each other that I think lead to that lollapalooza effect of huge improvement over the next few years.
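For listeners unfamiliar with the DPO-versus-PPO point: DPO replaces the PPO reinforcement-learning step with a simple supervised loss over preference pairs. A rough sketch of the per-pair loss under the standard DPO formulation (this is an illustration of the published technique, not anything from the conversation; the log-probability values below are made up):

```python
import math

def dpo_loss(pol_chosen, pol_rejected, ref_chosen, ref_rejected, beta=0.1):
    """DPO loss for one preference pair, given log-probabilities of the
    chosen and rejected completions under the policy and a frozen reference.
    The policy is rewarded for preferring the chosen completion more
    strongly than the reference model does."""
    margin = (pol_chosen - ref_chosen) - (pol_rejected - ref_rejected)
    # -log(sigmoid(beta * margin)): small when the margin is positive.
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# Policy prefers the chosen completion relative to the reference -> lower loss:
print(dpo_loss(-1.0, -2.0, -1.5, -1.5))  # margin +1.0
print(dpo_loss(-2.0, -1.0, -1.5, -1.5))  # margin -1.0, higher loss
```

The appeal in practice is that this turns preference tuning into ordinary gradient descent on labeled pairs, with no reward model or RL rollout loop.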

Prateek Joshi (10:34.772)
Amazing. And if you look at the perception of AI, if you do a survey, many people are excited. I am very optimistic. I'm in that camp. I love what technology can do for humanity, especially AI. And there are also people in the camp where they're worried about AI. They're like, hey, this thing looks fast and dangerous. We should rein it in. So that led to the White House.

recently publishing the executive order on AI. There are many people on both sides of that debate, in the sense of, what was it really about? So what did they get right, and what did they get wrong? Maybe top three in each column.

Flo Crivello (11:19.998)
Are you asking about the White House order?

Prateek Joshi (11:22.774)
the executive order.

Flo Crivello (11:24.974)
Frankly, I'm not familiar enough with the White House order to opine on it at that level of granularity.

Prateek Joshi (11:29.554)
Oh.

Flo Crivello (11:32.726)
What I will say is that I very much sympathize with the people who worry. And by the way, at this point, some of the world's greatest AI experts, like Geoffrey Hinton, are extremely worried, and I think there is reason to be. If you were looking for a historical example of a time when a smarter species was subjugated by a dumber one, I think you would very much struggle to find such an example.

We're giving rise... I think intelligence is the most powerful force in the universe. I think it is what made us the dominant species. And we're giving rise to a species that is going to be orders of magnitude more intelligent than us.

Prateek Joshi (12:16.728)
Right. Shifting gears a little bit. For listeners who don't know, can you quickly explain what Lindy AI does?

Flo Crivello (12:26.122)
Yeah, so we are building an AI employee.

over the long term, and frankly not even that long a term, I think these AI employees are going to be sophisticated enough that they can manage entire companies autonomously. I think we're going to have autonomous AI companies made of thousands and thousands of AI employees just pursuing goals. And so we built a platform that lets you create as many of these AI employees as you want. You give them a goal, you give them access to certain tools and applications, and then they pursue the goal you give them

using the tools you give them access to. So these AI employees can be support agents, they can be recruiters, they can be marketers, they can be salespeople. Soon they'll be able to make phone calls, and so on and so forth.

Prateek Joshi (13:12.592)
Amazing. So, outside of Lindy, let's say you're talking to a customer, a big company, and you're advising them: what do customers really want from an LLM product?

Flo Crivello (13:31.102)
Yeah, I think especially business customers don't really know yet. I think they're very aware that something important is happening, but they're sort of confused by how fast things are moving and the breadth of offerings out there. They don't even really know how to evaluate these offerings. So I don't know that they know what to look for. If I had to advise them on what to look for, I would tell them:

Look for something that just delivers value right now. Be sort of greedy, because if it delivers value right now, I think it's likely that it will deliver even more value in the future. I would urge the very big enterprise companies to be more willing to experiment with smaller players earlier

than they have in the past. I totally understand the concerns about, oh, they're going to be a valuable long-term partner, but the truth of it is that things are moving so rapidly right now, and the companies that are really building the groundbreaking stuff are going to be startups. And so, I...

I would recommend companies just fuck around and find out: implement a lot of things, work with a lot of things, try a lot of things, and see what's working for them. I would let a thousand flowers bloom.

Prateek Joshi (14:51.312)
That's amazing. And actually, that's a good segue into my next question. If you look at the array of foundation models available, on one end we have very large foundation models, very generic, addressing a whole bunch of different use cases. And on the other end, we have a large number of smaller specialist models for a specific task or use case, and the world has a place for both.

But how do you think this market will pan out? Will the world need more of the large generic models, or will it need the smaller specialist models?

Flo Crivello (15:36.118)
Yeah, I.

I don't think it's an either-or, but by and large, I think the large generalist models are going to dominate, for the simple reason that they're going to become cheaper and cheaper. And for a lot of tasks, we're going to reach diminishing returns. Meaning, for most tasks that people care about, and I'm not talking about the stuff like solving global warming and curing cancer, but writing my email and triaging my inbox and so forth, I don't need God to write my email.

And so, okay, like 3.5 is already doing a pretty good job at that. And 3.5 is going to get exponentially cheaper from here and it's already basically free.

And so really, I think the biggest advantage of using a specialized model here would be the capability of the model. But again, 3.5 is doing a reasonable job, and if I k-shot prompt it with my writing, it does a really good job. The other advantage would be the speed and the cost efficiency of the model. But again, the big models are so good and so cheap and so fast that it doesn't really matter. So my money, over the long term, is on the big general models.
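The "k-shot prompt it with my writing" idea amounts to simple string assembly: prepend k examples of your own (input, output) pairs so a general model imitates your style. The function and prompt format below are hypothetical, just to make the idea concrete:

```python
def build_k_shot_prompt(examples, new_email):
    """Assemble a k-shot prompt from (email, reply) example pairs, so a
    general model can imitate the style shown in the examples."""
    parts = []
    for email, reply in examples:
        parts.append("Email: " + email + "\nReply: " + reply)
    # The final entry leaves the reply blank for the model to complete.
    parts.append("Email: " + new_email + "\nReply:")
    return "\n\n".join(parts)

prompt = build_k_shot_prompt(
    [("Can we move our call?", "Sure, does Thursday work?")],
    "Are you free for lunch tomorrow?",
)
print(prompt)
```

In a real setup, the assembled string would be sent to whichever model API you use; the design choice here is that no fine-tuning is needed, only a handful of examples in context.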

Prateek Joshi (16:49.524)
Right. And early on, you talked about chips. And in AI, you cannot talk about chips and not talk about NVIDIA. So I want to spend a minute just talking about NVIDIA. It's been a huge, huge driving force behind this AI revolution. And obviously, they supply the GPU, the flagship GPUs to the entire AI world. Now, obviously, big company, hugely dominant.

So at this point, what are its strengths, meaning why does it stay dominant? And more importantly, what are the angles of attack? Meaning, if there's an upstart thinking, okay, here's NVIDIA's weakness, what would that be?

Flo Crivello (17:35.154)
Yeah, well, first of all, it's very hard to be an upstart in semis. It's a very tough market to break into. The strength is CUDA. That's the real moat that they've built around the chip. The silicon is not really the moat.

Prateek Joshi (17:42.065)
Right.

Flo Crivello (17:52.59)
CUDA is the moat. And it's just extremely hard to catch up on. If you look at the developer experience on competing chips, like AMD's chips or even Amazon's chips, it's very, very far behind NVIDIA's experience. Sometimes, frankly, it's sort of embarrassing how far behind it is. So, if I were to start a semi company, which I'm not, I would look into...

I'm very excited right now about

the companies that are basically burning LLMs onto a chip. I believe Huawei is working on something like that, or maybe Qualcomm. But technically, you could burn a very large model onto a single chip and get basically an entire forward pass to happen at roughly the speed of light. And I think that could make a huge, huge difference for a lot of people.

Prateek Joshi (18:49.968)
Right. And of course, CUDA is NVIDIA's software platform that enables people to leverage the hardware to build general-purpose compute applications. So NVIDIA's hardware is obviously very complicated and important, but even more critical to their dominance is the software stack they've built on top of it. I think that's a fantastic point.

And also, looking forward, the world is consuming so much compute. Everyone is hungry for AI compute, and a lot of chips are being produced, and there are cloud services being offered, infrastructure platforms being offered. So if you look at the AI compute market, software and hardware, how will that market shape up? Meaning...

Will the incumbents take all of it? Is there room for a startup to build, maybe not the chip, but anything else on the compute side?

Flo Crivello (19:57.674)
I mean, right now, Nvidia is basically a monopoly that seems to have a very safe chokehold on the market. I foresee that's going to remain the case for the foreseeable future.

Prateek Joshi (20:09.848)
Right. And if you look at the software infrastructure side, meaning obviously, NVIDIA supplies the chips and say big three cloud providers, they'll provide all of the cloud compute you need. What other infrastructure, like software products does the world need right now? Do we need more frameworks, more speedups, more training tools, more testing tools? Like what do we need on the infrastructure side today?

Flo Crivello (20:36.639)
Yeah.

There's a lot of work being done around evals, evaluations of your product. There's a lot of work being done around guardrails that you can put around your LLM-based product. There's a lot of work being done around observability. There's a lot of work being done around automatic prompt engineering.

You know, I will say I'm feeling sort of bearish on LLM Ops as a category. ML Ops as a category didn't pan out nearly as well as anyone expected, and LLM Ops...

I feel like at this point there's more LLM Ops companies than there are actual large language models in production. I feel like there might be a lot of concentration in the large language models that are in production. So the market size...

Prateek Joshi (21:23.386)
That's right.

Flo Crivello (21:30.378)
in dollars might be big, but in companies it might be small. And you actually don't want a market size in companies that is too small, because then you have a lot of revenue concentration. You also run the risk of these companies just building their own internal tooling, and we're actually seeing quite a bit of that already happening. I also have a personal philosophical beef with LLM Ops and tooling companies, because they strike me as what Peter Thiel calls indefinite optimism.

Flo Crivello (22:00.792)
It's like you're confident that everything is going to pan out well, but you don't know exactly in what way. So you sort of spread your bets, and you make this meta bet: I'm going to bet on the category and build this thing. And I'm like, man, the future needs to be built right now. There are too few builders, and we need everyone we can get to actually come over to the application layer. The water is fine. So I wish fewer people were making this kind of bet, and more people were actually trying to take a definite

Prateek Joshi (22:21.914)
Hehehe

Flo Crivello (22:30.452)
view, and a strong view, of: hey, what is the future going to look like, and how do we accelerate that future coming about?

Prateek Joshi (22:41.328)
That's amazing. That's actually a wonderful point: you need ops when there's stuff that's already built and mature and you want to monitor it. And again, if you look at cloud compute, when it came around, first the cloud was built and companies were built on it, and then...

Flo Crivello (22:50.05)
Right.

Prateek Joshi (23:01.356)
after it was semi-mature, then Datadog came along and said, hey, look, there are so many servers and applications and microservices, we need a platform to keep an eye on the many, many things. Then maybe it made sense. But even before AWS, if you had set up a Datadog, it just wouldn't have made much sense. All right, let's shift gears to AI agents. And given OpenAI has been

providing amazing models, and people have been building AI agents, AI agent platforms, many different little tasky use cases. So, step one, can you just define what an AI agent does for our listeners?

Flo Crivello (23:46.698)
Yeah, there are a lot of definitions, and according to some of them, regular software is basically an agent. But I like to describe it as an autonomous entity that is given access to tools and can use those tools to pursue goals in the world. That's an agent, for me.
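Flo's definition, an autonomous entity that uses tools to pursue a goal, maps naturally onto a tool-use loop. A minimal sketch (not Lindy's actual architecture; the `llm` callable and its action format are hypothetical stand-ins for a real model call):

```python
def run_agent(llm, tools, goal, max_steps=10):
    """Minimal agent loop: the model repeatedly picks a tool and arguments,
    observes the result, and stops when it declares the goal met."""
    history = ["Goal: " + goal]
    for _ in range(max_steps):
        action = llm(history)            # e.g. {"tool": "add", "args": {...}}
        if action["tool"] == "finish":
            return action["result"]
        observation = tools[action["tool"]](**action["args"])
        history.append(action["tool"] + " -> " + str(observation))
    return None  # gave up within the step budget
```

A real agent would wrap an actual LLM API call and real integrations (email, calendar, CRM); the step budget and human-review hooks are where the reliability concerns discussed later in the episode come in.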

Prateek Joshi (24:04.568)
All right, and if you look at an AI agent, let's talk about a consumer first. So what needs to be true for an AI agent to be useful to a consumer?

Flo Crivello (24:19.582)
Yeah, well, it needs to be.

it needs to pass the viability threshold. It needs to be sufficiently reliable, I think most importantly, and deliver more value in your life than it's costing you, whether that cost is money or the stress of worrying that it's going to screw up. Basically, that viability threshold is a function of how high the stakes are for the task you're trying to perform, and how much value

the agent provides to you when it delivers the task correctly. And there is a way to lower the stakes greatly by introducing a human in the loop. Then the agent is not fully autonomous; it just submits the work to you as a human, and you review it, which I think is the form taken by

the most successful AI products out there right now, which are basically Harvey for law and GitHub Copilot for code. They have very high value, because law and code are expensive, and the stakes are high. But there is a human in the loop: the lawyer reviews the work that Harvey does for them, and with Copilot, the engineer literally lives in the repository and sees the kind of code that's being written live.

So since there is a human in the loop, the stakes are actually low, while the value created is a great deal. That's what I think is needed for AI agents to deliver value to you.

Prateek Joshi (25:49.132)
Right. I have one final question before we go to the rapid-fire round, and it's around the discussion of where the value is accruing within AI. Some people say that, hey, big tech companies are going to take all of it, because you mostly need a lot of money, a lot of resources, and years of expertise. And there are people on the other side

who say that, hey, incumbents have always existed. That's been true for every startup ever, but it won't stop startups from finding angles of attack and becoming the next big company. So when it comes to AI, and obviously it's a more nuanced answer, where are the opportunities for startups, and what areas will be dominated by incumbents?

Flo Crivello (26:37.258)
Yeah, you know, my mental model of this is that incumbents win whenever they can tack the innovation on top of their existing product. And startups win whenever the existing product needs to be rebuilt from scratch, because the innovation cannot just be tacked on; it's at the core of the new product. I think the great example of that is Figma.

Flo Crivello (27:05.394)
Multiplayer, browser-based editing is not something that Adobe could tack on top of any of their products. They would have had to rebuild entire products from scratch, and that's what they tried to do with Adobe XD. But once you try to do that as a big company, you arguably are not at much of an advantage versus the startup, and are arguably at a disadvantage, because your existing customer base and your existing legacy, both the technical legacy and the institutional legacy inside your company, are actually holding you back and preventing you from moving fast and from innovating on building this new product fast.

The other example of that is mobile with Salesforce. Elad Gil always tells the story of when mobile came about: a lot of startups were pitching the mobile CRM for Salesforce, for salespeople who are on the go and so forth. And it turned out that the mobile CRM was actually Salesforce, because they could actually just tack that innovation on top of the existing product. So when it comes to AI, I think very often,

you can tack the innovation on top of the existing products. So for example, Zoom is doing a reasonable job rolling AI out into the product and giving you meeting summaries and so forth. But I think there are a lot of products that really need to be reimagined from the ground up. So for example, AI agents: in the end, if you really had to make them fit in a category, it would be the automation category. But when I look at the big automation players out there,

I don't see a single one of them where AI fits neatly on top of the product. And when I look at the way we have built Lindy in a way that I think is native to this new AI paradigm, I view an architecture that I think is very alien to what I imagine is the architecture of these other automation products.

Prateek Joshi (28:54.008)
Amazing. And I love that viewpoint from Elad Gil. He was actually a guest on the podcast a few months ago. And I think what you said about whether you can just add the innovation on top of an existing product: if that's easy, then yeah, incumbents will just take it, because they have distribution, and that's what counts, getting the thing into the hands of the customer. So startups don't have an angle there. I think that's a fantastic way of looking at it. All right.

With that, we're at the rapid fire round. I will ask a series of questions and would love to hear your answers in 15 seconds or less.

Flo Crivello (29:32.609)
Sounds good.

Prateek Joshi (29:34.296)
Alright, question number one. What's your favorite book?

Flo Crivello (29:38.527)
Atlas Shrugged.

Prateek Joshi (29:41.964)
Perfect. I think Atlas Shrugged is a fantastic book, long, and obviously beautiful. We can talk about that in another episode, but love that. All right, next question. What has been an important but overlooked AI trend over the last 12 months?

Flo Crivello (30:08.674)
Quantization.

Prateek Joshi (30:12.088)
Alright, next.

Flo Crivello (30:12.559)
Perhaps you want more than one word. I think it's a very big deal that we're about to get so much performance out of such small models. I think everybody expected only the giants to be able to create these models, and it turns out that you can get really, really good performance out of models that run locally. I did not expect that.
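The quantization trend behind this answer, in its simplest form, is storing weights as small integers plus one floating-point scale. A toy symmetric int8 sketch (illustrative only; real schemes are per-channel or per-block and considerably more careful):

```python
def quantize_int8(weights):
    """Symmetric quantization: one shared scale, integers in [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize(quants, scale):
    """Recover approximate floats from the integers and the scale."""
    return [q * scale for q in quants]

weights = [0.5, -1.0, 0.25, 0.9]
quants, scale = quantize_int8(weights)
restored = dequantize(quants, scale)
# Each restored weight is within half a quantization step of the original.
assert all(abs(w - r) <= scale / 2 + 1e-9 for w, r in zip(weights, restored))
```

The memory win is the point: each weight shrinks from 4 bytes (float32) to 1 byte, which is a big part of why surprisingly capable models now fit on local hardware.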

Prateek Joshi (30:24.123)
Mm.

Prateek Joshi (30:28.708)
Amazing. Next question. What type of knowledge professional is most underserved by AI right now?

Flo Crivello (30:38.078)
I think they're all about equally underserved, except for lawyers and engineers. I think we haven't even scratched the surface yet.

Prateek Joshi (30:40.816)
Hahaha

Prateek Joshi (30:46.02)
All right, amazing. Yeah, that's great. All right, next question. What separates good AI products from the great ones?

Flo Crivello (30:55.19)
It takes a deep understanding of the problem you're trying to solve, and the willingness to nail every single detail, in order to provide a magical experience from beginning to end, and most importantly, a reliable experience.

Prateek Joshi (31:07.748)
Right, next question. As a founder, what have you changed your mind on recently?

Flo Crivello (31:16.622)
I think the AI safety debate is the biggest thing I've changed my mind on. I used to share a lot of people's skepticism about it, and as the facts change, I change my mind.

Prateek Joshi (31:27.5)
What's your biggest AI prediction for the next 12 months?

Flo Crivello (31:39.638)
I should keep this one to myself. But I think AI agents. I think people still don't quite realize how important AI agents are going to be.

Prateek Joshi (31:41.732)
Ha ha ha!

Prateek Joshi (31:49.928)
Awesome. All right, next, last question. What's your number one advice to founders today?

Flo Crivello (31:58.87)
Fuck around and find out.

Prateek Joshi (32:01.54)
Perfect. I think that's a perfect place to end the episode, and I think that's where we are in the AI phase anyway. You just have to experiment, keep doing things, and some will work, some won't. But the point is exactly that: just keep doing stuff instead of debating and endlessly arguing about a whole bunch of things. I think that's a great way of doing it. So this has been a fantastic episode. Loved your candid viewpoints on so many topics, and I'm glad we covered many, many topics here. So thanks again for coming on the show and sharing your insights.

Flo Crivello (32:37.046)
Thanks for having me, Prateek.

Prateek Joshi (32:39.384)