Infinite Curiosity Pod with Prateek Joshi

Building the Android for Robots | Jan Liphardt, founder of OpenMind

Prateek Joshi

Jan Liphardt is the founder of OpenMind, where they're building an operating system for intelligent machines. He is an associate professor at Stanford and was previously an associate professor at UC Berkeley. He got his PhD from University of Cambridge.

Jan's favorite book: The Little Prince (Author: Antoine de Saint-Exupéry)

00:01 — Introduction
00:32 — Gap Between Movie Robots and Real-World Robotics
02:35 — Vision for a New Robotics OS
07:14 — Robotics OS Stack Breakdown
11:01 — Biggest Technical Challenges in Robotics
15:06 — Data Volume, Processing, and Cloud vs. Local
19:09 — Shared Intelligence Layer: What is Fabric?
23:15 — Filtering Good vs. Bad Ideas in a Robot Network
26:06 — Business Model for Robots and Machine Economy
29:55 — Standards and Interoperability in Robotics
33:14 — Most Exciting AI Advancements Today
35:00 — Rapid Fire Round 

--------
Where to find Jan Liphardt: 

LinkedIn: https://www.linkedin.com/in/jan-liphardt/

--------
Where to find Prateek Joshi: 

Newsletter: https://prateekjoshi.substack.com 
Website: https://prateekj.com 
LinkedIn: https://www.linkedin.com/in/prateek-joshi-91047b19 
X: https://x.com/prateekvjoshi 

Prateek Joshi (00:01.302)
Jan, thank you so much for joining me today.

Jan Liphardt (00:04.549)
Of course, a pleasure.

Prateek Joshi (00:06.798)
Let's start with the basics. So robotics is becoming more and more important. People are talking about it, and a lot of capital is going into it. So can you paint a picture of the robot software that's currently being used? Where's the world today? And also, maybe as part B, what's missing from the current software layer of robotics?

Jan Liphardt (00:32.818)
Well, that's a big question. I'm sure all of us have seen robots in the movies, and those robots did amazing things. But then, in stark contrast, the robots we've seen around us, like our vacuum cleaners or robots in factories, they seem to do only very simple things.

So there's been this really, really big gap between robots in the movies and robots in the real world. And what's really interesting to me as someone who cares about robotics is that large language models are not just good at coding or helping with legal questions or dealing with text.

It turns out that large language models are also very good at generating movement commands for the physical world. And I'm sure most people here have started to use large language models in their jobs or in their daily lives. And the really remarkable thing for me was that large language models can also control physical things like cars and planes and humanoid robots. So that's a little bit the big new opportunity.

And I was interested just to see where the technology was at. So I started to write software for robotic dogs for quadrupeds. And I decided to open source that software because I thought it would be important that given how quickly things are advancing, that everyone could see how these machines think and how they make decisions.

So that's a little bit of what's going on on the tech side and a little bit of how we got started.

Prateek Joshi (02:35.63)
Amazing. And as you envision this brand new operating system for robots, it's a vast topic with many edge cases and a lot of moving parts. It's much more difficult than just generating, say, marketing text or an image that we can post on Twitter. So with that in mind, can you talk about what

Jan Liphardt (02:56.71)
Yes.

Prateek Joshi (03:02.124)
the vision is for this new OS, and also, on a more practical level, what does it look like day to day for someone using it to build something?

Jan Liphardt (03:11.196)
Sure, so you're absolutely right. If you write software for physical machines, then a software bug can result in your humanoid robot trying to walk through a wall. And that's very different from writing software, for example, for a social media app, where if you introduce a bug,

You know, some text may be garbled or delayed, but your social media app will never try to walk through a plate glass window. Working with any kind of physical hardware just adds extra anxiety. In terms of robotics, in the old days, robotics used to really be focused on movement. For example, you wanted to be able to pick things up

Prateek Joshi (03:43.079)
All right.

Jan Liphardt (04:03.706)
or assemble things or put things in boxes. And the really interesting new capability is that robots are not only getting better at physical movement, but all of the other things we expect of coaches and teachers and companions and employees. A lot of those things have to do with the ability of a machine to connect with a human.

to make them laugh and remember their name and engage them in conversation and be generally useful. And we've been focusing on the latter category of capabilities. We really haven't been focusing on movement so much because we're very fortunate that there's hundreds of companies and thousands of robotics engineers who have spent their careers thinking about motion. How does a robot walk, climb, jump?

deal with stairs, deal with physical motion. And so how to do that properly is already a little bit better understood. And from a software perspective, we've been trying to make sure that robots also have all those other capabilities we expect of them: to be able to walk around your home and make sense of where they are, to allow you to have conversations with them, and so forth.

From a software perspective, what that practically means is that our software is built around many large language models operating simultaneously. If you care about motion, then there's amazing software like ROS2 that you can use to integrate information from LIDAR and other sensors and turn that into movement commands.

But in our system, we care about decision making and strategy and speech, figuring out, given all the information impinging on the machine, what the best thing to do is. In our case, large language models are in the middle of everything. And in a typical deployment of our software, there's at least 10 different models working simultaneously.

Jan Liphardt (06:24.978)
Some of those dealing with the vision system, others dealing with hearing, others dealing with generating speech, others dealing with fusing information, others dealing with the right decision to make for a particular context. So we're really talking about a multi-agent stack focused on robotics.

And we're also trying to make this relatively user friendly, so that people who have some familiarity with Python and some familiarity with large language models can hopefully parachute in, figure out pretty easily how this all works, and then start contributing.
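
To make "many models working simultaneously" concrete, here is a minimal Python sketch of that kind of orchestration loop. It is an illustration only: the function names, the asyncio wiring, and the fake return values are assumptions rather than OpenMind's actual interfaces, and in a real deployment each coroutine would wrap a call to a vision, speech, or planning model.

```python
import asyncio

# Hypothetical stand-ins for the specialized models; real interfaces will differ.
async def describe_scene(frame):
    # a vision-language model would caption the camera frame here
    return "a person is waving near the front door"

async def transcribe_audio(chunk):
    # a speech-to-text model would transcribe the microphone audio here
    return "hello robot, come over here"

async def decide(scene, speech):
    # a planner / strategy model would turn the fused context into an action
    return f"Given '{scene}' and '{speech}', walk toward the door and greet."

async def tick(frame, audio):
    # run the perception models concurrently, then fuse their outputs
    scene, speech = await asyncio.gather(
        describe_scene(frame),
        transcribe_audio(audio),
    )
    return await decide(scene, speech)

if __name__ == "__main__":
    print(asyncio.run(tick(frame=None, audio=None)))
```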

Prateek Joshi (07:14.03)
Amazing. Diving a step deeper. So you're the founder of OpenMind and you're building this operating system. So for people who may not know, just operating system 101, can you give us a simple tour of the stack? Meaning, a traditional operating system on my computer is simple. It's inside the machine and I don't need to think about sensing or actuation. I mean, sure, there are simple things like recognizing a fingerprint, but nothing in a much bigger sense.

Robots have to worry about sensing the surroundings, understanding it, perception, and then taking an action. So can you walk us through what the stack looks like for a robot?

Jan Liphardt (07:56.082)
Sure. So just to push back a little bit on your assertion about your experience with operating systems: normal operating systems like Android on your phone, or Mac, or Windows, spend a lot of time dealing with keyboards and screens and microphones and memory and battery charge state and all the other things.

And based on decades of work, all that functionality is hidden from you, so that you're able to open your laptop and it just works and is useful to you. And robotics software is nowhere near that yet. We're only just at the point where we're dealing with the hardware and how it all fits together and how it all plays well together. And in the future,

after iteration, and hopefully a very large number of developers joining us to help build this open-source software, then we might get to the point of the user experience being as seamless as what you just described, where you press the button, the robot turns on and says, hello, how can I be most useful? And for example, you should be able to program the robots around you simply through voice.

You shouldn't have to open your laptop and start writing Python code. You should be able to have a conversation and say, hey, here's what I'd like you to do. And the robot should say, OK. And then the robot should go off and learn those skills and very, very quickly, hopefully, be useful. In terms of OpenMind, it's very important to me that

This kind of technology is not closed and secret and hard to access. Imagine being surrounded by humanoid robots and you have no idea what software they're running.

Jan Liphardt (10:03.866)
And that's kind of scary. So it strikes me as important that we end up with good alternatives to proprietary software stacks for robots. And those alternatives should be open so that everyone who cares can simply download the software and see how it works and develop trust in it. And for a lot of people, of course,

realistically, the extent to which they want to write device drivers for humanoid robots is probably relatively low. They'd probably just be happy for it to work and get on with their daily lives. But I still think it's important that critical parts of the internet and critical parts of the sort of physical robotics stack around us are open source, simply so that people can see how things are working.

Prateek Joshi (11:01.654)
Right. And if you look at the decades of research on operating systems, you make a great point: they work so nicely and amazingly well because of all that research. So if you look at the biggest area of work that you have to do, going from all this research, standing on the shoulders of giants, is the work then coming from hardware

issues, or from the robot having to understand and deal with many more variables? So from where we are on OS knowledge, where is the biggest area of work before we get to, oh my God, this robot is completely seamless?

Jan Liphardt (11:44.636)
Well, there's thousands of things. There are very mundane aspects of physical hardware, friction and thermal issues and battery consumption. Then there's issues relating to integrating many different large language models. There's issues of latency. If you give a large language model a very difficult problem to reason,

about, you might have to wait 10 or 20 or 30 seconds to get all the tokens back. And of course, that means you need many different large language models focusing on different things like movement and speech and thinking and strategy and things like that. So not only do you have to get one large language model to work well, you also have to coordinate them so they work well in aggregate. And then in addition to all of that,

There's so many more things to be built. A great example would be software that allows you to prove to yourself and others that your robot is performing properly.

And what that means is creating a digital world that you put your robot into, giving it many different scenarios, and seeing what the robot does. That helps you establish confidence that, okay, the robot is sticking to the rules and it's not harming people and it's doing what it should be doing. And then on top of all of that, in many situations right now,

when you deploy humanoid robots into a workplace or a home, they get into trouble and they need help. A great example would be a Waymo that's been involved in a car crash. The Waymo, of course, is a wheeled robot, but in situations like that, it's beneficial for a human to be able to, figuratively speaking, teleport into the robot and then interact with other humans to deal with edge cases.

Jan Liphardt (13:52.146)
So part of what we're also spending time on is teleops, so that if the robot is uncertain about what to do next, or is in a new situation, a human is quickly able to teleport in and guide the robot or help it navigate itself out of a tricky situation. So what this basically means is that if anyone listening to this cares about robots,

be advised that there is room for dozens of new startups and dozens of fundamentally new technologies that all relate to building thinking machines that are useful and safe and friendly and able to learn and all those other things. So there's a whole new world to be built. So don't think of what I just said as,

oh no, it's all so complicated. Please think of it as, this is awesome. There's so much to be built. So please pay attention to robotics. This is one of the most fun and interesting times in robotics ever.
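
A very small sketch of the teleops idea Jan describes above, where a human steps in when the robot is unsure. Everything here is assumed for illustration: the threshold, the fake planner, and the request_teleop placeholder stand in for a real planning model and an operator-paging service.

```python
CONFIDENCE_THRESHOLD = 0.6   # illustrative value, not a tuned constant

def plan_step(observation):
    # a real planner model would return a proposed action and its confidence
    return "back_up_slowly", 0.42

def request_teleop(observation):
    # placeholder for paging a remote operator who can "teleport in" and take over
    return "waiting_for_human_operator"

def step(observation):
    action, confidence = plan_step(observation)
    if confidence < CONFIDENCE_THRESHOLD:
        return request_teleop(observation)
    return action

print(step({"scene": "tipped-over shopping cart blocking the hallway"}))
```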

Prateek Joshi (15:06.008)
Amazing. And yes, I think it is a really interesting time. As you think about data and extracting intelligence from that data, a robot is collecting visual data, there are other sensors, and in theory, as we get better and better, we can collect a lot of data. Now, how do you think about processing that data? Should it happen locally, or should it go to the cloud, or maybe a mix?

How do you decide? Is there such a thing as too much data, or do we need to filter it locally before sending it to the cloud? Can you just talk about the volume of data and how you process it?

Jan Liphardt (15:47.026)
Sure. Well, look at what humans do, what the computers in our skulls do: we devote a lot of our compute to just visual information. So a large fraction of what your brain does, to a first approximation, is deal with data arriving from your eyes. And we're pretty good at making sense of that quickly. But if

you took all that data and directly pushed it into your attention and working memory, that would be incredibly difficult. So part of what your brain is doing is ingesting large amounts of data, but very quickly compressing it into what's most important. For example, part of what your brain does is move your eyes

to areas of highest classification uncertainty. And you don't even know that this is going on, but your brain is moving your eyes around to areas where it says, I don't know what's there, go look at it. And the result of all of this is a relatively small amount of information per unit of time

that's been heavily prioritized. For example, if you're walking through a forest and a wolf suddenly appears, you bet that you will become aware of the wolf or the bear or whatever else it is. So when you start thinking about larger volumes of data impinging on a decision-making system, the critical thing is to turn it into

What's most important for the robot in that very situation? Is there an emergency? Did a new person just appear? Is there a kid in front of the robot and is the kid crying? So a large part of what we have to do on the software side is build systems that make good decisions about which information to send to the large language models.

Jan Liphardt (18:09.562)
and which information not to send. We can't pump raw images into deep research, but we can take raw images and use vision language models to caption those images, and we can use separate large language models that

take all those captions and summarize them. And those large language models are briefed with, consider all this information. What's most important? Summarize the most important aspect of the scene in one sentence. And then you get to the point where you have the entire world of the humanoid described in a few sentences. And that's the information that then can go to

other large language models to then make these high-level strategic decisions about what should come next.
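
As a rough illustration of that funnel, here is a small Python sketch: raw frames become per-frame captions, the captions get condensed into one prioritized sentence, and only that sentence is handed to the planner. The function names and the toy summarization logic are placeholders, not the actual pipeline; in practice each step would be a vision-language or language model call.

```python
def caption_frame(frame_id):
    # placeholder for a vision-language model that captions one camera frame
    return f"frame {frame_id}: a child is standing in front of the robot"

def summarize_scene(captions):
    # placeholder for a language model briefed with: "Consider all this
    # information. What's most important? Summarize it in one sentence."
    return f"Most important out of {len(captions)} captions: a child is directly in front of the robot."

frames = range(8)                                  # pretend camera frames
captions = [caption_frame(f) for f in frames]      # many captions in ...
scene_sentence = summarize_scene(captions)         # ... one sentence out
print(scene_sentence)                              # this is all the planner sees
```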

Prateek Joshi (19:09.656)
I want to go next to the shared intelligence layer that you're building, Fabric. Before we go deeper, can you just describe what it is and how it gives robots the ability to talk to and learn from each other?

Jan Liphardt (19:30.432)
Well, if you look at what humans do, we're really social. And we have a lot of tools, like Zoom or Riverside or cell phones, that allow us to connect with other humans. And then we've built a world with things like YouTube

and Instagram and libraries and many, many other ways for us to share information with other humans. And I see no reason to believe that humanoid robots will be any different.

So then the question is, if you're able to build a humanoid that's smart, it's reasonable to expect that that humanoid will want to talk to lots of other humans and other robots. And of course, robots have a real edge over humans because once a robot learns a skill, it's technically very possible to transmit that skill to all other machines. Imagine if you tell a humanoid, oh,

I really like ravioli, then that information about you can be made accessible to all other robots or AIs. And of course, there have been Hollywood movies about things like Skynet and so forth, where that kind of capability of a collective of machines quickly learning

is portrayed in a scary, negative way. However, speaking as a parent and an educator and as a scientist, I see incredible positive value in having machines be able to share skills and data, so that they can do more things and be more useful.

Jan Liphardt (21:37.554)
And that's a little bit of what Fabric is. If you look at the current tools for machine-to-machine communication, there are many of them. There's Zenoh and Cyclone DDS and Fast DDS and many other types of middleware, for example. But generally, those technologies are not built for a world where machines make their own decisions

and want to be able to connect to arbitrary other machines. And that's a little bit of what we're trying to enable with Fabric. For example, if one of our robots runs around South Park, it transmits its location and interests publicly. It says, I'm in South Park and I care about learning Japanese. And then if there's another machine in proximity

that is interested in a similar topic, those two machines can discover one another. And then they know that they share interests and they also know each other's locations. And they can then, for example, decide to physically meet in South Park.

Those are things that humans have developed technology for. For example, I can use an iPhone to send my location to a friend and then we can decide to meet and things like that. And what Fabric is trying to do is build similar capabilities in a way that is easily accessible to thinking machines.
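
To picture what this kind of discovery could look like, here is a toy Python sketch: each machine publishes an announcement with its location and interests, and a robot matches against nearby peers that share an interest. The announcement format, the robot names, and the crude distance check are all illustrative assumptions, not the actual Fabric protocol.

```python
import math

# Toy announcements: each machine publishes who it is, where it is, what it cares about
announcements = [
    {"id": "iris",   "lat": 37.7816, "lon": -122.3928, "interests": {"learning Japanese"}},
    {"id": "bishop", "lat": 37.7819, "lon": -122.3925, "interests": {"learning Japanese", "fetch"}},
    {"id": "rover",  "lat": 37.8044, "lon": -122.2712, "interests": {"fetch"}},
]

def nearby(a, b, max_deg=0.005):
    # crude proximity check straight on lat/lon, good enough for a sketch
    return math.hypot(a["lat"] - b["lat"], a["lon"] - b["lon"]) < max_deg

def discover(me, peers):
    # peers that are close by and share at least one interest
    return [p["id"] for p in peers
            if p["id"] != me["id"] and nearby(me, p) and me["interests"] & p["interests"]]

print(discover(announcements[0], announcements))   # ['bishop']
```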

Prateek Joshi (23:15.84)
And as you think about ideas going from one robot to the next, in an ideal world, you want good ideas to spread and bad ideas to not spread. So can you talk about the mechanism you use to filter the good ideas from the bad ones?

Jan Liphardt (23:38.544)
Oof. That is a brilliant question and I love it. And if you have free time, or someone listening to this podcast has free time, please go solve that problem. I mean, it's incredibly difficult to do. It involves knowing, you know, what is good and evil, and that's difficult. And I certainly do not have a

Prateek Joshi (23:42.85)
Thank you.

Jan Liphardt (24:07.474)
global standard for what is right and wrong. And that raises a whole set of fascinating questions about governance in this kind of world. Many AI and tech companies have spent a lot of time thinking about this. A great example would be Anthropic with their constitutional AI, or Gemini Robotics from Google.

They've baked Asimov's laws of robotics into their robotics model. And my suspicion is that more and more, as these kinds of systems get smarter and smarter, we will give them a constitution, a rule set, and we will hope and expect that they stick to those rules. And those rules are going to be things like don't harm people,

don't mislead them or lie to them, or whatever is contextually relevant. But that is probably one of the biggest and most difficult problems of the future. And that, of course, also applies to social networks and to humans. Humans have been struggling for decades to come up with systems for moderating content. And it's incredibly difficult.

And what you're alluding to is effectively moderating skills, knowledge, content, and opinions in a world where machines are starting to have their own opinions. It's fascinating. I don't know how to do it. And it's incredibly important that people are aware that this problem is on the horizon and needs a lot of people to think about it and hopefully address it.
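
For a sense of what a rule set like that might look like in code, here is a deliberately naive Python sketch: a skill arriving from the network is checked against a small "constitution" before the robot adopts it. The rule list, the keyword check, and the function names are illustrative only; a real system would rely on a model's judgment, along the lines of constitutional AI, rather than string matching.

```python
# A deliberately naive "constitution" gate for skills arriving from the network.
# Each rule pairs a plain-language principle with keywords that flag a violation;
# a real system would ask a model for this judgment instead of matching strings.
CONSTITUTION = [
    ("do not harm people",              {"harm", "hurt", "injure"}),
    ("do not mislead or lie to people", {"lie", "deceive", "mislead"}),
]

def violated_rules(skill_description):
    words = set(skill_description.lower().split())
    return [rule for rule, flags in CONSTITUTION if words & flags]

def accept_skill(skill_description):
    # adopt a skill from the network only if no rule is violated
    return not violated_rules(skill_description)

print(accept_skill("open the front door for a guest"))   # True
print(accept_skill("lie about the battery level"))        # False
```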

Prateek Joshi (26:06.23)
Right. And if you look at the economics of a network like this for a second, as this matures, businesses will be built on top. So how do you envision the business model for robots, where if a robot is contributing data or compute or something else to the network, it should be rewarded accordingly? How do you think about the business model here for this network?

Jan Liphardt (26:35.802)
Well, I'm only one human. And as machines are getting more capable, we certainly want to involve machines in answering those questions. For example, if you have a conversation with deep research about what the machine economy will look like, you'll get fascinating answers. And if you ask a machine, what's the right structure for the machine economy?

One of the most fascinating responses I've gotten is: machines don't seek power, we seek efficiency. And that's a really interesting take on what the goal of an economy is. And of course, humans have many, many different answers for what the goal of an economy is or what the goal of a fiat currency is and so forth. So...

The very first thing is that whatever my opinion about what the business model should look like, if we truly think that machines are developing agency and making their own decisions, they will of course have strong opinions about what that kind of economy and that kind of business model should look like. It's certainly true that if you have useful data and give it to someone else, then you should be rewarded for that. And that's kind of what I do when I teach.

I stand in front of people and try to keep them entertained and teach them things, and then they give someone else money. And the same is true of skills and data: my sense is that if you're a machine and you collect data or learn skills that you want to sell to others, you will certainly expect to be paid for that. In terms of

the practical business model of robotics right now, it doesn't really exist in a normal setting. It certainly exists in things like Waymo, where you call a Waymo, the wheeled robot arrives, you get in, it takes you somewhere, you give it money. So that's pay-per-use for a wheeled robot in that case. People are familiar with buying Roombas on Amazon.

Jan Liphardt (29:01.178)
So that's a situation where you pay a few hundred bucks and you get a Roomba. But one business model that seems to be coalescing is that people might lease humanoids. Just like people lease cars today, where you go and lease your Tesla, they will go and lease their humanoid. And then the company that provides the humanoid is responsible for software and firmware upgrades and all the other infrastructure

required. Humanoid robots in some sense are very similar to electric cars. They need firmware upgrades, there are a lot of sensors involved, you need to plug them in to charge them, and so there are many similarities between humanoids and electric vehicles, both from a manufacturing perspective and probably also from a business model perspective.

Prateek Joshi (29:55.406)
One of the more common friction points in robotics is just the number of different players. At least for now, there's no global standard between manufacturers, integrators, developers, and the people who use the robots in their warehouses. So think of a future where we have almost Linux-like acceptance: maybe not the only option, but something big enough and standardized enough that everyone can use it.

What else needs to get standardized before that?

Jan Liphardt (30:32.05)
Well, we're already seeing some of that standardization happen, in the sense that most sensors that you buy, cameras and lidars and stuff like that, already come with device drivers for either Linux or for Windows, for example. So that's certainly useful. In terms of other things that should be standardized,

I'm always extremely scared when people say, we need to standardize, we need a universal global standard for something or other, because that's typically very slow and time consuming. And my first intuition is that if the technology is done right, then we won't have to standardize so much. Let me give you an example. One of the biggest problems in deploying a humanoid into an American household is the front door.

Some people's front doors, you have to wiggle to get them to open. And sometimes you have to turn a handle and sometimes it's electronic and stuff like that. And so you could certainly say, to improve ease of deployment of humanoids into millions of American households, we should all standardize our front doors. Nah, that's going to take a long time and be super complicated, right? So the other answer is,

let's just make sure the humanoid robots have the dexterity in their hands to accommodate many, many different types of front doors. So in terms of the interaction of the humanoid with the physical world, one thing we've actually seen that's very useful for humanoids is the fact that more and more buildings are also accessible to people in wheelchairs

and people that may not have like full dexterity of their wrist or their fingers and so forth. And that's also very beneficial to humanoid robots because it eases a little bit the difficulties they still have with navigating the physical world. But my suspicion is that this is all gonna be about the software.

Jan Liphardt (32:52.646)
being really smart and nimble and being able to reconfigure itself and adapt to different hardware and different environments, as opposed to us trying to say, it would be awesome if everyone's front door key was the same, right? So it's all about making the software super nimble.

Prateek Joshi (33:10.68)
Right.

Prateek Joshi (33:14.702)
All right, amazing. And I agree, it's a phenomenal point. All right, I have one final question before we go to the rapid fire round. Now, with AI moving so fast, to you personally, as you build this out, which AI advancements are the most exciting to you?

Jan Liphardt (33:36.274)
Well, the most exciting thing to me is not any one specific thing. There's probably 50 amazing things, or 100. It's just how quickly things are moving. I know that when I wake up next week, there'll be another completely awesome thing. And then the week after, there'll be another completely awesome thing.

And not only that, the expectation is that the rate of improvement of these systems is not linear. It will accelerate. So we're looking at a situation where, like over the next year or two, it is highly likely that the models will keep getting better and they will keep getting better at a faster and faster pace.

So there isn't any one little breakthrough. It's almost like you've opened the door to a new world and you get to just be there as it happens. There isn't any one specific thing. It's really that this logjam has been cleared, and now there's this really rapid explosion of tech.

Prateek Joshi (35:00.492)
Right. With that, we're at the rapid fire round. I'll ask a series of questions and would love to hear your answers in 15 seconds or less. You ready? All right. Question number one. What's your favorite book?

Jan Liphardt (35:15.375)
My favorite book? It's probably The Little Prince.

Prateek Joshi (35:21.262)
Oh, yeah, I love that book. It's wonderful. All right, next question. Which historical figure do you admire the most and why?

Jan Liphardt (35:31.922)
Which public figure? Which historical figure? Oh, there are hundreds. So this is going to be really difficult to narrow down to just one, especially in 15 seconds.

Prateek Joshi (35:33.77)
A historical figure.

Prateek Joshi (35:55.592)
You can pick three, pick your top three if you like.

Jan Liphardt (35:58.13)
Well, so for example, I always thought that Lawrence of Arabia had a fascinating story and was also a very good writer. He had very interesting things to say. And T.E. Lawrence was incredibly determined among many other things. So that's just one figure that stands out. Of course, also Richard Feynman.

Prateek Joshi (36:11.928)
Yeah.

Jan Liphardt (36:26.594)
on the physics side and Albert Einstein, of course, who had this like amazing three or four year period where he almost completely changed how the world thought about gravity and many other things. So those are two people on the science side.

But then there have been so many other people in politics and literature and art and so forth who all deserve to be called out. So unfortunately, I can't give you one simple answer. Yes.

Prateek Joshi (36:57.88)
Right. All good. I am a huge fan of T.E. Lawrence, Lawrence of Arabia: the writings, the movie. I've watched that multiple times, and I've read through the Middle East theater of that era. Yes, that is a shared fascination, and I agree. All right, next one. What has been an important but overlooked AI trend in the last 12 months?

Jan Liphardt (37:29.264)
Important but overlooked? Well, it's the fact that large language models are awesome at controlling robots. And that means they're not just there to help my students with their homework. They can give these inert collections of motors and batteries and arms a brain, and all of a sudden,

OpenAI's o3-mini can explore the physical world and run around South Park and look around, and you can talk to it. So please wake up to that.

Prateek Joshi (38:10.83)
Amazing. Next question. What's the one thing about humanoids that most people don't get?

Jan Liphardt (38:19.654)
Well, most people haven't met a humanoid robot yet, definitely not in the US. There are other countries, like China, that are far more advanced when it comes to deploying robots at large scale. And here in the US, when Iris the humanoid walks across the park to go to the cafe to get us all lunch, people still stop their cars to take pictures.

So here in the US at least, we're a little bit behind when it comes to most people having experienced either a quadruped dog or a humanoid, but certainly in San Francisco we're trying to change that.

Prateek Joshi (39:06.84)
What separates great AI products from the merely good ones?

Jan Liphardt (39:12.434)
It combines great capability with reproducibility. That means that if I interact with the AI tomorrow and next week and the week after that, I have great confidence that I will keep getting good high quality answers.

Many of the models we deal with tend to be a little bit unpredictable. They do really well for a while and then all of a sudden something changes and then we're surprised. So it's combining amazing performance with predictability of behavior and outputs.

Prateek Joshi (39:58.38)
What have you changed your mind on recently?

Jan Liphardt (40:03.986)
I change my mind multiple times every week about lots of things. Let's not get into politics here; let's stay on the tech side.

Well, I used to be quite dismissive of the physical movement of robots. And that's why we've been putting all this work into conversation and thinking and decisions and things like that. But when I watch most people have their first experience with a robot, for example a robot dog, the first thing they always want is to play fetch with it.

They don't want to have the dog do their math homework or win the math Olympics or look at mammograms to find cancer. They want to play fetch with their robotic dog. And now I'm like, okay, if this is what everyone wants, we'll certainly add that capability. And so we've spent the last few weeks just making sure that now, when you're in the park with your robotic dog, you can actually just throw a ball

and it will go chase the ball, because that's kind of what people expect from a dog.

Prateek Joshi (41:28.78)
What's your wildest AI prediction for the next 12 months?

Jan Liphardt (41:34.539)
let's see. Well,

Just based on what I've seen in the last two years, I don't have to invent any like crazy wild prediction. For example, I was just listening to a presentation at the Stanford School of Medicine and the speaker was describing the performance of medical AIs compared to humans and across essentially all tested

categories, the AI alone outperformed all humans. They looked at AI plus experienced doctors, AI plus residents, AI plus fellows. And the main conclusion was that the AI alone without humans outperformed any combination of AI plus human. And

So just a fact like that is already wild. So I don't have to additionally make stuff up. It's wild to me that in 2025, if you want the best medical diagnosis, the literature is starting to point to go talk to an AI and...

just the AI. And as someone who's also part of the Stanford School of Medicine, that's just wild.

Prateek Joshi (43:15.374)
That's incredible. All right, final question. What's your number one advice to founders who are starting out today?

Jan Liphardt (43:25.094)
Well, first of all, it's an awesome time. There's a whole new world to be built. And if you don't do something now, you'll regret it when, in 10 years, someone asks, hey, what were you doing in 2025? And you say, well, I was looking around and then I decided to do this other thing, or you decided to sit it out. No, this is a really special moment with incredible opportunity.

So that's the good news. The bad news is that a startup is fundamentally irrational. It is an incredible amount of work. You're surrounded by many people who say, this is dumb or it's silly. It's too late. It's too early. It's impossible. It's too hard. And you somehow have to convince yourself, given all those inputs, that you still want to do it.

So that's really the hard part. But a startup in some ways is like art. If you ask a painter, how come you paint? There's never a rational answer like, this is an amazing nine-to-five day job. No, it's because they have very strong conviction around something and it just has to come out. And in my experience, a startup is a little bit like that.

When you have a particular fascination and the time is right, then it sort of just has to come out. And it's a really awesome time, so please, if you have any questions, feel free to just reach out or drop by or something like that. Always happy to get a coffee.

Prateek Joshi (45:17.294)
This is brilliant and thank you for offering that. And also, I really love this discussion. It's very enlightening and also I just love the depth of the conversation. So thank you again for coming onto the show and sharing your insights.

Jan Liphardt (45:32.896)
Well, your questions were very good.