Infinite Machine Learning: Artificial Intelligence | Startups | Technology

Underwater Robotics

December 04, 2023 Prateek Joshi

Max Grigorev has been building ML products for a long time. He has worked on underwater autonomy and has spent time at Google, Airbnb, and Teralytics.

In this episode, we cover a range of topics including:
- What is underwater autonomy
- State of play in underwater robotics
- Challenges of underwater operations
- Differences between underwater vs land robotics
- Unexplored opportunities in underwater autonomy
- Regulatory hurdles
- Future of underwater autonomy

Max's favorite books:
- Dune (Author: Frank Herbert)
- Pattern Recognition and Machine Learning (Author: Christopher Bishop)

--------
Where to find Prateek Joshi:

Newsletter: https://prateekjoshi.substack.com 
Website: https://prateekj.com 
LinkedIn: https://www.linkedin.com/in/prateek-joshi-91047b19 
Twitter: https://twitter.com/prateekvjoshi 

Prateek Joshi (00:01.298)
Max, thank you so much for joining me today.

Max (00:04.654)
Of course, it's a pleasure.

Prateek Joshi (00:06.802)
Let's start with defining the term underwater autonomy. What is it and why is it important?

Max (00:18.082)
So underwater autonomy: it's right there in the name. It's underwater, so we're talking about robots that have none of their parts above sea level, right? No antennas, nothing. You're completely submerged underwater. And autonomy is exactly that. There's actually an interesting detour here, which is why this matters. So underwater, the problem is that electromagnetic

radiation is very strongly attenuated. Normally, let's say, you can send an electromagnetic signal from your cell phone for miles, potentially, with a direct line of sight. You can be miles and miles away from your cell tower and your phone can still communicate, sending megabits of data to the tower.

Underwater, we're talking about meters, maybe tens of feet, but that's it. There are exceptions: the attenuation grows with the square of the frequency, which means you can send longer waves over longer distances, but you also need very strong power. So for instance, the U.S. Navy at some point was maintaining this gigantic antenna in Greenland that

allowed them to send very short messages very slowly, but to their whole Atlantic submarine fleet. But we're talking about megawatts of power and an antenna the size of hundreds of miles. And we're talking about a character every 20 seconds. So one byte every 20 seconds or something like that.
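To put rough numbers on the attenuation Max describes, here is a minimal Python sketch using the standard good-conductor skin-depth approximation, with an assumed seawater conductivity of about 4 S/m (the formula and constants are textbook values, not figures from the episode):

```python
# Rough illustration (not from the episode): how fast seawater kills EM signals.
# Good-conductor skin-depth approximation: delta = 1 / sqrt(pi * f * mu0 * sigma),
# with an assumed seawater conductivity of ~4 S/m.
import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability, H/m
SIGMA_SEAWATER = 4.0       # S/m, typical assumed value for seawater

def skin_depth_m(freq_hz: float) -> float:
    """Depth at which the field amplitude falls to 1/e of its surface value."""
    return 1.0 / math.sqrt(math.pi * freq_hz * MU0 * SIGMA_SEAWATER)

for label, f in [("ELF (~76 Hz, submarine broadcast)", 76),
                 ("VLF (~20 kHz)", 20e3),
                 ("GPS L1 (~1.575 GHz)", 1.575e9)]:
    print(f"{label}: skin depth ~ {skin_depth_m(f):.3f} m")
# ELF penetrates tens of meters; GPS-band signals are gone within millimeters,
# which is why a submerged robot cannot get a GPS fix.
```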

Prateek Joshi (02:00.434)
Hahaha

Max (02:11.05)
It's basically untenable to control or receive any information, even very basic telemetry, from a robot. And that's where autonomy comes in, right? You basically want to describe a mission to a robot, put it in the water, and then the robot goes off and does its own thing. You have no communication there. And eventually the robot surfaces and, let's say, uses

radio or potentially satellite communication, which has actually gotten way more affordable and very possible to use, to communicate "mission complete" or "mission failed". But when it's underwater, it's on its own. Before this, people were using these very expensive tethered robots to do stuff underwater where humans cannot go, or where it's too expensive to use humans.

But those come with a set of limitations. The tether is obviously just a piece of wire that potentially needs to go all the way down there. And the water in the ocean is pretty aggressive, actually, on any kind of electronics, on the wire itself, on the sheathing of the wire, which also needs to be bendable, right? It's dangling there while the robot is moving. So you have to be very careful about moving

alongside the robot so the wire doesn't get damaged. If you damage the tether, the robot is gone. You just lose potentially millions of dollars.

Prateek Joshi (03:44.274)
That's amazing. And the attenuation factor you mentioned, people don't think about it, but it's insane. The fundamental communication gets hampered as you go underwater, and then all these challenges with the size of the antenna, the speed at which you can communicate, all of that. All right. Now you briefly described

what works, what doesn't. Can you describe the state of play in underwater robotics? Meaning, what technologies are available? What companies are doing it? What are the use cases? And what sectors are using it, apart from the government, obviously?

Max (04:33.474)
So I propose we move this conversation forward in a slightly more structured way. Let me zoom out a little and describe the different aspects of robotics, and then we can zoom in on each one specifically and look at normal, surface robots versus underwater robots and what the challenges are in each. So robotics in general is the study of three things. One is perception.

So you have robotic perception. The second is planning. So you perceive things, you figure out your environment, you figure out your location, and then you plan: what am I going to do? And the last piece of the puzzle is motion. Once you have the plan, how do I actually execute it? So let's start with perception. Perception was very much an unsolved problem until about 10 years ago.

But neural networks absolutely blew up that area. There was a lot of very complicated mathematics, very complicated code that governed perception in robots. But ImageNet and all of the subsequent developments in that space absolutely changed the game. Suddenly, self-driving cars became way more possible, and it was mostly due to advances in perception.

One of the most important aspects of perception is self-localization. You need to figure out where you are with respect to the environment. You obviously also need to have a map of the environment; your localization doesn't make any sense without the context of a map, right? The most

popular robots these days, let's say, are self-driving cars. If you think about a self-driving car, you need to have a map of the environment, say the city, and you need to know, down to single inches, where you are in that city. Specifically: on the road, in this lane, this much distance from one side of the lane, this much distance from the other. So one challenge in self-driving cars is the precision of your localization.

Max (06:55.126)
The other challenge there is being able to tell, oh, there's a pedestrian crossing the street in front of me. You have to perceive the objects around you, classify them correctly, and predict how they're going to move. That is the other problem. Now, if we look at this problem in the context of underwater robotics, the equations change completely. The precision of your localization doesn't matter as much anymore, because the ocean is relatively large. And even

detection of objects away from the ocean floor matters less. Whales are rare; there are not many whales around. It's mostly unlikely that you're going to hit a whale, or a submarine, or maybe the hull of a boat. It's not as dense as city traffic, for instance, right? But the flip side of this, the problem

that is actually the largest problem in underwater robotics, is localization in general. If you think about it, the way self-driving cars localize is they get a rough location fix using GPS. So we're talking about

tens of feet, basically. Up to maybe 10 feet if you're lucky and you're using something called assisted GPS. But they need much more precise localization than that. So what most self-driving cars are doing these days is using something called a LiDAR. It's a laser radar, basically. It shoots laser beams in all directions and measures the time of flight of the reflected laser, so it knows roughly the surface of the space around it.

And this way, using something called an HD map, they can localize themselves very precisely on the road. If you think about the ocean, though, you don't have GPS, because GPS is electromagnetic waves. The penetration of GPS into the water is measured in feet, so you cannot really get a fix on GPS. The other problem is, yes, you could potentially use LiDAR there, but LiDAR,

Max (09:12.754)
at basically any wavelength, is also very strongly attenuated by the water. And the water is way denser than air, so you have a lot of dispersion. So it doesn't matter how powerful a LiDAR you have. First of all, there is nothing to scan around you; it's mostly water. And even at the bottom of the sea,

we don't have a map of that. We can talk about the applications of this technology later, but that's one of them: we don't have a map of the bottom of the sea. We just know nothing about it, right? And even where we do, much of the bottom of the ocean is featureless. It's basically a flat surface, and you cannot really localize yourself using that. So the way localization has been

done for the last 50, 60 years, let's say in large submarines, was twofold. First, you can have a barometric depth sensor, so you can tell how deep you are. That gives you one of the three axes, which is already pretty good. The other way people have been doing this is using large accelerometers.

Very large, very precise accelerometers, which tell you how fast you're accelerating in some direction. Once you have the acceleration, you can obviously figure out speed, and then you can use rough maps to more or less tell where you are. And of course, the problem with these inertial navigation systems is that there is an error, right? And the error accumulates.

The error compounds on the error; it's like compound interest. You have this compound error, so you have less and less precision around your position. You're losing precision constantly, and your potential location isn't a spot anymore. It grows into this Gaussian, basically, this Gaussian representation of where you could be.
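A minimal sketch of the compounding error Max describes, assuming a simple 1-D dead-reckoning model with a made-up accelerometer noise level; the numbers are illustrative only:

```python
# Minimal sketch (assumed noise figures, not from the episode): why pure
# inertial dead reckoning loses precision. Integrating accelerometer noise
# twice makes position error grow super-linearly with time.
import math
import random

def rms_position_error(t_seconds: float, accel_noise_std: float = 1e-3,
                       dt: float = 1.0, trials: int = 200) -> float:
    """RMS 1-D position error from white accelerometer noise (m/s^2)."""
    sq_errors = []
    for _ in range(trials):
        v = x = 0.0
        for _ in range(int(t_seconds / dt)):
            a = random.gauss(0.0, accel_noise_std)  # noise only; true accel = 0
            v += a * dt
            x += v * dt
        sq_errors.append(x * x)
    return math.sqrt(sum(sq_errors) / trials)

for t in (60, 600, 3600):
    print(f"after {t:>5} s submerged: RMS position error ~ {rms_position_error(t):7.1f} m")
# The spread of possible positions (the 'Gaussian' Max mentions) keeps widening
# until something external, like a GPS fix at the surface, collapses it.
```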

Max (11:34.374)
And there are obviously a lot of problems with that. So what submarines usually do is, first of all, they can surface and get a fix on GPS. That collapses the uncertainty back to a point. Then they go down again, use the inertial system for a while, come up again, get a fix, and so on, right? But obviously in some situations that's very undesirable. Let's say you're at the bottom of a three-kilometer-

deep ocean patch, right? Going up and down three kilometers every now and then, and we're talking about basically two miles, is a significant expenditure of energy and a significant expenditure of time. You might not be able to afford that, so you need to figure out something else. So basically, inertial navigation systems are there, but they're expensive and very large, and

the more precision you want, the more the size and cost grow; roughly, cost and size grow quadratically with the precision you eventually want to get. All right?
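The dive-and-surface cycle Max describes maps naturally onto a predict/update filter. A toy 1-D Kalman-style sketch, with all variances assumed purely for illustration:

```python
# Toy sketch (all numbers assumed): uncertainty grows while submerged, then a
# GPS fix at the surface collapses it back toward a point.
Q = 25.0   # assumed growth in position variance per minute submerged, m^2
R = 9.0    # assumed GPS fix variance at the surface, m^2 (about a 3 m std)

var = R    # start just after a surface fix
for minute in range(1, 31):
    var += Q                          # predict: drift accumulates underwater
    if minute % 10 == 0:              # surface every 10 minutes for a fix
        var = var * R / (var + R)     # Kalman update: variance collapses
    print(f"t={minute:2d} min  position std ~ {var ** 0.5:5.2f} m")
```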

Prateek Joshi (12:50.898)
Right, right. That's an amazing level of detail. And as I think about it: this is hard to do, it's obviously expensive, you need amazing hardware, you need a lot of energy, you need a lot of software. So I'm assuming governments

or government-funded agencies are the ones who can afford to do this. Is there any other sector where large companies are using this, or is it purely a government endeavor?

Max (13:24.926)
Oh man, whoever figures this out is going to be a billionaire, basically. We haven't finished with all the problems there; I've gone very deep into the biggest one, and the other one is communication. But basically, someone who figures out localization and/or communication underwater is going to be a billionaire, and there are obviously good reasons for that. The first one, you zeroed in exactly on it: it's defense. It's the government sector.

Prateek Joshi (13:28.274)
Yeah.

Max (13:52.002)
They really, really want that. And if you look at what is happening in Ukraine, in the Black Sea, in this war with Russia: all these drones are now a very important factor in the war, right? Basically remotely controlled suicide robots, or reconnaissance robots. And in surface warfare, it's pretty

obvious what's happening. It's these quadcopters that are remotely controlled; they just go and do stuff. Or even if they're autonomous, it's way easier to get autonomous flying robots to work. And if you look at what happened at sea, with the several Russian boats that were sunk, or the attack on the Crimean Bridge, it was actually surface autonomous drones. Maybe not completely autonomous, maybe they were radio controlled.

But primarily, they were all surface vehicles, right? Because first of all, your localization problem is more or less solved: you've got a GPS fix. If you want to hit an object that is hundreds of feet long, like a warship, where exactly are you going to hit it? It would be nice to know, but it doesn't really matter. On top of that, you have a rough fix to get to the position you need to reach, and then you have

visual cues: you can detect the shape of the ship, and using that shape you can direct yourself at a specific spot. So when you're on the surface, the problem becomes way simpler. You can even get telemetry, you can know what's happening, you can send corrective signals. So these robots are mostly underwater, but they have a piece that is above the surface, right?

And that allows you to maintain communication and a localization fix. So imagine you could move all of that underwater. Because surface vehicles have a huge problem: they can be spotted, obviously. When you see a shape moving towards you, you might even be able to get a radar fix on it. Of course, they can hide in the waves and all of this, but

Max (16:08.178)
they can be detected, and most of the time they are, and they're destroyed. But imagine now you can send one underwater. It's way harder. You can obviously use sonars, but those objects are very small and sonars are not very precise. So you get a huge benefit against your enemy, right, if you have a completely autonomous underwater bot that can go and do intelligence missions for you. The other huge application of this in defense is transportation.

Imagine you have a bunch of Marines that landed on an island. How do you supply them? In a war situation, where you have to move ships around and they can be subject to attacks from planes and submarines, supply missions are suicide missions very often. And war these days is mostly about logistics, right? How can you move

specific weapons and people to a specific place, how do you supply them? So underwater robots have a huge, huge application there. If you could do supply runs over stretches of water, that would be incredible. And I actually know of a startup that is trying to build exactly that; they've zeroed in on this specific application of underwater autonomy.

Prateek Joshi (17:29.298)
That's brilliant. I want to touch upon the two key areas you mentioned: if somebody solves localization and communication underwater, big, big opportunities. And it looks like governments, at least the US government, are becoming bigger and bigger buyers of modern technology, and defense tech is becoming increasingly important. So if you look at

Max (17:40.59)
Mm-hmm.

Prateek Joshi (17:59.09)
the two big areas, communication and localization: you mentioned that until 10 years ago, perception was a big challenge, but neural nets really accelerated it. So going forward, for us to solve localization and communication underwater, what

breakthroughs need to happen, in hardware, software, material science, energy, so that somebody can solve this someday?

Max (18:33.45)
Yeah, that's a very good question. So let's think about the ways you can actually do localization underwater. The first one is the way it's always been done: inertial systems. There are very precise inertial systems in submarines. They're not a complete solution, but they are pretty good. The problem with those, though, is

one system like that can weigh two tons and cost millions of dollars. At that point, you might as well build a submarine around it, with people inside; they'll still need it anyway. But then you're losing a lot of the benefits, where your underwater autonomous vehicle can be very small and can go on missions that would not be feasible with people on board.

A breakthrough can absolutely happen there. You could potentially start using ML to figure out the drift. The drift happens for several reasons: you might have a current, you might have miscalculated the drag on your hull, or there's just physical imprecision in your instruments. So imagine you use some kind of,

let's say, even a very simple mixture-of-Gaussians model on top of that sensor, right? You could potentially train a model, and you have a training loop: you go underwater, you navigate a bunch, you surface, get a location fix, and propagate that error back. But as with any robotics problem, and this is why ML hasn't solved robotics yet, in my opinion: we have a very, very expensive

Max (20:31.567)
training loop for collecting data. In environments where it's easy to collect data, ML has absolutely revolutionized things. Ads are all done using ML. NPC characters in games, the same.

When it's not a physical domain, it's been relatively easy to collect training data. Even look at what happened with ChatGPT, right? They RLHF'd that language model, and even though collecting the data was not really cheap, it was still cheaper. They were able to build enough of a dataset to fine-tune that model really, really well. But if you look at robotics, even with

simple robots, like robotic hands: Google actually tried to scale that, tried to build a bunch of hands that did something, then you collect the data and feed it back into the model. Say it's a simple manipulation task: take a cube, move it somewhere, let go of it. And it has to go precisely from one zone to the other, using only vision.

So Google was trying to build a very large-scale lab, basically hundreds of those hands doing things and learning using reinforcement learning. And basically, the experiment didn't go anywhere. A couple of papers were published, but there was no breakthrough. Imagine having to do the same with a submarine; it's going to be even more expensive. You have this very complex underwater machine that goes down, moves a bunch, surfaces.

And now you have one data point. You just have one data point after potentially minutes or maybe even hours of work. So that is a huge issue. At some point, there was a lot of excitement about sim2real transfer. The idea was that you build a very, very precise simulation of the environment,

Max (22:42.71)
simulate a bunch of actions there, and then transfer the results back into the real control systems, right? And there was a lot of work, in fact, at Google and at OpenAI at some point, where people were trying to simulate the behavior of these robotic systems and then transfer them into the real world. But in general, those efforts ended with no breakthrough.

Sim2real did not happen. A lot of people are still using simulations in the self-driving space, and in fact I even worked a little bit on that. Even there, there are a lot of challenges; it works better because a lot of effort was put into it. But simulators for underwater activity are going to be way, way harder,

because you're simulating water as the medium you're in, with all the currents, all the wave conditions; even different salinity will affect how things behave, right? In fact, the US government spends an incredible amount of money to build maps of salinity, temperatures, and currents inside the ocean, because it matters a lot for many, many applications, including inertial navigation.

So they actually drop these crazy probes with a lot of equipment off of planes. They're single-use, and they cost thousands of dollars each. But it's worth it. And obviously, they're not sharing that information.
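A hypothetical sketch of the training loop Max outlines, where each dive yields a single (conditions, observed drift) pair and a simple regressor learns a correction. The feature names, model choice, and synthetic data here are all illustrative assumptions, not any company's actual approach:

```python
# Hypothetical sketch: each dive gives one data point (INS-predicted surfacing
# position vs. the actual GPS fix), and a simple regressor learns to predict
# the drift from dive conditions. Everything below is an assumption.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# One row per dive: [minutes submerged, mean heading (rad), est. current (m/s)]
X = rng.uniform([10, 0, 0], [120, 2 * np.pi, 0.5], size=(40, 3))
# Observed drift (m) = GPS fix minus INS prediction; synthetic stand-in data,
# loosely proportional to time * current, plus noise.
y = X[:, 0] * X[:, 2] * 60 * 0.1 + rng.normal(0, 5, size=40)

model = Ridge(alpha=1.0).fit(X, y)   # one (dive -> drift) example per surfacing

# Before the next dive: predict the expected drift and pre-correct the INS estimate.
next_dive = np.array([[90, 1.0, 0.3]])
print(f"expected drift ~ {model.predict(next_dive)[0]:.1f} m")
```

The expensive part, as Max stresses, is not the model: it is that every row of `X` costs a full dive.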

Prateek Joshi (24:23.026)
Yeah, yeah. It's incredible how many different things have to come together to do something like this. So when it comes to the lack of data, as you mentioned, there just isn't enough data available for a machine learning system to build a reasonable model. So in that case, you try to build a simulator that can accurately simulate

Max (24:51.51)
Mm-hmm. Yep.

Prateek Joshi (24:52.69)
the environment, and then you can hopefully do something. So in that case, is the future physics-based simulation, with explorations and advancements in that domain leading to a big breakthrough? Because clearly we aren't able to collect enough data to make a meaningful amount of progress here. I guess what I'm getting at is: over the next five years, what does this domain look like?

Max (25:23.03)
We haven't touched on one potential avenue for localization, and that's visual localization. That is where we have the least progress, and probably the most low-hanging fruit, because people have been looking at the other domains for a while. Take inertial navigation: people have been doing that since the Second World War. It's been 70 or 80 years now, and we've accumulated a lot of ideas around how to do it.

But visual navigation, visual localization: no one has really done it properly. There were no real resources put into it. In some narrow domains there have been very good results, but the general problem is really hard to solve, because if you're in the bulk of the water, even just, say, 20 or 30 meters

below the surface, there are usually no visual cues for you. But imagine instead of staying near the surface, you drop all the way down. It's relatively cheap, let's say, to drop all the way to the bottom of the sea, navigate using that, and then surface when you think you've arrived. Say you have a rough map of the territory; it could potentially be feasible. And I was working for a company that was exploring exactly that

direction. We forwent inertial navigation systems. We did have GPS, so we get a lock, then drop down as fast as possible; that alleviates issues with surface currents and the waves that move you around as well. And you go down, and the water down there, miles down or even just hundreds of feet down, is way calmer, obviously, right?

Once you've reached the bottom of the sea, you can try to estimate your speed using visual models. You can potentially get a lock on some kind of texture, things like that. And hopefully your model is good enough, and you can use reinforcement learning models on top of this: you surface, try to update your understanding of how fast you moved,

Max (27:46.25)
or maybe retrain the models completely. You obviously will need to collect data as well, but no one has really tried it, and look, we don't know how much data you'll need. It may be a very, very interesting direction to move towards. But I touched on something very important: in order for this to work, we need to have an idea of what the bottom of the sea looks like, so you can use key points,

like, oh, this hill is that hill on the map, this crevice underwater is that crevice on the map. That would simplify your problem a lot. However, it turns out we don't actually have a map of the ocean. Only about five percent of the ocean floor is mapped to any precision. We're talking pixels the size of kilometers. We don't even know the depths at

a resolution of one kilometer by one kilometer, so about 0.6 miles by 0.6 miles, for most of the ocean. So if you think about it, the bottom of the ocean is mapped way worse than, say, the surface of Mars, because we have very, very precise maps of Mars. In a way, that's absolutely insane. 80% of our planet, or 70%, I forget exactly how much, but a significant part of our planet,

Prateek Joshi (28:57.874)
Hahaha

Max (29:11.418)
the majority of our planet, is covered by water, by this world ocean, and we know less about it than about neighboring planets. If you think about it, it's absolutely insane. And that's why this technology is so important. And as we exploit, unfortunately, the surface of the planet more and more, as

mineral mining or farming takes more and more of the resources of the surface of the planet, we will necessarily, before we can harvest asteroids, trust me, have to go down to the bottom of the sea for those things: farming for food, natural resource extraction, maybe even transportation. That's how we transport our bits already, if you think about it.

There are all these underwater sea cables, which, by the way, need servicing, which is very expensive right now, because you have to use people in scuba gear, or ships. Say there is a cut in the cable: you have to get the ship to the cut, which you don't even know the location of, and then raise that cable up somehow. They have very, very elaborate ways to do that, by the way, and then repair that cut. But imagine you could get an autonomous

robot to follow the cable, find the cut, repair it, done. That would be very, very advantageous compared to what we have right now.
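Returning to the seafloor visual-odometry idea Max described a moment ago, here is a hedged sketch of estimating ground-relative velocity by registering consecutive downward-looking frames, using OpenCV phase correlation. The camera parameters and frame interval are placeholder assumptions:

```python
# Sketch of the seafloor visual-odometry idea: estimate horizontal motion by
# registering consecutive downward-looking frames against seafloor texture.
# Camera parameters below are assumed placeholders, not real vehicle specs.
import numpy as np
import cv2

FOV_RAD = np.deg2rad(60)   # assumed horizontal field of view
WIDTH_PX = 640             # assumed image width
DT = 0.5                   # assumed seconds between frames

def velocity_mps(prev_gray: np.ndarray, curr_gray: np.ndarray,
                 altitude_m: float) -> tuple[float, float]:
    """Ground-relative (vx, vy) from the pixel shift between two frames."""
    (dx_px, dy_px), _response = cv2.phaseCorrelate(
        np.float32(prev_gray), np.float32(curr_gray))
    # Ground meters per pixel at this altitude (pinhole approximation).
    m_per_px = 2 * altitude_m * np.tan(FOV_RAD / 2) / WIDTH_PX
    return dx_px * m_per_px / DT, dy_px * m_per_px / DT

# Synthetic smoke test: shift a random 'seafloor texture' by 3 px in x.
floor = np.random.rand(480, 640).astype(np.float32)
shifted = np.roll(floor, 3, axis=1)
print(velocity_mps(floor, shifted, altitude_m=5.0))
# Magnitude about 0.05 m/s in x; the sign depends on OpenCV's shift convention.
```

This only works where the bottom has trackable texture, which is exactly the "featureless seafloor" limitation Max raised earlier.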

Prateek Joshi (30:42.066)
Amazing. I have one final question before we go to the rapid-fire round, and it's about the potential opportunities for a young company. Let's say there's a company, a few people, and they love robotics, they love underwater, and they want to build useful stuff for the government. In this case, what is the government willing to buy from companies?

Max (30:47.182)
Mm-hmm.

Prateek Joshi (31:10.418)
Given the practical realities: you cannot just build a submarine, because it's very expensive; a young company won't be able to do that. So what opportunities exist for young companies to build useful stuff for the government, considering both what startups can afford and what the government is willing to buy?

Max (31:33.042)
Right. So in my opinion, the way to go right now, until there are significant breakthroughs, is not to solve the general problem, where you have a drone that goes and does any mission. I think the right approach right now is to focus on a specific sector, a very specific problem, and see if it's feasible to solve it with the current level of technology. Let's say we're talking about shallow water that is relatively well mapped.

There, visual navigation might be a thing, right? You can absolutely try to do visual navigation, and there are a lot of applications. For example, there is this seepage of natural gas in the Gulf of Mexico. The gas companies would love to capture it, but right now the expensive part is finding the holes the gas is seeping from. Imagine that instead of it just venting into the atmosphere,

you put a half dome over it and just capture it. You're done. You're collecting gas without any drilling, and you're actually improving the environment by doing it. Right now, the problem is locating those seepage points; it's very hard to do. But you have a pretty good map of the seafloor, and you can visually detect the bubbles. There you go: it's very possible using current technology, and the application is large enough. Someone should absolutely go and do that

instead of trying to solve the general problem, or this transport submarine idea. It makes total sense: you can narrow your problem and solve way, way simpler problems than general underwater autonomy. Yep.
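As a concrete example of the kind of narrow, tractable pipeline Max is pointing at, here is a hedged sketch of flagging candidate seep sites by detecting moving bubble plumes with background subtraction. All thresholds are assumptions, and this is not any gas company's actual method:

```python
# Hedged sketch (assumed thresholds, not a real pipeline): flag candidate
# gas-seep sites by detecting persistent rising motion (bubble plumes) in a
# camera feed over a well-mapped, shallow seafloor.
import cv2

# MOG2 background subtraction separates moving bubbles from the static seafloor.
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

def plume_candidates(frame_bgr, min_area_px: int = 30):
    """Return bounding boxes of moving blobs; all thresholds are assumptions."""
    mask = subtractor.apply(frame_bgr)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3)))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area_px]

# Usage: feed frames from the vehicle's camera; boxes that persist across many
# frames and drift upward are candidate seep locations to log with a position.
```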

Prateek Joshi (33:14.738)
Right. Amazing. With that, we're at the rapid fire round. I'll ask a series of questions and would love to hear your answers in 15 seconds or less. You ready?

Max (33:20.054)
All right.

Max (33:26.395)
I'll attempt it. I'm not sure I'm a fast enough thinker for that.

Prateek Joshi (33:27.378)
Hahaha

Prateek Joshi (33:32.018)
Alright, question number one. What's your favorite book?

Max (33:36.922)
I don't know, you can't really have a favorite book, right? But if you put me against the wall: my favorite fiction book is Dune by Herbert. Amazing universe, amazing characters, and the development is just beautiful. And my favorite scientific book probably has to be Bishop's machine learning book. It's such a classic. It has incredible depth and incredible breadth.

It's very readable. I love it.

Prateek Joshi (34:07.89)
Yeah, I'm a huge fan of Dune, and everything about the book, the character build-up, the environment, the sheer scale at which he's written it, is amazing. All right, next question. What has been an important but overlooked AI trend in the last 12 months?

Max (34:28.29)
That's a good one.

Max (34:33.814)
Well, I don't know if it's overlooked, but what I think is happening in AI is hype. It is definitely a huge, huge hype period, and I think that has been overlooked. People are too excited about applying generative models everywhere, right? And that has happened before: three years ago, people were using crypto for anything and everything. And I'm excited about this. I've been trying to push

large language models since 2017, and people were looking at me like I'm a crazy person. Now I love that they're getting the attention they're getting. But it's too much; people are trying to solve everything with them. I'm consulting right now, and I get all these requests that don't make any sense. Guys, just cool down. I think the trend is the hype. Everything is so overblown right now, and we need to cool down a little bit.

Prateek Joshi (35:31.698)
There are so many things about underwater autonomy that most people don't get. But what's your favorite thing that most people just don't get about this topic?

Max (35:44.874)
My favorite thing is that, coming back to the previous point, we can get to actually know our planet. If we solve this, we can learn so much more about the place where we live, and the only place where we can live, right? The solution to climate problems, in my opinion, lies in the ocean. We can feed more people. It's a frontier.

The final frontier is not space. The final frontier for us right now is the oceans. We need to go and study it.

Prateek Joshi (36:20.978)
Yeah, that's amazing. People think that, oh, we live in such an advanced age of technology, obviously we would have mapped out the planet inside and out, but it turns out we haven't mapped a very, very large chunk of the ocean. We don't even know what's there. It's amazing. All right, next question. What separates great AI products from the good ones?

Max (36:45.762)
Continuous improvement. Unfortunately, compared to normal features in products, or products themselves, you're never really done. You're never really done with any product, but with normal software development you can have a feature set, build it, and be finished. With AI products, you are never done. There is always this extra percent: you can never be 100% precise, you can never have 100% recall. There's always something you can improve.

The good products understand that, and they build this cycle of improvement into the product itself, into the development process, into the way the company is built.

Prateek Joshi (37:29.33)
Right. As a machine learning technologist, what have you changed your mind on recently?

Max (37:42.59)
I've changed my mind on the idea that you need a huge amount of resources to do anything. That had been the prevailing understanding of the situation. But with Llama being released, I have this machine with three GPUs in it, and I can take a pretty large model and fine-tune it myself, at home, on a single machine,

to a good extent, right? It's pretty impressive where we've gotten; you can get very good performance, while just a couple of years ago, to run GPT-2 or GPT-3, you needed a huge cluster and a team of engineers. So there is this idea that to win the race right now, you need an incredible amount of resources. To a very large extent, that's true. But if you're not trying to win the race, you can do a lot with very limited resources.

And I've absolutely changed my mind on that.

Prateek Joshi (38:42.162)
What's your biggest AI prediction for the next 12 months?

Max (38:50.734)
My prediction is that nothing will really change in the next 12 months. We're going to stay in this state of companies trying to build larger, more performant models, but there's not going to be a quantum breakthrough. There are rumors of this and that, but I predict that nothing will really change. And I might be completely wrong, but people overestimate the probability of

low probability events and underestimate the probability of high probability events. So you're always going to be safe predicting that nothing drastic will happen.

Prateek Joshi (39:31.89)
Right. I think that should be on a poster, or a t-shirt. But it's a fun thing to think about. All right, final question. What's your number one advice to machine learning engineers who are starting out today?

Max (39:36.581)
Yeah.

Max (39:50.862)
Study the basics. Start from the ground up. Don't assume there are going to be layers that conceal the complexity from you. If you do not understand a piece of technology, you don't really own it; you can't really wield it correctly. So start from the ground up: learn algebra, learn probability theory, learn calculus, then proceed to actually learning the technology, and from the ground up as well. Learn how

basic neural nets work, then go on to learning about CNNs and transformers, and then go ahead and try to understand the dynamics of larger models. There is a very strong tendency for younger people, and I was one of them at some point, to jump to using high-level tools immediately. But I think starting from the ground up is good advice.

Prateek Joshi (40:49.49)
Amazing. Max, this has been a phenomenal episode. I've done many, many episodes, and this is the first time I got to discuss underwater robotics. It was really fun; thank you so much for indulging me. I really enjoyed the discussion, so thank you so much for coming on the show.

Max (41:09.314)
Thank you, Prateek. Thank you for inviting me.

Prateek Joshi (41:12.37)
Perfect.