GOTO - Today, Tomorrow and the Future

Machine Learning for Autonomous Vehicles • Oscar Beijbom & Prayson Daniel

November 11, 2022 Oscar Beijbom, Prayson Daniel & GOTO Season 2 Episode 44

This interview was recorded for GOTO Unscripted.
gotopia.tech


Oscar Beijbom - Co-Founder at Nyckel
Prayson Daniel - Principal Data Scientist at NTT DATA

DESCRIPTION
Self-driving vehicles have been a hot topic for a while now, and everyone is waiting for the next breakthrough. Prayson Daniel, principal data scientist at NTT DATA, and Oscar Beijbom, co-founder at Nyckel, put their heads together to review what kind of machine learning and data is needed to run an autonomous vehicle. They also talked about topics such as security, language choices, and when the time has come to deploy a model.

RECOMMENDED BOOKS
Phil Winder • Reinforcement Learning
Kelleher & Tierney • Data Science (The MIT Press Essential Knowledge series)
Lakshmanan, Robinson & Munn • Machine Learning Design Patterns
Lakshmanan, Görner & Gillard • Practical Machine Learning for Computer Vision
Aurélien Géron • Hands-On Machine Learning with Scikit-Learn, Keras & TensorFlow


Looking for a unique learning experience?
Attend the next GOTO conference near you! Get your ticket: gotopia.tech

SUBSCRIBE TO OUR YOUTUBE CHANNEL - new videos posted daily!

Intro

Prayson Daniel: Hello, welcome to this "GOTO Unscripted" episode with Oscar Beijbom. Today we have something awesome to cook, and who better to have than Oscar himself? So, Oscar, let's begin by introducing yourself. Who are you? What are you up to?

Oscar Beijbom: Cool. Thanks for having me, Prayson. I'm excited to be here. My name is Oscar Beijbom, I'm currently co-founder of Nyckel; I'm even wearing the T-shirt for those watching this on video. I've done machine learning, applied machine learning, for the last 20 years. My previous job was in the self-driving car industry, where I ran a pretty large team, I'd say about 100 engineers, doing the full-stack work of delivering an AI that can drive a car.

So infrastructure, all the data mining, all the data selection, the actual modeling teams that try to figure out which networks to use, and the team devoted to metrics and performance analytics. Yeah, so those are my last two jobs.

Prayson Daniel: Okay.

Oscar Beijbom: I can go further back if you want. I don't want to bore your audience.

What kickstarted the self-driving vehicles trend

Prayson Daniel: Definitely. This is one of the subjects I'm excited to hear about: autonomous cars, self-driving cars. Just thinking about it, it's a bit crazy, right? We started with a horse pulling a carriage, then we removed the horse and had a horseless carriage. And now it seems we're going back to the horse again, but now the horse is somehow a robot. So, Oscar, if we start toying with this idea, how do we even begin? Where do we even start talking about self-driving cars?

Oscar Beijbom: Do you mean the origin or the technology or...?

Prayson Daniel: Both. If we start from scratch, how do we even come up with an idea of collecting data, or think about how everything should be, all the components that take us to the first step of building a self-driving vehicle?

Oscar Beijbom: I see. It's one of those things... I mean, the most recent boom, or push, to build a self-driving car started with the DARPA Grand Challenge about 10 years ago. It was a U.S. government competition where a bunch of top-tier universities competed in putting together a car that was supposed to drive mainly in an unknown area.

So that showed a lot of promise, and people started feeling, okay, the technology might be ready for this. The big teams were CMU, Stanford, and MIT. They were all what I would call "classic roboticists." Of course, they don't think of themselves as classic; they think of themselves as cutting-edge, the latest advances in probabilistic robotics, right? But it's still a worldview where it's more or less a programmed, hand-engineered system: a robotic system that perceives the world, reasons about it in a probabilistic way, and then makes the right moves.

That's what most of the companies were like, including the one I joined when I finished my research. It was called nuTonomy, an MIT-based company. So it was a robotics-based company, and there was no machine learning at all in the pipeline, right? And what typically happens is that the perception system is where machine learning gets its foothold. The main sensor during the DARPA competition was a LiDAR, so you just get this point cloud back, and it's an amazing sensor, right?

You get a 3D representation of the world around the vehicle. And by doing stuff like background subtraction, you can sort of figure out which points are "foreground" and are moving around in the world, you can start clustering them, and you can write a step-by-step, pretty reasonable perception system based on that. But there are, obviously, edge cases where this won't work. A pedestrian standing next to a pole, for example, looks very similar to the pole on a LiDAR. But you need to treat a pedestrian very differently than a pole, because even if they're standing still, they may start walking at any time.

Once the companies started doing this seriously, they realized they also need cameras and computer vision. And once you have cameras, you have no choice but to use machine learning, right? So that's where it started. I was originally hired just as a machine learning research engineer to help bring that up.

Now, finally, to your question, you're saying, in this world now that we're doing machine learning, what are the pieces? I think that's what you were getting to.

Machine learning for self-driving vehicles

Prayson Daniel: Yes. Because it's one thing collecting data, and another thing telling the machine, this is a pedestrian, or this is the road, these kinds of things, right? And we have all these things in play: incoming data, analyzing the data, and then making the decision. So all these components need to work together, and not just loosely; they need to work together in a very rapid way. So there's computation and speed that one has to take into consideration.

Oscar Beijbom: Yes. There's almost this false promise of a wonderful world where you "just add data," and then it just works. You don't have to program anything, it improves itself, and so on. But what ends up happening is that the amount of effort you need to put into what I call the data engine is immense. Because you're compiling data into code, right? And if your data has issues, your code will have issues. So you end up with this endless cycle of trying your model on new data and seeing where the issues are. And that's not easy.

It's not easy. I mean, if you're doing it manually, with a human in the loop, it's easy, because you can just see what mistakes are being made. But if you're doing it automated, on aggregate, it's not that easy to even write down specifically what you should be looking for as errors. And especially if you're thinking about a self-driving car, certain errors are much worse than other errors.

If I see a bird in the sky, and I don't detect the bird in the sky, that doesn't matter. But if I don't detect a dog in front of me on the road, that's bad. So basically, you need to develop this data engine, this cycle of identifying issues and retraining. That's just one of the cycles you need. The other cycle is, as you mentioned, that you want to iterate quickly. What does it mean to iterate quickly? Well, you typically need to train models very quickly, because if you're not careful, retraining could take a couple of days. Then you're in a cycle where you're only improving your system once or twice a week, and that's obviously not where you want to be.
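
The severity point above can be made concrete with a toy triage step from such a data engine: rank the model's misses by how much they matter before sending them for labeling and retraining. This is purely illustrative; the class names and severity weights are invented, not anything from Oscar's actual pipeline.

```python
# Toy sketch of severity-weighted error triage for a "data engine".
# Class names and severity weights are invented for illustration.
SEVERITY = {"bird": 0.0, "pole": 0.2, "dog": 0.9, "pedestrian": 1.0}

def triage_misses(misses, top_k=2):
    """Rank missed detections by how much the miss matters downstream."""
    ranked = sorted(misses, key=lambda m: SEVERITY.get(m["class"], 0.5), reverse=True)
    return ranked[:top_k]

misses = [
    {"frame": 101, "class": "bird"},
    {"frame": 102, "class": "dog"},
    {"frame": 103, "class": "pedestrian"},
]
print([m["class"] for m in triage_misses(misses)])  # → ['pedestrian', 'dog']
```

The missed bird is dropped entirely, while the missed pedestrian and dog go to the front of the labeling queue.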

So suddenly you're in the business of building up a highly scalable GPU cluster. Most of the toolboxes have very good support for multi-GPU on a single node. But now you go beyond that, and you have multiple nodes. Now you have to figure out how to accumulate the gradients, and how to set your learning rate and all those other parameters so it all makes sense. So that's a lot.
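
The gradient accumulation Oscar mentions rests on a simple identity: with equal-sized micro-batches, averaging the per-micro-batch gradients reproduces the full-batch gradient, which is why work can be split across GPUs or nodes. A minimal sketch, using a one-parameter least-squares model rather than a real network:

```python
# Toy illustration of gradient accumulation across micro-batches: averaging
# the gradients of equal-sized micro-batches equals the full-batch gradient.
def grad(w, batch):
    # Gradient of mean squared error for the model y = w * x, over the batch.
    return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 5.0), (4.0, 9.0)]
w = 0.5

full = grad(w, data)
micro = [data[:2], data[2:]]                     # e.g. one micro-batch per GPU
accumulated = sum(grad(w, b) for b in micro) / len(micro)
print(abs(full - accumulated) < 1e-9)            # True: identical update direction
```

In real multi-node training the same averaging happens via an all-reduce over workers, and the learning rate is typically re-tuned (often scaled) for the larger effective batch size.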

Gathering data for training ML for autonomous cars

Prayson Daniel: Now I'm going to rewind to machine learning and data collection, first of all. When we usually build machine learning, classically, we just collect some examples from which the algorithm has to learn. So when it comes to self-driving, how do you go about that? Do you just put drivers into a car, and they just have to go through the streets, and that's how we gather the data, including the decisions to stop and look around? How does this play in?

Oscar Beijbom: Yes, that's a good question. Typically it's a bit of a bootstrapping thing, where you start with fully manual drives to get just nominal, normal data. Then you train the first model, and you sort of bring up the self-driving capability in that area. You can then begin a long cycle, which could take several years, where the car is driving autonomously but a safety driver is in the car, ready to take over. A lot of the time you could have errors; like I said, it depends on how severe the error is.

So you might not detect a pedestrian, but the pedestrian maybe never interferes with the trajectory of the vehicle. So it doesn't "matter," or it doesn't result in a takeover, right? If the AV, the autonomous vehicle, makes a bad enough mistake, the human will take over. And that's a very good signal that something is wrong here. You can collect data around that timestamp.
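
Collecting "data around that timestamp" can be pictured as slicing a window out of the drive log around the takeover event. A hypothetical sketch; the field names and window sizes are invented:

```python
# Hypothetical sketch: pull a window of logged frames around a safety-driver
# takeover, the "very good signal" described above. Field names are invented.
def frames_around(log, takeover_t, before=2.0, after=1.0):
    """Return all frames within [takeover_t - before, takeover_t + after]."""
    return [f for f in log if takeover_t - before <= f["t"] <= takeover_t + after]

# A fake drive log: one frame every 0.5 seconds.
log = [{"t": i * 0.5, "frame": i} for i in range(20)]
window = frames_around(log, takeover_t=4.0)
print(len(window))  # → 7 frames around the event, ready for labeling
```

In practice the window would bundle camera, LiDAR, and vehicle-state streams, but the idea is the same: the takeover timestamp indexes the interesting data.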

But there are other issues that you want to capture, just to be more efficient: cases where the AV didn't stop, but something else in a subcomponent didn't work.

Real-world data collection for machine learning

Prayson Daniel: Yes. Here is another question. We know that in machine learning, when we get the data and everything, there are these boundaries within which we can collect our data. But here we are dealing with the real world, where one does not know that the next day there will be something like a marathon, with a lot of people running. And during our training, we never considered, "Oh, there will be this event in town with so many people around, etc., etc."

So what do we do about all these things? Do you usually go into simulations of different scenarios to make sure that you capture as much of the real world as possible before putting the car out there?

Oscar Beijbom: That's a great question. I should say that the companies I worked for were nuTonomy and then Motional. At Motional, there was this very gradual rollout that I thought was compelling. It's not like you build the system and then you just release it. What's going to happen, and what Cruise is doing now, I'm sure, even though I don't work at Cruise... Cruise, for the listeners, is another AV company that recently, just a few days ago, got approval to run fully autonomous robotaxis in San Francisco. So they will be the first company, with paying customers, I should clarify.

As far as I know, that's the first. So it's not like you go from zero to fully autonomous. You go from zero to having, basically, one remote operator per vehicle. Of course, that remote operation is not going to be real-time, because of the latency of the telecommunication, but you have a very tight loop with a human who observes everything that's going on.

What you can do then is smoothly go and say, "Okay, one person is now the remote operator for two or three vehicles." So you can scale back the number of people you need.
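
The staffing roll-back described above is simple arithmetic: as one operator covers more vehicles, headcount drops by the ceiling of the ratio. A back-of-the-envelope sketch with invented fleet sizes:

```python
# Back-of-the-envelope for the remote-operator roll-back described above.
import math

def operators_needed(vehicles, vehicles_per_operator):
    """Headcount required when one operator covers several vehicles."""
    return math.ceil(vehicles / vehicles_per_operator)

# A hypothetical 10-vehicle fleet at 1:1, then at 1:3 coverage.
print(operators_needed(10, 1), operators_needed(10, 3))  # → 10 4
```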

Now, to your point, if there are these events that happen, I think initially they will just not drive that day. They will literally be talking to the city to find out when these big events are happening, and just not drive. Or, if they didn't hear about it from the city and they start seeing it in their sensor data, because they have cameras, they will just say, "Okay, we'll close it down."

So from a safety perspective, from a deployment perspective, that's what you do to not get into trouble when that happens. Does that make sense? But then I think what you were asking was: how do we learn from that? How do we, in the end, solve that problem? I think you're right. I don't like using simulated data to test. I always want my test set to be real data.

So what I would do if I was at Cruise and there was this demonstration is drive next to it with a fully manual vehicle and just collect the data. I would just collect the data and add it to my test and validation sets. Then, of course, when you train, as you know, as your listeners know, I'm sure, you need a diverse set of data and a lot of data. So then you might need to go to simulation to get what you need, but you're still evaluating on real data.

Prayson Daniel: Okay. It's because, in my train of thought, I was thinking that when it comes to an autonomous car, I want to be at the airport and just call my car to come pick me up, right? In that case, there is no human in the car in the first place. I am calling my car: "Hey, I'm landing at this time, can you come to the basement at this airport at this time?" We're not even at that level yet. Right now we're just thinking of autonomous cars, but not cars that are left free to drive by themselves.

Oscar Beijbom: That's a big controversy, right? Tesla is trying to build what you're describing, and they're taking on a very, very difficult problem. They're building a car and just giving it out to the public, and they can't control it. I mean, they have ways to say, "Okay, we've disengaged; the autopilot is only available in certain areas and under certain circumstances." But still, it's this open-world problem where they just release these vehicles. And I think that's the controversy around Tesla.

Now, I think the jury's still out on whether or not they are saving lives. I suspect they are, because they avoid a lot of issues with fatigue and so on. But there have been real accidents with Tesla, and people have died because of their deployments. So that's the business case of Tesla, which is different from Motional, nuTonomy, Cruise, and Waymo, where there's a robotaxi service: you don't call your car, you call a robotaxi, and that fleet is monitored very closely.

Prayson Daniel: Okay. I know I'm bombarding you with so many questions, but I think that's my role here, to grill you a bit.

Oscar Beijbom: No, no, it's good though. I love talking about this.

Are we ready for autonomous cars?

Prayson Daniel: Now comes another heavy question. When we talk about autonomous cars, do you think we are ready for this kind of technology? Do you think our cities, and people in general, are ready to allow autonomous cars to start roaming our streets?

Oscar Beijbom: I don't know. There are so many ways it can play out. The utopia that I follow, the reason I signed on to this, is this utopian vision where, essentially, people will not own cars anymore. The taxis will become so cheap that car ownership goes down, and essentially there's this shared fleet of transportation, right?

I call it a utopia because of what happens then in that world. I don't know how much time you've spent in the United States or California, but the cities are very dominated by cars; not just by the cars and the roads, but by the parking lots. There are building codes that say X square feet of indoor space should have X square feet of parking spots. So the cities become very sparse and uninteresting. There's a world where, if you can reclaim all those parking spots, you can have denser, more interesting, more walkable cities, right?

And in addition, people don't need to own cars. So it'll be cheaper for people to get around, and there'll be fewer carbon emissions, in general, because you just need to produce fewer vehicles. So that's the utopian vision that I think is very interesting.

But there's a dystopian vision as well, that goes like this: if the cars drive themselves, you can sleep in the car. Everyone still buys a car, everyone still buys as many cars, because they like having their cars. But now you can sleep in the car, and that changes your math on how far you're willing to commute.

You can be commuting for two hours, even three hours. Say you get up at 4:00 in the morning, or you even go to sleep in your car. Then at 3:00 a.m. the car just starts driving you to work, and you wake up at 8:00 a.m. in front of the office. Of course, it allows people to live in the countryside and outside of urban areas, but it's an immense waste of energy. That energy could come from green sources, but it's still kind of a crazy thought. And I don't think the cities are ready for that at all, because you'd have so much more traffic, and the congestion would be immense.

Prayson Daniel: Do you think the adoption of autonomous cars will depend on massive uptake, like every one of us joining this kind of journey, so that we only have robots driving against robots?

Oscar Beijbom: Certainly, the way the technology is being developed assumes human drivers, with all that complexity. So I think the question is: will it lead to less car ownership and more sharing, or not? And I think it's very hard to know which way it's going to go.

Prayson Daniel: Yes. Nice.

Oscar Beijbom: And the cities... I mean, I can't speak to the cities in Europe, but in the U.S., the government is very reactionary. They don't plan, they don't think ahead or have big visions, at least not at the city and state level. It's just kind of letting private enterprise run free, and then they react to whatever happens, like with the scooters. Suddenly there are scooters everywhere. I think that happened in Sweden as well, when I go to visit, and the cities just have to react to it as best as they can.

A day in the life of a software developer building autonomous cars

Prayson Daniel: Well, Oscar, let's go back to the basics, so back to what we do most. What's your role as a developer when it comes to all this? So what does your day-to-day look like when it comes to building autonomous cars?

Oscar Beijbom: That's a good question. I joined as an individual contributor, and then I ran a small team of, you know, 5 to 10. Then suddenly I ran three of those teams, and then suddenly I ended up with 100 people or so. Obviously, my day-to-day changed a lot during that time. Maybe most of your listeners are engineers or aspiring engineers, so let me address one myth about applying for a machine learning engineering job. At least in the U.S., machine learning is extremely popular at universities, and a lot of people focus on it because it's seen as the next hot thing.

The expectation is set by Kaggle, or any of these datasets like ImageNet: the data is given to you, and the metric is given to you. For ImageNet, it's the top-1 accuracy or the top-5 accuracy, and the data is fixed. Now you're just working on the algorithm, right? So you have this nice playground where all you're doing is tweaking the hyperparameters of your neural network, applying it, trying new learning strategies, whatever, right? Sort of "core machine learning."
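
The ImageNet-style metric mentioned above fits in a few lines, which is part of why it makes such a comfortable playground. A generic sketch, not tied to any particular framework:

```python
# Minimal top-k accuracy, the fixed ImageNet-style metric described above.
def top_k_accuracy(scores, labels, k=5):
    """Fraction of examples whose true label is among the k highest scores."""
    hits = 0
    for row, label in zip(scores, labels):
        topk = sorted(range(len(row)), key=lambda i: row[i], reverse=True)[:k]
        hits += label in topk
    return hits / len(labels)

# Two toy examples over three classes.
scores = [[0.1, 0.7, 0.2], [0.5, 0.2, 0.3]]
labels = [1, 2]
print(top_k_accuracy(scores, labels, k=1))  # → 0.5 (second example missed)
print(top_k_accuracy(scores, labels, k=2))  # → 1.0 (both recovered at k=2)
```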

But what happens in practice is that, first of all, the data is not fixed. It's up to you to find your data, and to find good data. And the objective function is also not fixed. It's very, very difficult for an AV to figure out what the objective function is. Obviously, at the system level, it's pretty straightforward in the sense that you want as few accidents as possible. You want to go from point A to point B as fast as possible. And there's probably something around comfort as well, so you don't get seasick. So it's those three components of the system-level metric.

But imagine now you're a computer vision engineer working on pedestrian detection. If you go to the computer vision textbooks, they would say, "Okay, if you're doing detection, you should use mean average precision, and that's the metric you should be optimizing for." But in any given application it's not clear that that's the best objective function; I would argue it never is. Your objective function should somehow be tied to the next subsystem downstream. If I make a certain type of error, in this example, how does that affect the planner that sits in the AV and tries to plan the route?

How does that then affect the controller? And how does that then affect the ride and the final quality for the consumer, for the customer? That's something we thought about, and we wrote a few papers on it as well. How do you think about what I call metrics backpropagation? Or a variational analysis of: if my subsystem performs like this, how does that affect the final system performance?
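
One way to picture tying a perception metric to the downstream planner is to count a missed object against the detector only if it lies near the planned path. This is a purely illustrative sketch, not the actual metric from Oscar's papers; the geometry, the 2-meter radius, and all field names are invented:

```python
# Hedged sketch of a planner-aware recall: a missed object only matters
# if it sits near the planned trajectory. All numbers are invented.
def downstream_weighted_recall(objects, detected_ids, path, radius=2.0):
    relevant = [o for o in objects
                if any(abs(o["x"] - px) <= radius and abs(o["y"] - py) <= radius
                       for px, py in path)]
    if not relevant:
        return 1.0                      # nothing on the path to miss
    found = sum(1 for o in relevant if o["id"] in detected_ids)
    return found / len(relevant)

path = [(0.0, float(i)) for i in range(10)]        # straight-ahead plan
objects = [{"id": 1, "x": 0.5, "y": 3.0},          # on the path: matters
           {"id": 2, "x": 50.0, "y": 3.0}]         # far off the path: ignored
print(downstream_weighted_recall(objects, {1}, path))  # → 1.0 despite missing id 2
```

Plain mean average precision would penalize the missed far-away object just as much as a missed object directly ahead; a downstream-tied metric does not.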

I guess getting back to your question, sorry, I keep going off on tangents: the day-to-day. What does that mean for the day-to-day? Well, I think it depends on the size of the team. Typically you would have a team, at least we had a team, for every one of these components. But assuming it's a smaller team and you're touching everything, you're probably going to be working a lot on the speed of the iteration cycles.

So you're going to make sure that you can run experiments quickly and see the results quickly so you can compare them to other results.

I ran these 20 things; how do I visualize them, and which of these things work better? And how do I define "better," right? Then, once you have that figured out, you're going to ask, "Well, how do I figure out which data to train on?" And that, too, is a lot of infrastructure.

Language choices

Prayson Daniel: I usually say that 90%, or even more than 90%, of machine learning is about data and how you handle data, and the last small percentage is the algorithms. So let's just say we have our data streaming and everything flowing well; we are in this Kaggle utopia, except that we are still fighting over the objectives.

So when we dive deep, when you guys begin building your models, are there, like, preferable languages that you probably use, Julia, Python, or whatever? And how does it affect your daily flow?

Oscar Beijbom: Got it. Yes, we were a PyTorch shop exclusively. We were a very early adopter of PyTorch when it came out, because I had used Caffe previously, the Berkeley deep-learning framework, and that was falling out of fashion. TensorFlow was sort of the dominant one, and I didn't particularly like TensorFlow at the time. So when PyTorch came out, we jumped on that. I think that was the right decision; PyTorch is a wonderful piece of software. So if you want to get into the space, becoming fluent in PyTorch is a good idea.

Prayson Daniel: But then how...because I hear a lot of complaints when it comes to Python. Of course, I know PyTorch, TensorFlow, and NumPy are written in a lower-level language, such as C, but how does this affect the speed? Because I hear some say, "Well, maybe you should use something like Julia, or maybe you should go to C++." How does Python affect things? Because we're dealing with a system whose response needs to be almost live, from the sensor capturing something to the car braking.

Oscar Beijbom: I see. So my team and my org were a Python shop. But in a way, what we did was compile data into networks, into the weights. Then the code that operated the vehicle was C++.

Prayson Daniel: Okay.

Oscar Beijbom: Yes. And as you point out, the real-time latency of the system is extremely important. So what we spent a lot of time on is optimizing the inference speed of the network: through the architecture of the backbone itself, but also using stuff like TensorRT, which is a powerful runtime optimizer for GPUs. There are other things, like pruning and quantization: you go down to 8-bit precision, or you remove the connections in the network that you don't need. So we spent a lot of time on that to get a very, very light, fast kernel. In the end, we were running object detection, segmentation, and classification at 200 hertz on a single GPU, with eight cameras going through that one GPU.
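
The 8-bit quantization mentioned above boils down to mapping floating-point weights onto a small integer grid and accepting the rounding error. A toy sketch of the arithmetic only; real deployments use TensorRT or a framework's quantization toolkit, and this tiny function is not any of those:

```python
# Toy affine 8-bit quantization of a weight vector: map floats onto a
# 256-level integer grid, then reconstruct and inspect the rounding error.
def quantize(weights, bits=8):
    lo, hi = min(weights), max(weights)
    levels = 2 ** bits - 1
    scale = (hi - lo) / levels or 1.0            # guard against constant input
    q = [round((w - lo) / scale) for w in weights]   # ints in [0, 255]
    dq = [lo + qi * scale for qi in q]               # dequantized reconstruction
    return q, dq

w = [-1.0, -0.1, 0.0, 0.42, 1.0]
q, dq = quantize(w)
print(all(0 <= qi <= 255 for qi in q))                        # → True
print(max(abs(a - b) for a, b in zip(w, dq)) < (2.0 / 255))   # → True: error under one step
```

Storing 8-bit integers instead of 32-bit floats cuts weight memory by 4x, and integer math is what lets GPUs and accelerators run these kernels so fast.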

When is a model good enough for production?

Prayson Daniel: Nice. So now we come to the next part. When is a model good enough to go into production? How much error can you tolerate? How do you even start answering this question of metrics: what percentage of issues can we allow before deploying this model?

Oscar Beijbom: If I had a quarter or a dime for every time I was fighting with our safety team. There were these two cultures clashing. There was the autonomous industry...sorry, the automotive industry traditionalists, where you design a system and its specifications ahead of time, then you code to spec, and they check that it works. That's how most vehicles are built in the auto industry, and most industrial systems. But that's not how software is built. Software is more like: you put something together, you ship it, and then you find issues and iterate.

So you had the software world, where deep learning belongs, and then you had these robotics, industrial verification processes on the other side. And we didn't make much progress until we essentially fired all the "safety people." Because it's impossible; you can't put up these hard barriers for yourself and say, "Okay, until it's 90% accurate, I'm not going to ship." It's not that I'm against having these goals, it's more: 90% on what? It's 90% on some dataset, right? The point is that I can give you 100% on a dataset, but it's not going to include the hard data. You can only find the hard data by deploying an intermediate product, as we talked about earlier in this conversation.

Deploying some earlier version of your system in production, then finding issues and learning from them. That process of finding the issues, you can only do it when you're deploying at scale, right? Because these issues are rare; like the marathon you just mentioned, they don't happen that often. So you need to just be out there and find them.

Prayson Daniel: But when we talk about what is at stake here, some errors are more tolerable than others. If I crash a company's business because my model predicted a return on investment that did not pan out, probably some people will lose their jobs, or maybe they will fire the data science team, but that's a business-risk kind of issue. When we talk about a car, we can talk about it killing someone. So here, the weight becomes much heavier than in the other case. In this case, when do we say we are okay enough? But you partly answered this by saying that, in your scenario, you will always have someone in the car to make sure such an event does not happen.

Oscar Beijbom: Yes, you have someone in the car initially, and then you would have someone outside of the car, remote and ready. But I agree with you. I feel like I don't have a fully satisfactory answer. All I can say is that as soon as you have machine learning involved in a system, trying to define goals or performance-level metrics beforehand is a little bit of a fallacy, because it's conditioned on the type of data that you're measuring on. If you talk about pedestrian detection, you need pedestrians at certain distances, you probably also need them under certain weather conditions, right? And then you need them with certain colored clothes, because it's just going to look different. And the list goes on and on. So you end up with these strata of your data that are combinatorial, and it's very hard to exhaustively test all the different strata.
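
The combinatorial explosion of strata can be made concrete with a few lines: every combination of conditions becomes its own test slice. The dimensions and values below are invented examples in the spirit of the ones Oscar lists:

```python
# The "strata" explosion made concrete: every combination of test conditions
# is its own evaluation slice. Dimension values are invented examples.
from itertools import product

dims = {
    "distance": ["near", "mid", "far"],
    "weather": ["clear", "rain", "fog", "snow"],
    "clothing": ["dark", "bright", "reflective"],
}
strata = list(product(*dims.values()))
print(len(strata))  # → 36 slices from just three dimensions
```

Add time of day, occlusion level, and pedestrian pose, and the slice count multiplies again, which is why exhaustive pre-deployment testing is so hard.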

Security and autonomous cars: Can hackers take control?

Prayson Daniel: Let's park that one for now and progressively climb through all the issues that I think you encounter in any machine-learning problem. So let's move on to security. Once we have data and a model in place, there are things to consider. Now we have all this connectivity, and with connectivity comes security. If someone out there can control my car, that means a hacker somewhere in, you know, those countries can take control of my car. So how does this play into the whole design of an autonomous car, when it comes to the security of someone else taking over control?

Oscar Beijbom: It's a good question. I must confess, this is something I didn't interact with a lot; I just kind of assumed that it would work. I know for sure that we decided early on not to rely on an internet connection. We don't want any communication between vehicles. There are a lot of things you could potentially do where, if you have a big enough fleet, you just share information. So you have a shared world view and a better understanding of what's going on. But we decided not to do that, because it's too much communication and too much safety risk.

Now, we did decide that we need a remote operator, as I talked about. So obviously they need to communicate with the car, and so there is an open channel that can be hacked. I don't have a good answer for you. I think it's a risk, and I hope they do it right.

Nyckel: A machine learning platform

Prayson Daniel: We don't always need to have all the answers. It seems I have exhausted all my questions. Is there anything you would like to add? Anything that you're planning for that you would like to share with the whole GOTO community?

Oscar Beijbom: I guess I'll take just a few seconds to tell you about the company I started with a friend, called Nyckel. I quit my job a couple of months ago to create a machine-learning platform company. The motivation was that I have friends who are software developers, and they complain that, even with all the advances in machine learning, it's still a big lift for someone who doesn't know the technology to bring it into their app or their system.

What we try to do at Nyckel is put a very simple API around the whole machine-learning box. You post your images and your labels to Nyckel, and we train automatically. We do all that stuff that I talked about: we try all the state-of-the-art methods on your data and give you the one that works best. We deploy it to an endpoint for you immediately. Frankly, it's something I wish I had at Motional, at my previous job, because there are so many situations, even at a big company with a big machine learning team, where you just need something simple. For example, you take an image and you just want to know: is my AV in this image? For data mining purposes, right?

Well, it's not hard, but it's a lot of work, you still have to train something, you have to check that it works, then you have to deploy that model somewhere so you can call it as part of your data infrastructure, or general training infrastructure. So being able to spin up light, really fast machine-learning functions is something I was missing and something that we are working on.

Will self-driving cars become a reality in the next 5 years?

Prayson Daniel: Okay. So here comes, you know, a crystal-ball kind of question. Do you see autonomous cars driving around anytime soon, within the next 5 to 10 years?

Oscar Beijbom: Well, in San Francisco, Cruise has been making a lot of progress. They have an internal autonomous taxi service going. If I understood the news correctly, they will be opening it up for paying customers soon, or at least they got the permit to do so. I don't know exactly what that means; we'll see what it implies about the timeline. I feel pretty optimistic that they'll do it. And again, it's all autonomous in the sense that there's no driver, but I bet you a lot of money there is a remote operator.

Prayson Daniel: Because we are talking about Level-5 fully autonomous cars, right?

Oscar Beijbom: I see. So like, you mean like the Tesla, the end game for Tesla?

Prayson Daniel: Yes

Oscar Beijbom: Tesla, if anyone is going to do it... I have a lot of respect for Elon Musk, but also for Andrej Karpathy there. I think he's as good as they come as far as applied machine learning and applied AI. So if anyone can pull it off, I think they will. They also take it step by step, right? They're already doing it on highways, on bigger roads, and in parking lots. They're going to slowly but surely expand the operational domain that they support. So sorry to give you a non-binary answer, but I think it's happening. It's happening slowly, and I feel pretty optimistic that it will come to fruition.

Outro

Prayson Daniel: Definitely. Well, with that, I will bring this to a conclusion. Thank you so much, Oscar, for your time. And we hope to see you soon again. Peace, have a great day.

Oscar Beijbom: Cool, thanks for having me Prayson. It was great talking to you. Bye.
