Banking on Information

Inception at NVIDIA: Empowering Startups in AI and Beyond, a SPOTCAST with Matt Akins

Rutger van Faassen Season 1 Episode 12


In this special Money20/20 SPOTCAST episode, I speak with Matt Akins, Startups @ NVIDIA, about how NVIDIA is partnering with startups, some of which are here in the AI Pavilion at the Money20/20 conference in Las Vegas.

Matt Akins discusses the Inception program, which supports AI and data science startups through technical enablement, software resources, and marketing access. He explores the future of AI, robotics, and autonomous vehicles while emphasizing NVIDIA's commitment to democratizing GPU access and advancing inference-based models for efficient computing.


Key Words

  • NVIDIA
  • Inception Program
  • Startups
  • AI
  • Data Science
  • GPU Scarcity
  • Technical Enablement
  • Open-source Software
  • Robotics
  • Autonomous Vehicles
  • Future of AI
  • Compute Power
  • Inference Models


Hello, and welcome to another podcast. We're doing this live here at Money20/20. This is Matt Akins, who works with startups at NVIDIA. Welcome to the podcast, Matt. That's right. Yeah, thank you so much. We always start with this very important question, which is: why do you do what you do? So, Matt, why do you do what you do? Yeah, so I've always been very into technology. From when I was a kid, I was building my own computers. In college I was a computer science and physics major, always a tinkerer. My first job out of college was actually working with VMware, and those were obviously very early days for what became extremely significant technology from an infrastructure standpoint.


So yeah, ever since my time there, I've always gravitated towards the latest and greatest in technology; I'm passionate about technology. So no wonder that you're at NVIDIA now, absolutely! So what do you do? Startups at NVIDIA, what does that entail? Right. So we have a program here at NVIDIA called Inception, and Inception is a program for startups that gives them really anything a startup might need to begin or advance their journey in and around AI or data science. The creation of the program was really during the COVID era, when GPUs were such a scarcity, and Jensen saw this problem: the big guys were buying up all the GPUs, and it became nearly impossible for two guys in a garage to start a new company around AI and get all the resources that they need.


So that was sort of the origin of the program, and now we've matured, and we have relationships with VCs. We do a ton of technical enablement, so a lot of what I do is around software. Many people are aware of our GPUs and hardware, but what folks may not realize is that you can get those kinds of exponential speedups through a lot of free software enablement. So that's where I spend most of my time and that's what we try to do. And then we also have events like this: a lot of our partners are presenting here and getting access to our, I'll say, global marketing engine. Yeah, that's pretty cool. We're here at the NVIDIA AI Pavilion at Money20/20. Now, how do your customers, the startups, get value from what you guys do?


Yeah, so as I said, our program is free for our partners to join, so there's almost no overhead in joining the program. I start many of our conversations with just an early, technically focused introductory call, where I try to get an understanding of what they're doing, how they're building things, and really look for ways that we can help them do what they're doing better. A lot of folks, you know, obviously GPUs can be costly in a public cloud environment. So what we try to do is introduce software, free software, open-source software that can help them do what they're doing in a more timely fashion, so they can limit those heavy training hours. So you make sure that even startups get access, because the big guys could buy it all up, but you're actually making sure that there are some GPUs available for startups as well.
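
To make that kind of software enablement concrete: the episode doesn't name specific libraries, but NVIDIA's open-source RAPIDS cuDF is one example of the sort of tool being described, and a minimal sketch of accelerating an existing pandas workload (assuming a CUDA-capable GPU and the cudf package installed) might look like this:

  # Minimal sketch, assuming RAPIDS cuDF is installed and a CUDA GPU is available.
  # The specific library is an assumption; the episode only mentions free,
  # open-source software for GPU speedups in general terms.
  import cudf.pandas
  cudf.pandas.install()   # route supported pandas operations to the GPU

  import pandas as pd     # existing pandas code below runs unchanged

  df = pd.DataFrame({"merchant": ["a", "b", "a", "c"],
                     "amount": [10.0, 5.0, 7.5, 3.0]})
  print(df.groupby("merchant")["amount"].sum())

The point of this pattern is exactly what Matt describes: shortening expensive GPU hours with freely available software rather than more hardware.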


Exactly. That's great that you do that, and it's also very powerful and valuable for them. I like to do this thing called futures thinking. We don't know what the future holds, but we can think about possible futures. So if you think 10 years out, so 2034, what does the world look like when it comes to partnering, when it comes to AI? Give me some ideas of how you see the future, a possible future. Sure. So in terms of what the future will look like, obviously it's going to be very interesting from an AI standpoint. If you look at the lifecycle of AI right now, to put it in quarter terms, being here at Money20/20, we're in Q1, probably very early Q1. So we're just beginning to see what's possible.


In 10 years, I think we will see the realization of a lot of the dreaming that we're seeing right now. Probably the most prevalent will be in robotics and in that space, you know, assistants and autonomous vehicles, things like that. Yeah. So in terms of where our business will be, I mean, it's really hard to say because, you know, again, we're pioneering. To put it in gold rush terms, we are providing the picks and shovels for that early gold rush, and as to what all of that gold becomes, it'll be exciting to see for sure. We are still in very early innings, or early quarters, however you want to say it. It is very exciting.


Now, if we take that possible future and think about today: what can we do today, and what is NVIDIA doing today, to get ready for that future? Sure. So, I mean, we're obviously always pushing the envelope, even just from a compute standpoint. It's probably the easiest to see on paper, right? So when you look at Moore's Law, you know, the growth of speed every five, ten years, that's obviously come to a halt. And now that we have GPU-accelerated computing, we're seeing much more exponential growth than that. But it'll be interesting to see, as these models become more prevalent and more advanced upstream in open source, whether less of that compute may be required, in which case we will probably focus more on the inference side. So everyone will be running small language models, or small versions of these complex models as we see them today, and so having those resources available


at the far edge or in our pockets, wherever the compute may be. Right. Well, very exciting. Let's wrap it there. Thank you very much. This was another episode of Banking on Information, this podcast recorded right here at Money20/20. Thank you so much for being on the podcast. Yeah, thank you so much for having me. Perfect. And until next time, choose to be curious.
