
Denoised
When it comes to AI and the film industry, noise is everywhere. We cut through it.
Denoised is your twice-weekly deep dive into the most interesting and relevant topics in media, entertainment, and creative technology.
Hosted by Addy Ghani (media industry analyst) and Joey Daoud (media producer and founder of VP Land), this podcast unpacks the latest trends shaping the industry—from generative AI and virtual production to hardware and software innovations, cloud workflows, filmmaking, TV, and Hollywood industry news.
Each episode delivers a fast-paced, no-BS breakdown of the biggest developments, featuring insightful analysis, under-the-radar insights, and practical takeaways for filmmakers, content creators, and M&E professionals. Whether you’re pushing pixels in post, managing a production pipeline, or just trying to keep up with the future of storytelling, Denoised keeps you ahead of the curve.
New episodes every Tuesday and Friday.
Listen in, stay informed, and cut through the noise.
Produced by VP Land. Get the free VP Land newsletter in your inbox to stay on top of the latest news and tools in creative technology: https://ntm.link/l45xWQ
NVIDIA's GTC Keynote Breakdown in 30 Minutes
NVIDIA just unveiled its vision for AI's future at GTC, but what does it mean for media professionals?
We break down the key announcements from Jensen Huang's keynote, including AI factories, the new Blackwell chips, and their new project Newton. Plus, we discuss NVIDIA's partnerships with DeepMind, Disney Research, and GM, while analyzing how these massive technological advances might reshape creative workflows.
In this special episode of the Denoised podcast, we're going to do a breakdown of NVIDIA's GTC keynote. Let's get into it. All right, welcome. New location, Addy. Yeah, this is great. Thanks for stopping by. All right, so we've got a special episode. The NVIDIA keynote just happened this morning. We watched it. Let's do kind of a play-by-play summary, coming at it from the angle of M&E, media and entertainment. Yeah, sounds good. And we also saw the CES keynote together. Yeah. So we'll talk about the differences, the things you noticed that are different, things that are coming, what that means, and a summary too, so you don't have to spend two hours watching the keynote, which also had a couple of live stream issues. Oh yeah. And the whole thing cut out at one point. Oh really? Yeah. The entire live stream. Oh my gosh. The first thing that stood out to me, and this was from writing the intro: they had the garden area that they showed at the CES keynote, talking about how it's being rendered in real time with real-time ray tracing. And Jensen noted again that for every one pixel that was rendered, 15 were generated with AI. For every pixel that's rendered, artificial intelligence predicts the other 15. And I didn't check the notes, but I think at CES he said for every one pixel rendered, eight pixels were generated. So I don't know if that's an increase or a misspeak, but I think he's talking about DLSS. Though DLSS wasn't talked about at all in this keynote. Okay. Yeah. I mean, that was a big part of the CES keynote. And we use DLSS a lot in virtual production because we're dealing with really large walls at large resolutions, but we're limited by the amount of GPU power and the power of Unreal Engine, if you will. So DLSS is a quick way for us to go from, like, a 2K render on a computer up to an 8K render, or whatever the wall needs.
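As a back-of-the-envelope check on that "1 rendered, 15 generated" figure, here's the pixel arithmetic (our own sketch, using common approximate resolutions, not NVIDIA's exact numbers):

```python
# Back-of-the-envelope check on the "1 rendered, 15 generated" figure.
# Going from a 2K frame to an 8K frame quadruples each dimension's pixel
# count relative to 2K, so the total pixel count grows 16x: one natively
# rendered pixel for every 15 AI-generated ones.
res_2k = (2048, 1080)   # approximate 2K frame
res_8k = (8192, 4320)   # approximate 8K frame (4x each dimension)

pixels_2k = res_2k[0] * res_2k[1]
pixels_8k = res_8k[0] * res_8k[1]

scale = pixels_8k / pixels_2k          # total pixel multiplier
generated_per_rendered = scale - 1     # pixels AI fills in per rendered pixel

print(f"{scale:.0f}x pixels, {generated_per_rendered:.0f} generated per rendered")
# -> 16x pixels, 15 generated per rendered
```

Which lines up with the keynote claim: a 16x upscale means the GPU only natively renders 1 in 16 pixels.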
So in this case, it's without having to process every single pixel. Right. The computer figures out what the other pixels should be. It's crazy. It's generatively filling in all of the details. And our friend James Blevins has also been a big fan of DLSS. Yeah. Hyping it up and saying people need to pay attention to this. Yeah, absolutely. He's right. DLSS is supposed to be way more computationally efficient than just rendering those extra pixels out. It also pairs well with denoisers. So denoisers and DLSS together mean you just have to do less in real time. Give it a noisy image, use AI to decipher and upscale what that final image should be. That's it. Which we have talked about here in past episodes. Yep. On Denoised. On Denoised. But that was the only mention, and it just stood out to me because I don't know if it was a mix-up of the numbers, or an improvement in DLSS that they didn't really highlight, or in how they're able to do real-time rendering. That was pretty much it for a lot of the M&E-specific stuff. Moving through the highlights: he did a recap of stuff he talked about at CES and the development of AI, and that we're progressing to agentic AI agents, which we've talked about here on the podcast, and that the next step after that is physical AI and robotics AI, which we just talked about in the episode that went out today. Perfect. Yes. And DeepMind does come back to play a role at the very end. Yeah. Spoiler alert, spoiler alert. The next thing he moved on to, which stood out a lot, is AI factories. And this is a fancier word for data centers. Now, this is a very big idea. Whereas in the past we wrote the software and we ran it on computers, in the future the computer is going to generate the tokens for the software. And so the computer has become a generator of tokens, not a retriever of files.
From retrieval-based computing to generative-based computing, from the old way of doing data centers to a new way of building this infrastructure — and I call them AI factories. They are AI factories because they have one job and one job only: generating these incredible tokens that we then reconstitute into music, into words, into videos, into research, into chemicals or proteins. We reconstitute it into all kinds of information of different types. Yeah. I love the terminology here, because Jensen and NVIDIA are really differentiating the AI data center from the rest of the data centers by using a colloquial term that we all know, love, and admire: factories. Factories are the basis of revolutions. The Industrial Revolution was all factories. So, in the same sense, the next era of human achievement is going to be AI factories. That's the way he puts it. Yeah. And the way they framed the conversation at the beginning was how computing in the past was done on a retrieval computing model, and the future he's trying to get to is a generative computing model. So instead of retrieving the files or finding the things you're looking for, you're generating the things you're looking for. Yeah: generative AI fundamentally changed how computing is done. From a retrieval computing model, we now have a generative computing model, whereas almost everything that we did in the past was about creating content in advance, storing multiple versions of it, and fetching whatever version we think is appropriate at the moment of use. And that was another theme that came up recurrently in the keynote. I mean, a lot of what Jensen said is such 30,000-feet-up CEO speak, if you will, that I have a hard time following, because I'm a concrete-example kind of guy and I was looking for examples and cues and things like that. But again, this is in the realm of: what do you mean you're going to generate the thing that I'm looking for?
So is it going to remain there after I finish with it? Or do I just make it every time I need it? Right. Yeah. It was like, what does that look like? What does that mean? Are we getting rid of buckets and files? But it's his job to stay super ahead, you know, two, three, four steps ahead of where we are, and this keynote was a combo of, here's things that are coming out that you probably already knew about, and then here's our roadmap for the next few years. And I feel like, coming at this, he kind of had to, because this keynote comes after DeepSeek came out, and it's like, well, hey, do we need AI factories? Do we need these massive things anymore to run these models or train these models? And just to put it into context, DeepSeek did — not a lot of damage, but a significant amount of damage — to NVIDIA's stock price. Yeah. And we did see the stock price dip during GTC, and it never really recovered from that initial dip. Yeah, and I don't know what it's going to be like after people see it, but it's a bit of, well, do we need these huge factories? Which — when people build factories, they need to buy a lot of your chips, as much of your hardware as they can. And I'd say there are still some cases he was talking about where, yes, you would need that. Yeah, I mean, there's no doubt that the world is already building, quote unquote, AI factories, right? If you look at any of the major tech companies like Meta or Google, they have data centers that are specifically doing AI computation at scale — quote unquote, hyperscalers. So these are all new terms, at least for us here in M&E, that we're having to get accustomed to.
Yeah, and one of the examples he gave — he spent a lot of time explaining the challenge for an AI company: they're trying to generate millions of tokens, and tokens are sort of the backbone of generative AI, the things you make that turn into text or images or speech. Generating millions of tokens quickly is a balancing act. He basically gave the example of the same issue search has: if you type a search query, you want a response quickly, and the longer it takes, maybe you get a better response, but if it takes too long, people bounce. From a user perspective. Absolutely. I love what NVIDIA is doing with the terminology and language around all of this. They're really putting into structure what we'll all be saying regularly over the next few years. So tokens, from what it sounds like, are the currency of inference. So, for example, for ChatGPT to generate one word of response, they need to generate, you know, a hundred tokens to get to the end result of one word. So if you want to serve a hundred customers, and each customer needs, say, a thousand tokens a second to be served, then you're making 100,000 tokens a second as a factory. Yeah. And the factory is the backbone — when he said they're AI factories because they have one job and one job only, generating these incredible tokens that we then reconstitute into music, into words, into videos, into research, into chemicals or proteins — you need the factories, you need their processors, to make the foundation. And I love the fact that Jensen is generalizing all the different segments of AI down to tokens, which is like the US dollar: you can buy so many things with a dollar, and it all comes down to how many dollars you can earn so you can buy those things. The solution to this was one of the big announcements: NVIDIA Dynamo. NVIDIA Dynamo does all that.
It is essentially the operating system of an AI factory — basically, it seemed like, a new operating system for AI factories. Yeah, so the way Jensen put it: right now, an AI factory has to ride on top of many layers of technology. You have to run VMware, VMware builds virtual machines on the cloud, and that cloud runs on the data center, and the data centers use GPUs. So there are all these layers of technology working together. The way he puts it, Dynamo goes directly to the NVIDIA infrastructure, which encompasses their GPUs and their connections from one GPU to the next, one rack to the next. So it's basically turning your racks of GPUs into one single massive GPU that this layer is able to communicate with that quickly. And another one of the products announced was integrated silicon photonics, which is a mouthful. And I think you understood this better than I did, but it's a way to speed up networking across all the different racks. Yeah. So right now, if you put a data center together — let's say a giant data center has somewhere around a hundred racks, and within those hundred racks you have 15 or 20 different servers each — they all have to interconnect. They all have to behave as one single organism. The way they do that is there's a fiber connection from one machine to the next, to the next, and then that rack is fiber-connected to the next rack and down to a mainframe switch, if you will. What if you were to bypass all that, and that fiber could connect directly to your GPU and then to the next GPU? That's what this is. This is an optical interface right out of the silicon. I've never seen anything like this before. That's why I was trying to wrap my head around it. But basically, yeah, it's a faster, better way — you can make these more direct, higher-throughput links from GPU to GPU. Yeah.
You're no longer using copper or electricity to transport information. You're using light, which is going to have higher throughput, always. You can do terabytes of information per second on a fiber. And what's even cooler about it is, because it connects directly at the GPU level, stringing together tons of GPUs becomes easier than ever. And we're still talking data center scale, massive scale. These are enterprise products. I just want to go back one step and look at the flavor and feel of this conference. CES felt more like NVIDIA is doing a lot of cool stuff, quote unquote. Mm hmm. Obviously consumer-focused at CES. Yeah. So we got to see more Omniverse stuff, tons of consumer stuff, more autonomous car stuff. Digits — they announced Digits. A personal AI computer. An actual computer, put-it-on-your-desk kind of thing. Yeah. This one, this is their own conference, and because of that they get to drive the message and what they want to share with the world. It felt like most of NVIDIA's energy here was spent on data centers. Yeah. And looking at past years as well, this is a big enterprise-focused, data center conference. Yeah, so clearly where they see their business, or the bulk of their business, is in the hands of hyperscalers who are going to build these massive AI factories. I mean, we're talking about 100-megawatt factories. That's like the output of a nuclear power plant, right? And that's why you need a nuclear power plant to power these. Yeah. That was mentioned, right — a 100-megawatt factory using Blackwell chips, which can generate 12 billion tokens per second. Yeah, and I believe the comparison was, if it were on Hopper chips, something like 300 million tokens a second. Which is still a lot.
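To make the token arithmetic from the conversation concrete, here's a quick sketch using the illustrative numbers quoted in the episode (these are the hosts' examples and keynote figures as they recalled them, not verified specs):

```python
# Token-throughput arithmetic from the conversation (illustrative numbers).
tokens_per_user_per_sec = 1_000        # Addy's example: tokens needed to serve one user
concurrent_users = 100
factory_demand = tokens_per_user_per_sec * concurrent_users
print(f"{factory_demand:,} tokens/sec to serve {concurrent_users} users")
# -> 100,000 tokens/sec

# The 100 MW Blackwell "factory" figure, as quoted in the episode:
blackwell_factory_tps = 12_000_000_000   # ~12 billion tokens/sec
users_served = blackwell_factory_tps // tokens_per_user_per_sec
print(f"enough for ~{users_served:,} concurrent users at that rate")
# -> ~12,000,000 concurrent users
```

Which is why tokens-per-second ends up being the "currency" of these factories: demand scales linearly with both users and the sophistication of each response.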
Yeah, but the argument Jensen is making is: as your need for AI becomes more sophisticated, you'll need more tokens per user to generate that response, or a video, or whatever — especially if we're going to go with his vision of a generative computing future. Yeah, where we need the factory — instead of retrieving, you're generating. Yes. So this all ties into his prediction and his worldview. How much do you think that will be the case? Good question — coming from our industry, where we probably deal with some of the largest files, raw media, raw video, and you're saying, oh, we're not going to do a retrieval system anymore, we're going to do a generative system? Yeah, I don't know about that. What does that even mean? Well, maybe not for media and entertainment. Maybe it doesn't apply there, but certainly for, you know, social media, user-generated content — a lot of that could just be generated on the fly. I could see that. Yeah, or maybe it's a combo, where you have your source files, but you're generating your output on the fly — generating, like, your social media ads based on a collection of photos and videos. Again, both of us are super speculating here, because he didn't give actual concrete examples of that. No. But what I will say is the entire keynote was based around one theme, to me: that you can just keep throwing hardware at the problem as the problem gets bigger and more complex. Yeah, for a lot of stuff. So not only GPUs — now you're throwing factories at the problem. Or calling them factories. Data centers, which we've had for a while, but yeah, AI factories. On the flip side, one of the things they did call out, in one of their little pie charts of new features or code bases, was 6G — Edge, 6G.
But one of the areas that I'm super excited about is Edge. And we announced today that Cisco, NVIDIA, T-Mobile, the largest telecommunications company in the world, Cerberus ODC, are going to build a full stack for radio networks here in the United States. And that's going to be the second stack, so this current stack we're announcing today will put AI into the edge. You've worked in 5G? Yes. Where does 6G go, and what does this potentially mean for AI? I'm assuming this is some sort of edge AI, functioning on your phone locally — you don't need an AI factory to run a model. Maybe we should ask ChatGPT, but when we were working on 5G — we were deploying 5G at Verizon, this was around 2018, a few years back — Verizon at the time was already moving on to 6G. So at that time they were specifying what exactly 6G meant. Is it one terabit a second download speed? Is it sub-millisecond latency? Is it an edge computer that's always connected to your phone, doing all the heavy lifting, with your phone as just a thin client? So it's not clear to me what 6G means, but clearly we're past 5G and moving into a world where the interconnectivity needs are so great that we have to create a whole new architecture to accommodate them. Interesting. The other announcement and big focus was on autonomous vehicles, self-driving vehicles — Cosmos, which you talked about in the last episode, their data set for driving and understanding the real world to create autonomous vehicles. They announced a partnership with GM, where GM is going to be using NVIDIA chips to power their self-driving fleet. Yeah, that's a practical use case. Pretty dope. Yeah. And this is something that has to happen in real time — obviously, we literally just covered this last episode. Yeah, self-driving vehicles are already here.
I'm in Santa Monica and I literally just saw 10 Waymo cars driving around here. The question is, how do we get this to scale? How do we get the cars to be more autonomous, so that if a car is offline it can still do what it needs to do and just do inference locally, on-device? Right. Yeah. So you're driving the canyon areas here and you lose cell phone reception, and you're like, I can't pull up my maps anymore, how do I get out of here? If you're the one driving, not a big deal. But if your car is reliant on computing, it has to figure that out. And then, like you and I were saying, it has to compute in real time at hundreds of frames a second, because the car is moving so fast, right? Even with, like, a 30-frames-per-second camera — whatever milliseconds at whatever speed, that's the chunk of distance you're covering, right? So the requirements are enormous. So I think NVIDIA is right to be in this business, because it's also such a massive business. It's trillions and trillions of dollars in people reinvesting in their vehicles, buying a car that can drive itself, over the next 10, 15 years. Yeah. We're talking about something that will revolutionize transportation. Right. So I'm curious to see where that goes with GM and how they release that in the cars. Yeah. I just quickly asked ChatGPT about 6G, and yeah, I was right. So 6G is supposedly 1 terabit per second, which is 100 times faster than 5G. Okay. Lower latency? Expected to be under 1 millisecond. Yeah. So I was right about that too. 5G is anywhere between — I believe it's single-digit milliseconds. What does that even look like when they say we've got to upgrade to 6G? Is that more antennas? Different antennas? Completely different infrastructure? When we went from 4G to 5G, could the 4G stuff still be used? Did they repurpose the existing antennas?
Do they build new ones? Do you have to have more? How does that work? The 4G antennas remain, because your phone can interchange between them — it has two antennas. But the way 5G coverage works, and I think I went over this on one of the podcasts, is you just need to blanket the area with more cell towers, because it's connecting at different frequencies at the same time, whereas 4G is like a single direct link. Imagine 5G as 10, 15 different links at the same time. So in an area like the one we're in, one 4G tower will cover 10, 15 city blocks, but you'll need one or two 5G towers per block, or something insane, to cover it correctly. Okay. Yeah. So I would imagine 6G is even crazier. Even more towers for 6G? Yeah, possibly. Trying to scrape up every M&E thing that was covered: they always have these really cool animations flying around through their San Jose headquarters, so they had these 3D animations moving around the office. And he did mention that these were all Gaussian splats — Gaussian splats, just in case. So they scanned the entire building and did these animations with Gaussian splats. Yeah, it's cool. I mean, I'm scraping for M&E stuff here. Well, Omniverse — Omniverse is everywhere. Omniverse is now the primary engine to train all of the autonomous vehicle models. And if you combine Omniverse with Cosmos, which is their world model, you can pretty much simulate any robot in that environment — or rather, you can have a robot train in that synthetic environment. Okay, right. Yeah, there's a ton of uses — Omniverse is very omni. It's very big. Yeah, we've seen it with self-driving vehicles, and we've seen a lot of factory planning — building out your factory and running simulations. They also talked about building out your AI factory data center and running tests and simulations, which is a new tool and use case they focused a good amount of time on. It's like, how do you actually design and build this hundred-megawatt, insane AI factory? Well, NVIDIA has a tool for that. Yeah, we have a tool for that. They're basically making it frictionless for you to spend a billion dollars on a factory. Yeah. As far as new chips and updates, they did announce the new Blackwell Ultra NVL72 — a bigger, better Blackwell chip coming out later this year. And then they laid out the framework for the next generation of chips, which they're going to call Rubin and Rubin Ultra, named after the astronomer who discovered dark matter. Yeah. And that's going to come out, they said, in the second half of 2027. Wow. So that's a big announcement — that's hella early to be laying it out. Yeah, announcing it now when it's going to come out about two years from now. Right. Yeah. I think he just wants to give people this roadmap: there are bigger and better things coming. I think it's also a stock price play. I mean, look, the biggest product for any public company is their share price, so the entire conference is meant to build confidence in investors. Yeah. And especially when they're riding so high, but everything's tied to a quarter, and it's like, well, what's next? What's bigger and better? And it's like, oh, we've got a million chips everywhere training AI stuff — it's like this stuff isn't happening fast enough, we need bigger and better. The other hardware thing was DGX Stations. These range down to sort of personal desktop size — basically AI workstations for AI training and data science — and they're going to partner with all the major computer manufacturers to sell versions of these DGX Stations. I have no idea what the pricing is. It seems like a much higher-end computer. Yeah.
Have you seen one? Yeah — we almost bought a DGX Station six, seven years ago when we were doing heavy Unreal work. At the time, I think it was four or maybe eight GPUs in a single box, retailing for a hundred thousand dollars or so. Okay. Yeah. I don't know if this is the max or just the average, but I think it's 784 gigabytes of unified memory. That's so important for AI stuff, the unified memory, as I'm finding out doing a lot of local stuff on my computer, and reading more into what we talked about with the updates to the Mac Studio. That was something too — 512 gigabytes, unified. Yeah. People are very impressed with what it's able to run. Yeah, when memory is unified, it's shared across the CPU and the GPU, so there's no transfer between the two. What would you use the DGX for? I mean, aside from the use case of, let's just get a souped-up station so we can run Unreal — is that better than just doing a build-out yourself? What are the use cases — what you would do, or what NVIDIA envisions? Both. So for me, it's running a really fancy version of ComfyUI and doing a lot of local inference — local video generation, style transfers, things like that — as a creative technologist, just running AI locally. I think how NVIDIA envisions it is: any kind of professional work that you do, whether you're an accountant, a software programmer, or a content creator, you're going to be using AI for it, and you need a different type of compute infrastructure for the next generation of tools than what you've traditionally had with a CPU and GPU. If you ask NVIDIA, they'll say you need a DGX for every type of professional work. That's just my guess. Yeah. I'm curious to see where the heavy VFX, or the M&E applications generally, are for getting a souped-up computer like this.
And then they touched on robotics — Omniverse came up again. Physical AI and robotics are moving so fast. Everybody pay attention to this space. This could very well be the largest industry of all. Cosmos, which is their real-world data set for car driving — yeah, it's understanding the 3D world. It's the closest thing we have to a world simulation. Mm hmm. So it takes into account things like air, water, physics-based collisions, materials of different types — I'm sure steel, bricks, wood, what have you. The more you put onto the Cosmos model, the heavier it gets and the more computationally expensive it is to simulate. And lastly, the big finale: so today we're announcing something really, really special. It is a partnership of three companies — DeepMind, Disney Research, and NVIDIA — and we call it Newton. And then he brought out the BDX droid robot from Star Wars that we just talked about on the last podcast, and that we've seen in some Imagineering videos and the whole training process. That robot's gone viral, dude. Yeah, I mean, it's extremely cute, and it's going to go even more viral after this, because it kept talking and beeping to him. I did notice the robot had a mic pack taped to its back. Why do you think that is? So it could pick up the beeps and the noise and stuff. Yeah. So it was a very cute interaction between the robot and Jensen. They didn't really go into much about what Newton is or what this partnership means. The only thing he did mention was that the droid had two NVIDIA computers inside it — just so you know, Blue has two computers, two NVIDIA computers inside — and he didn't say anything else. It goes back to your question of why you need a DGX: when you're doing local inference, you absolutely need a ton of hardware to compute it in real time, and we covered this literally on the last episode.
It's like, the reason robots are back in the spotlight now is because we're pretty close to having real-time AI systems, if not already there — stuff that can truly be autonomous. You don't have to program it; it'll just figure out the world. And that plays right into this Newton partnership. Mm hmm. Figure out the world — if it has the chips on it, it can also process it. If it has Cosmos loaded on it. In real time. Yeah. Right. And something small enough that it could fit in this pretty small robot. Yeah. And one of the terms you'll hear a lot is distilled model. What that means is taking a big, heavyweight foundational model and essentially making a lightweight version of it — a travel pack, if you will — that can then run on the limited hardware of a robot. Yeah, that was pretty much the highlights. GR00T N1, which was their foundational humanoid robot model — they're open-sourcing it. So I guess the idea there is to make it easier to use. How do you open-source a robot? I think it's the training language or model. Okay, sure. Yeah. That was the highlight reel from the keynote. I do want to see the DeepMind humanoid, Tesla humanoid, NVIDIA humanoid all just battle it out. Wouldn't that be cool? Through an actual battle or through a challenge? Through a mind game. Through, like, an obstacle course. How about they play blackjack together? Or like Ninja Warrior. Yeah, yeah, but with the robots. With an obstacle course. Absolutely. What were your thoughts on the whole conference? How did you feel, and what did it feel like NVIDIA was driving towards? Not being someone who is opening billion-dollar data centers, it felt a bit above my head. I didn't feel like there were as many oh-wow moments. I mean, I get what this keynote is targeting — it's not targeted at us.
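On the distilled-model idea mentioned above — a small model trained to mimic a big one so it can run on a robot's limited hardware — the standard recipe is to match the student's output distribution to the teacher's temperature-softened one. A minimal toy sketch (our illustration with made-up logits, not any real NVIDIA model):

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax; higher T exposes more of the teacher's
    'soft' knowledge about which wrong answers are almost right."""
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on softened distributions --
    the core training signal in model distillation."""
    p = softmax(teacher_logits, temperature)   # big foundational model's soft targets
    q = softmax(student_logits, temperature)   # lightweight distilled model's guesses
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [4.0, 1.0, 0.5]   # toy logits from a large "foundation" model
student = [3.5, 1.2, 0.4]   # toy logits from the small on-device model
loss = distillation_loss(teacher, student)
print(f"distillation loss: {loss:.4f}")  # near zero once the student mimics the teacher
```

Training the small model to drive this loss toward zero is what produces the "travel pack" version that fits on a robot's onboard computer.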
I was thinking there might be some more gaming-related stuff, which, for M&E and for filmmaking — we adapt a lot of things from the gaming world, obviously with Unreal Engine and beyond. So yeah, I was wishing there was a little bit more there, even anything on real-time ray tracing. Yeah. But definitely the end, focusing on robotics — I wish I knew a bit more about what Newton is. That was exciting. And it's kind of sad in a way, where it's like, well, they could just be making bigger and better chips — okay, what else is new? But when we step back and think about it, the things they're building are just absolutely insane. It's insane. And we were talking while we were watching it, about all of the areas and fields — it's not just that they're building the chips, they're building the hardware to manage the data centers, and building the things to optimize the data centers themselves, and so many other things, and then so many industries, when they displayed the board of, like, quantum physics and weather prediction. Yeah, chemistry. Yeah, everything. I don't know how Jensen sleeps at night, because we obsess over just one of these — unless he really is a robot. Like five of them, five of them. He is Newton. Yeah. Look, there's no doubt that NVIDIA is clearly leaps ahead of other companies as far as AI innovation goes. I think they really are pulling the world in a direction that they want. My only note here is: does the world need multiple 100-megawatt data centers, or AI factories? Is that really necessary? Do we really need to burn that much energy to get to the next revolution, the next level of technology?
I should note, too, one of the things they talked about or highlighted — I don't have the specific numbers — is that with the improved chips that are coming, like the new Blackwell and the Rubin down the line, part of the gist is that they can do more, and do it running off less power. So it seems like, yes, they're aware that these things are crushing energy, and of the other issues that come with that. Do you know the LED light paradox? No, what is it? So, supposedly, when we went from incandescent lights — and for a second there, compact fluorescents — to LED lights, everybody, all the leaders around the world, and especially the scientists on the energy side, thought we would consume less energy, because it's incredibly more efficient: a hundred-watt light is now five watts to run, right? But we leave them all on all the time. That's it. Yeah. It's like, I'm not going to bother turning that off. The exact same thing happened with the plastic bag ban here in California. Oh, what happened? The same exact thing. They banned plastic bags — 10 cents for a bag — but if you've ever shopped in California and you do buy a bag, usually the bags are a very nice, thick, heavy-duty plastic. You feel like you're getting your 10 cents' worth. Yeah. And so most people are not bringing their own bags — it didn't change the behavior of bringing your own bags, which was the intent. They don't bring their own bags; they just pay the 10 cents. But these bags use way more plastic than the cheap, flimsy bags that used to be free. So people are using the same number of bags, but now the bags use more plastic because they're thicker, so you're throwing away way more plastic. Right. And they're not reusing them. Yeah — basically no behavior has changed.
It became like a tax on shopping, and the bags use more plastic. That's what I'm saying. So even if we get to a world where we're doing way more GPU computation on a much smaller amount of energy, we're going to consume way more energy, because we're just going to make more complex models possible. So I think — what is the limit? What is — well, AI computation where the entire world is a digital twin. We have a multiverse of digital twins in AI factories. And ultimately it's the Matrix, right? A complete simulation of everything. And we know how they powered their robots, their computers. I don't know if it was the most efficient version of it, but okay. That's a great way to end this episode. Yeah, we'll end it there. All right, thanks everyone for joining us on this special episode. Links for everything we talked about — the keynote and other press releases — we'll put on the website, denoisedpodcast.com. And we'll see you on the next regular episode from our regular studio. Thanks everyone. Bye.