Denoised

Qwen Edit - The Free AI Image Editor That's Better Than Flux Kontext? Plus More AI in Film Updates

VP Land Season 4 Episode 55

Chinese AI giant Alibaba drops Qwen-Image-Edit, a free open-source model rivaling FLUX Kontext – but what's their long-term strategy? This week we dive into the flood of impressive AI tools coming from China, explore Runway's major platform updates (including Veo 3 integration), and examine emerging AI audio solutions.


--

The views and opinions expressed in this podcast are the personal views of the hosts and do not necessarily reflect the views or positions of their respective employers or organizations. This show is independently produced by VP Land without the use of any outside company resources, confidential information, or affiliations.

All right, welcome back to Denoised. We're going to cover our week of AI updates for the filmmaking world. 

Again, another week and another round of updates. Interesting updates, 

yeah.

Yeah, actually, first, I want to tell you a story. So we've got the new video I talked about last episode up on the channel covering Vu and some of their new updates and stuff that they're building. They're kind of building like a holodeck. That's sort of their vision, like an AI holodeck. Sure. You know, they have these big portable LED screens that kind of roll around.

And they're like, what if you could just talk to that and create with that and use that as like a super smart creative board? Yes. That's kind of the gist of where they're going. But the funny thing is, when we shot it, the opening is a demo of KJ saying, create an image of an office. And when we shot it, we didn't get a shot of the final office image appearing on the board.

And it was really bothering me that we didn't have this final hero shot. So I'm like, maybe we could generate it. So I took a screenshot from the film and then I did the little Veo 3 trick of marking it up with some text instructions of what I want. And then I gave it to Veo 3. And it generated it.

No way. And it's up there. And I didn't tell him. I sent a cut to Tim, the CEO, and I'm like, hey, check out the video. And then he watched it. Yeah, that's awesome. And then I'm like, anything seem weird to you? He's like, no. I'm like, one of the shots is from Veo 3. He's like, no way. Yeah, go check the video out.

And if you see the shot, leave a comment. Let me know what shot you think it is. 

Look out for it. Now you're 

going to be looking for it. But if you watched it before, that's awesome. 

That's such a real world use of AI saving the day. 

Yeah. An insert shot you didn't get. 

It's also a super practical use.

A stickler. You're a stickler. 

You need the payoff. You need that shot. You needed that shot. It was really bothering me. We had the setup, but we had no payoff reveal of the thing that was created. 

That's awesome. 

Yeah, put that in the toolkit. Things that can work. 

The whole sketching and then writing notes on that frame.

Yeah. 

It works. 

Yeah, it's so interesting to see some of these kinds of tricks that people discover and are able to apply. So that worked with Veo 3, but with some other models people realized, oh, you can kind of do the same trick with that too. Yeah. And then another sort of recent trick is specifically with Seedream, which we've talked about a bit, the ByteDance model that I really like.

People realized that in the prompt, if you tell it "change shot to" something else, and then "change shot to" again in the one generation, you can make like three different shots. Because you're in the same generation, you get that world consistency. Right. And you get a couple of shots out of the same starting frame environment.

Yeah. Same lighting. Same sort of look, which 

reminded me of that thing we talked about from that hackathon a few months ago with Dylan, our man Dylan, and that trick where he was generating a bunch of shots in the same generation. He was onto this a while ago.

He was brute forcing 

it.

And then now you can just do it. Yeah. Now it's a lot cleaner, it looks great. And we don't need Dylan 

anymore. Sorry, Dylan. 

Dylan GPT. 

Just joking. 

Yeah. And then I think with some other models people have realized, you can kind of do the same thing with this model too. So it's interesting. Someone figures out a hack with one model, and then other people adapt it and test it out with other models. And you realize, oh, you can kind of do this too. Oh, that's so cool. Yeah.
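To make the "change shot" trick concrete, here is a minimal sketch of what a single multi-shot generation might look like through a hosted API. The model slug, input fields, and prompt wording are all assumptions for illustration, not the exact setup discussed on the show; check the model page of whichever host you use.

```python
# Hypothetical sketch of the multi-shot "change shot to..." prompt trick in one
# Seedream generation. The model slug and input keys are assumptions; confirm
# them on the hosting provider's model page before running.
import replicate

output = replicate.run(
    "bytedance/seedream-3",  # assumed slug for a hosted Seedream endpoint
    input={
        "prompt": (
            "A fishing boat at dawn on a calm sea, cinematic lighting. "
            "Change shot to: close-up of the captain's weathered hands on the wheel. "
            "Change shot to: wide aerial view looking down on the same boat."
        ),
    },
)
# Because all three shots come from the same generation, they tend to share
# the same environment and lighting, which is the whole point of the trick.
print(output)
```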

All right. So first update, probably the biggest one I've seen this week: the Qwen image model we talked about last week came out from Alibaba, an open-source model. Yep. They've now released Qwen-Image-Edit, which is basically another version of the model, open source, but you can give prompts to modify an existing image.

Sort of like FLUX Kontext, but this is open source. You can run it on your computer, make changes, kind of your own portable, 

Comfy, a Comfy workflow? 

There is a Comfy-native workflow already built that you can load up yourself and download the models, or you can run this off fal or Replicate or any of the other APIs. But yeah, awesome to see something that is at FLUX Kontext level.

For free. Free. Open source. Again, as we talked about last week, it was like, what's the play here? Why is this free and open source? But yeah, pretty impressive. 
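If you don't want to set up the Comfy workflow locally, the hosted APIs are the quickest way to poke at Qwen-Image-Edit. Below is a minimal sketch using the Replicate Python client; the model slug and input field names are assumptions, so verify them against the model page on Replicate, fal, or whichever host you pick.

```python
# Minimal sketch: prompt-based image editing with a hosted Qwen-Image-Edit model.
# The slug and input keys below are assumptions; check the host's model page.
import replicate

with open("office_frame.png", "rb") as image_file:
    output = replicate.run(
        "qwen/qwen-image-edit",  # assumed slug on Replicate
        input={
            "image": image_file,  # the frame you want to modify
            "prompt": "show the finished office render appearing on the LED wall",
        },
    )

print(output)  # typically a URL (or list of URLs) pointing at the edited image
```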

As we're wondering what the Chinese AI companies' secret game plan is, and I have some tinfoil hat theories on this, every week it's just like boom, boom, boom, boom.

And this is a 20 billion parameter model. You know how expensive that is to train? It must cost millions of dollars, thousands of GPUs. I mean, they do. It's Alibaba. They've got the GPUs, right? But you still need the electricity, the engineers dedicating the time to run it, to train it and watch it. Each training run takes months.

So they started on this probably end of last year or something. Yeah. It's crazy. Yeah. 

Where's your tinfoil hat theory? 

Oh, you guys want to hear it? You want to hear it? Okay. All right. I'm going to lose some viewers on this one, but I think it's worth it. 

Now we're going to hear what this is. 

I think the Chinese are bombarding us with advanced AI models.

So we lose the ability to make our own high-end models and we become more and more reliant on China. This is a big geopolitical play. 

Is this like the flood-the-market-with-cheap-manufacturing equivalent? 

Yeah. And it worked, right? Now we're trying to bring manufacturing back to the US, but we can't. Like, oh, we haven't built a plastic toy in three decades.

Toys R Us used to buy from Minnesota back in the seventies, but the Chinese have made it for the last 30 years. Now what do we do? I think they are just making sure that all of our consumer and enterprise AI needs are met by their models. And eventually our AI muscle will atrophy. That's my theory.

Come on. What do you think? I don't know. I don't have a theory. I mean, my only thought from before was, get people used to it and hooked into this in their workflows, and then switch to a charging model, or switch to, 

I don't think they care about the pennies that would make. I mean, honestly, they don't. They'll never make the money back on Qwen.

No, I don't know. I don't know. 

Yeah. The other thing, aside from 

just the geopolitical play of, we would rather the world be dependent on Chinese models. 

You've got to also remember China is always playing the long game. They're thinking a hundred years out. Right. And because AI is such a disruptive and transformative technology, they know America has the upper hand right now.

Like, the best models are still here, OpenAI is here, Google is here, and so on. But if they do this for a long enough time, then the OpenAI business model breaks, the Google stuff, they transition out of AI, go back into search engines or whatever they do. And then everyone just uses the Qwen models, the Alibaba models, the Tencent models.

And maybe that's how they sort of make sure that they still have the upper hand. 

Or this is the only way to compete in a noisy space: make it free. Sort of what Meta was trying to do with Llama, where they were kind of late to the AI game, and then they built Llama and said, we'll just make it open source.

Hey, use it, adapt it, because... they're too late to compete with ChatGPT 

or Claude. Also, Meta doesn't need the money from Llama. Exactly. That's... I think it's a combination of maybe both of those things. 

Yeah. The only way to play is to make it free. Yeah. Yeah. I've been messing around with it a bit. Definitely on par with FLUX Kontext, and I don't know how it compares otherwise.

I mean, I've seen some test demos comparing it to Nano Banana, which we talked about last week. It was on LMArena, I think, when we talked about it, but I went to LMArena and I can't find it there. I don't know if they took it off, and I don't know how people are playing with it.

Nano Banana? 

Yeah, 

it was on LMArena as a model you could test. 

Yeah, I was showing you the website, NanoBanana.org. You told me 

there was a NanoBanana.org. I'm still, I think, I'm not convinced that that is the real Nano Banana. 

Oh, I paid for it, man. Where did my money go? I think 

you're getting FLUXed. I'm gonna guess it's the same thing as the NanoBanana.ai.

It's just another opportunist, like, spun-up website. Wait, 

what? I showed everybody NanoBanana outputs, and I got FLUXed? 

I mean, what, did you find it? Did you find this NanoBanana website output to be better than using FLUX? 

I didn't do an A-B comparison. 

I mean, just in your general experience, I don't know, did it impress

you? I thought it was highly impressive. 

Yeah, I've used FLUX Kontext in the past. I thought it was equivalent. Maybe not. OK, maybe not better. 

Well, this website is saying that it is better than FLUX Kontext. Look, I mean, if this is really a model from Google that is not on their API or anything, then how would this website be using Nano Banana? It looks like every other...

Because 

it's in limited preview, right? You can't access it. From my research, it sounds like Nano Banana is out and in limited preview. 

So it's in limited preview to NanoBanana.org? They're the ones that got access to it? Damn it. I think I got spoofed. I'm curious. Also, if you run NanoBanana.org, you can reach out and let us know if we're totally wrong.

But I'm suspicious of everything now. 

You got more tinfoil hats than I do. 

Maybe I do. I mean, look, if it was truly from Google... And I was hoping we maybe would get some reveals at the Google event on Wednesday around the Pixel camera, but nothing on the image generation model there, I don't think. I didn't rewatch it or anything.

Yeah, so you think the Nano Banana stuff will tie into the Pixel phone? 

No. I mean, I was thinking maybe they would announce something with it, but they didn't. And there was just a phone update. Did you see the whole Jimmy Fallon 

thing? 

I've only seen blurbs about that. I didn't really, like, dig into it. Cringe. Yeah. Was it cringe?

It was pretty cringe. Sorry, 

Jimmy. The only 

commentary I saw was people saying like, I miss Apple's live presentations. But after seeing this, I understand why they are not doing it live 

anymore. Yeah. Yeah. I mean, like, I didn't watch 

it, but kudos to them for doing something live. Yeah. 

Like the whole like late night show is such an outdated format anyway.

Like all those guys, I mean, Stephen Colbert retired or got pushed out. Yeah, it was canceled. Conan has his own thing now. Jimmy Fallon any day now, and then Seth Meyers. 

Conan successfully transitioned to podcasting. 

Right. Yeah. Right. And he had a late night show on YouTube for a while, or not YouTube. TBS.

It was on TBS after that whole Jay Leno fiasco. 

Yes. 

Yeah. 

Yeah. So end of an era. Yeah. I mean, 

I don't really watch 

late night. 

I mean, the only thing is if you see clips or something that pops up on YouTube. Yeah, I think Jimmy Fallon... 

The last one to kind of hold on to it is Saturday Night Live. Like, they're still somehow.

That's different, too. I mean, it's once a week versus late night. 

It's a skit show. 

Yeah, it's a skit show. It's commentary. It's once a week. And they always 

bring in a big celebrity. It plays well for 

clips. But yeah, I mean, late night is four nights a week. And it's like, 

oh, I know. 

I'm not keeping up. Like, if I'm watching something that consistently, it's something else.

Yeah. Like Alien Earth. Super good. Dude, 

So OK, I tried to sign up. I signed back up to Hulu the other day and it was tied to my Disney account. And then I was like, I have a Disney Plus account. I put my email in, it couldn't find me. And now I'm in this weird zone where my login has disappeared from Disney Plus.

And also 

Hulu. Oh, okay. I mean, I think you could probably access it through Disney Plus, because they're sunsetting Hulu eventually and just rolling it all into Disney Plus. Eventually, I should 

get a Disney Plus account again. And then I 

think, I might be wrong, but I think everything that's currently on Hulu, you can currently access on Disney Plus.

If not now, that is the future. They're killing Hulu and just going to move it all into Disney Plus. The screenshots of stuff like the Saw 6 thumbnail with the Disney Plus icon on top were really funny. Right next to Mickey Mouse Clubhouse. So yeah, but I think it makes sense as a long-term play.

Yeah. I can't let you have all the fun, man. I mean, you were ahead of me on The Studio, and then I caught up. Studio's so good. Your recommendation is spot on. I'm going to watch it later. 

I try to just pick the hits. I mean, if you're into Downton Abbey kind of stuff, The Gilded Age is also great. That's going to be a tough sell.

You know, I'm just saying. Yeah. Yeah. One of these days, sure, sure. Awesome show. 

Have you seen The Gentlemen, the show on Netflix? 

Yeah. Better than the movie. Yeah. I love the show. Freaking amazing. I actually never saw the movie first. I watched the show and then I watched the movie, and I'm like, oh man, I'm glad I watched the show first, because the show is way better.

Right. Okay. Yeah. All right. The show is awesome. Good soundtrack too. All right. Let's move on. Moving on. AI, guys. We're turning this into a TV commentary talk show. All right, back to AI stuff. Runway had a bunch of updates. I'd say little bits of updates that collectively are like, oh, they dropped a lot of stuff this week, but nothing like, wow, crazy.

First one, they added voices to Act-Two. So Act-Two, their motion capture system: you record a performance of an actual person and translate it to an AI or just an image character. Before, whatever audio you had in your video would just transfer over. Now you can restyle that audio as well, though only to a limited selection of voices they have.

I feel like it's probably better if they connected it to ElevenLabs or something, which might happen in the future, because we'll talk about it in a second. Yeah, so you can change the voice, from a limited selection of voices. Yeah. 

I think the key here is the timing of the voice and not necessarily the voice itself.

Because you can always go to an audio-to-audio model, like ElevenLabs: put the Runway voice in there and then prompt for, like, I want a deep voice. Right. And then get the right voice out, bring it into Adobe's sound suite, and then add reverb to it or what have you. 

Right. I think the main thing with doing that route is making sure that the timing stays the same, because you want the lip sync.

Yeah. 
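For the audio-to-audio route just described, taking the Act-Two voice track and restyling it while keeping the timing (and therefore the lip sync), here is a rough sketch against the ElevenLabs speech-to-speech REST endpoint. The field names and model ID are assumptions from memory; confirm them in the current ElevenLabs API reference.

```python
# Rough sketch: re-voice an exported Runway Act-Two audio track with ElevenLabs
# speech-to-speech, which preserves timing while swapping the voice.
# Endpoint fields and the model ID are assumptions; check the ElevenLabs docs.
import requests

API_KEY = "YOUR_ELEVENLABS_KEY"
VOICE_ID = "YOUR_TARGET_VOICE_ID"  # the voice you want the performance restyled into

with open("act_two_voice.wav", "rb") as source_audio:
    resp = requests.post(
        f"https://api.elevenlabs.io/v1/speech-to-speech/{VOICE_ID}",
        headers={"xi-api-key": API_KEY},
        files={"audio": source_audio},
        data={"model_id": "eleven_multilingual_sts_v2"},  # assumed model ID
    )

resp.raise_for_status()
with open("act_two_voice_restyled.mp3", "wb") as out:
    out.write(resp.content)
```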

So I think if it comes from Runway... Like, you know, Veo timing is good, but Veo quality is not good. Do you agree? Veo timing is good. When Veo 3 generates audio, their audio, yeah, the lip sync is perfect. Yeah. But then it sounds like AI. Yeah. Yeah. 

There's always like a little weirdness to it. 

Before, you were stuck with whatever audio was recorded on the video. 

Yeah. Sound, especially voice, is so far behind video generation. Or 

you do animation style. The reverse: you record or generate the audio first, then lip sync your performance to it, and then run that through the Runway tool.

Yeah. Old school animation techniques coming back. Okay. And then the other thing with Runway, a couple of updates: the big one is they are now opening the platform to other models. So not just Runway models; they integrated Veo 3 as a model in the platform. Select third-party models will now be available directly within chat mode, allowing you to choose between a more robust set of pipelines to better accommodate your specific needs.

So that's interesting. Because yeah, sometimes I find Runway and Runway Aleph work great for some stuff, not so great for other stuff. And I think it's this kind of ethos of everyone trying to be the one-stop platform, so you have to do less 

model hopping. They're trying to build a walled garden.

Yeah. Runway's got a pretty good interface, too. Yeah, if you 

buy $1,000 worth of credits in Runway, how do you spend it all? They give you more ways to spend it. 

Yeah. Yeah. Without having to jump around to other systems. But that's why I'm also saying maybe that opens up the door for an ElevenLabs integration, because we definitely have the best voice models.

So maybe the... Oh, I'm sure 

Cristobal's looking at ElevenLabs. How could he not? 

Yeah. Goes to Runway. 

That seems to be the most buzzy audio model at the moment. I've heard so many other people talk about it too, not just us on the show. 

No, I think ElevenLabs has been like the standard for any sound-related stuff.

It's been the best voice quality, audio quality, real-time audio. Now they have music, they have sound effects. They've definitely established themselves as the audio AI leader. And the last Runway update: Runway Game Worlds, which I believe we talked about a while ago.

They kind of demoed it as a private beta. It's not an immersive 3D world like we've been covering with World Labs or Genie 3. It is more of a game world, a text-based world generator, more about creating the dynamics of the world of a game. It's about generating the world itself and the characters and the dynamics, but in a text- or image-based environment.

So more of like a game mechanic engine. So anyways, that is now out of beta. You can start messing around and playing with it. I don't 

think I can get my brain around what the offering is. So it's not world generation. 

It is world generation in the sense of 

games and other experiences that need novel mechanics and interfaces.

Yes, this is good. And this is actually what makes Unreal Engine so great. Yeah, go on. Yeah, it's not so much the generation of the world. Yes, Unreal's really good at that, but it's the blueprints. Yeah. It's world mapping. It's logic trees and decision trees, and then having multiple levels, having the scoreboard and player stats, reward systems.

So like more of 

like the scripting phase or the planning phase for the world. The actual game design itself 

is 90% of that and 10% visuals. Yeah. 

So that's what they're building with Game Worlds. I mean, I only imagine that this will eventually translate into videos and something you can move around and explore.

As soon as you have real time image generation, real-time video generation, like this stuff is going to be ready by then. Yeah. And 

then these are the mechanics, and these are like the custom-built, unique experiences that you as a gamer might want to play. And I 

think with the real-time generation, it won't be that the cloud is doing all of the generation and then we're pixel streaming down to our phone.

I think generation will get efficient enough, and the phones will get high performance enough, to where it's doing local inference on the phone. Yeah. And so like, let's say we're going to get the iPhone 17 this year. By the time the iPhone 20 comes out in a couple of years, this stuff is ready. And now you're playing AI games on your phone, but the average Gen Alpha kid won't even know it.

It's like, oh, this is just a cool game. And we're just like, whoa, this thing is running AI. 

Everyone has one of their little Nvidia AI computers. Yes. 

The DGX Spark things. Yeah. I mean, imagine that miniaturized down to a mobile phone. Like, you could take the guts of that and, I'm sure, put it into Android phones.

Yeah, eventually. Yeah, 

for sure. Generating on the fly in the latent space. You know, it's 

funny, when we talk about hardware: I was listening to a proper news article, which I rarely do, I'm usually very opinion-based, but there is just such a massive shortage in silicon right now across the world. It's not just that Nvidia can't make enough GPUs. You know TSMC, the Taiwanese silicon manufacturer? In order for them to make a chip, it's like 31,000 steps.

It's so complicated. 

There's like only one company in one spot of the world that can make all of these chips. 

Yeah. And because the chips have come down to such a small size, I think we're down to like six nanometers for a single piece of wire. That's like a thousand times thinner than a human hair or something like that.

That's crazy. At that level of complexity, there is only one company that can do it. And so, like, we went through the automotive chip shortage during the pandemic, right? So many cars were just sitting because they didn't have chips in them. And now we don't have enough chips for the GPUs, and there are chips missing for physical AI.

Like, in order to build robots, you don't have enough embedded chips to go in them. Because of that, I think we're going to shift to more efficient models out of sheer necessity. 

I mean, it makes sense, even from an energy point or just a resource point. It's like the DeepSeek argument, too: you need to train the big models first to then be able to shrink them down into smaller, more efficient models.

You still need the big ones so you can distill them down. Yeah, you need to have the big training first for that second part. Yeah, 

and the training, I don't think that will ever get more efficient, right? I mean, we're at 20 billion parameters today. We're going to get to 200 billion parameters very soon. And there's no way around more GPUs.

You've got to use more GPUs. But the inference, I think there's a ton of improvement to be had in just the local inference. And we're going to see some really exciting things in the near future. 

Yeah, especially when it's just like regular kind of everyday stuff you need to do. Yeah. You don't need to solve complex physics problems on the go.

Yeah, yeah, right, right. You just want something you could talk to and you know, can like handle things for you. 

Yeah, like I want to make fun of my brother all the time and it takes so long to generate a video of him being fat because he's very fat. 

That is what you want to use your silicon for. Silicon shortage.

I can't make fun of my brother. Here we go. All right. Next update: this is a new integration on fal, but I hadn't heard of this model before and I thought it was kind of interesting. It's called Mirelo SFX, and it is an audio sound effect generator. No prompt. You just give it the video. It looks at the video and then generates what it thinks the background sound should be. Interesting.

Kind of more targeted for synthetic, AI-generated content that doesn't have sound. I tested it with a couple of shots of some people on boats, and it made the wind blowing and the... how good was it? ...the waves. I mean, it was definitely better than nothing. Okay. And good. I would add a couple more elements to it.

But as a baseline foundation, good starting point. 
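As a sketch of how that kind of video-to-sound-effects model gets called through a fal integration, something like the following. The application slug and argument names are assumptions for illustration; look up the actual Mirelo SFX listing on fal before using it.

```python
# Rough sketch: send a video to a video-to-SFX model hosted on fal and get back
# a generated ambience/sound-effects track. The app slug and argument names are
# assumptions; confirm them on the fal model page.
import fal_client

result = fal_client.subscribe(
    "mirelo-ai/sfx",  # assumed application slug for the Mirelo SFX model on fal
    arguments={
        "video_url": "https://example.com/boat_shot.mp4",  # your uploaded clip
    },
)

print(result)  # typically includes a URL to the generated audio track
```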

Okay. 

And like 

Foley. 

Yeah, it was just some background. It was as if your on-camera audio was recording some background sound. Oh, okay. So yeah, I thought it was, you know... Sound effects and sound design for AI-generated stuff is something not really talked about or covered.

Right. 

Definitely. I mean, look, sound is super important. Anytime I watch a lot of the AI content and then turn the sound on, the sound design kind of falls apart or adds to the uncanny valley-ness of the whole thing. If you have really good sound, you can trick the brain or cover up a lot of weird-looking visual things.

100%. Yeah. 

Yeah. I mean, one of the reasons why Star Wars, the original one, still holds up is because of, like, so many of those iconic sounds, right? Those sounds are just etched into our brains; the sound design on it is so good. 

Yeah. Yeah. And you believe it. Yeah. Your brain believes it. Right. That this is real.

Yeah. Even though they're just holding sticks and little models in a black box that they shot. 

Right. 

Yeah, so yeah, I thought this was 

One of the sort of technical marvels that's running under the hood for this, as well as the Runway voices on Act-Two, is a type of model called a VLM. We typically talk about LLMs a lot, large language models, but we don't talk enough about visual language models.

So it works in reverse of image diffusion. You can give it an image, and it'll detect features and objects within it to a very precise degree, and then feed that into an LLM. It's like this, you know. So what would you use that for? So in this case, if you upload a video of a boat on an ocean, that's the system that's detecting the ocean, the boat, the waves, the velocity of it, and all of that.

And then it's telling an LLM, this is the sound that should be generated. And then that's going to an audio model, which generates the sound. 
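A conceptual sketch of that chain, frame to VLM description, description to sound-design brief, brief to audio model, might look like the following. The functions here are hypothetical stand-ins, not how Mirelo, Runway, or anyone else actually implements it.

```python
# Conceptual sketch of a VLM -> LLM -> audio-model chain. The functions are
# hypothetical stand-ins for real model calls; this is not any product's code.

def describe_frame(frame_path: str) -> str:
    """Stand-in VLM call: return a dense description of what's in the frame."""
    return "small boat on open ocean, moderate waves, windy, no dialogue"

def write_sound_brief(description: str) -> str:
    """Stand-in LLM step: turn the visual description into a sound-design brief."""
    return f"ambient audio for: {description}; include wind, lapping waves, hull creaks"

def generate_audio(brief: str, duration_s: float) -> bytes:
    """Stand-in audio-model call: return generated audio bytes for the brief."""
    return b""  # placeholder for a real text-to-audio model

if __name__ == "__main__":
    description = describe_frame("boat_shot_frame.png")
    brief = write_sound_brief(description)
    audio = generate_audio(brief, duration_s=8.0)
    print(description, "->", brief, f"({len(audio)} bytes of audio)")
```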

Okay. Is this also tied into a model being a multimodal model, where you can give it text input or image input? I think 

by now, we can assume that all the models are multimodal.

Yeah, they do pretty good job of- Yeah, like, ChatGPT is 

multimodal. Yeah, think- 

Yeah. All, yeah. Maybe not 

video. I think Gemini can handle video input. I don't know if ChatGPT can, or... I should try that. I don't think it can. I don't know, I haven't seen... Even Gemini, I think it can, but usually if you give it a video, it's not looking at the whole video, it's like one frame a second.

To me, image generation, video generation are like cousins. I mean, so much similarity between the two. I just kind of bucket it as one thing. Audio is another bucket and so on. So you can think of VLMs as the eyes of the AI system. So I'd imagine if you have a physical AI like a robot, the camera inputs go into a VLM and that's how it's detecting a person and a camera and so on.

That's how it'll see the world. It's already doing it. I sent you a video last night. Did you take a look? No, I didn't watch that one. It was robots. You're scared. You're scared. What does that say about me? So it's always people messing with the robots. So this robot's trying to pick up these objects from a box.

And this guy has like a golf club, out of all things. And he's just pulling the box away from the robot. And the robot's pulling it back. They're going to remember this. 

I know. It's so mean. 

All right. These other ones are kind of more just grab bag, quick updates. ElevenLabs music now has an API, so you can plug it into your systems, Comfy systems, whatever.

Amazing. 
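A rough sketch of hitting the new music API over REST is below. The endpoint path and JSON fields are assumptions from memory, so check the current ElevenLabs API reference before wiring it into a Comfy or other pipeline.

```python
# Rough sketch: generate a music cue via the ElevenLabs music API.
# The endpoint path and payload fields are assumptions; verify against the docs.
import requests

resp = requests.post(
    "https://api.elevenlabs.io/v1/music",  # assumed endpoint path
    headers={"xi-api-key": "YOUR_ELEVENLABS_KEY"},
    json={
        "prompt": "tense, minimal synth underscore for a tech news segment",
        "music_length_ms": 30_000,  # assumed field name for duration
    },
)

resp.raise_for_status()
with open("underscore.mp3", "wb") as f:
    f.write(resp.content)
```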

So you can generate music however you want through it. Handy to do that. And then the last one that I saw, this one is kind of a quick update, but I actually think this would be pretty handy: Google Gemini will now read your Google Docs out loud. This is actually kind of useful because, you know, I'm driving a lot over here.

so, yeah. 

So you know, sometimes there's a long article or some document and I'm like, I don't have time to read this, but I'm gonna be in the car. ElevenLabs actually has a separate dedicated app where you can give it documents and it will read them in an ElevenLabs voice. And it sounds really good.

So I'll use that for documents sometimes. But also, if I have a Google Doc and I'm just like, hey, I'm busy doing something else, just read it out loud, having a pretty good Gemini voice realistically read a long document to me while I'm doing something else is actually a pretty useful feature.

Yeah, 

that is. Do you know if this works in the mobile Google Docs app? I don't 

know. 

If it does, then that would be ready for the car. Yeah. I mean, 

I imagine. I mean, everything's so mobile focused. I imagine if it's not in this release, it's probably a couple of releases in the future. Yeah. 

So for me, I'm a sucker for convenience.

Right. So like, if...

Yeah, that's NotebookLM. 

NotebookLM does that. Yeah, they do that. 

Yeah, that one is, yeah, a little different. That one will actually take the data and then synthesize it into a two-person fake podcast explaining the concept. And they launched a video version semi-recently, but the video version that I tested was more like it built out a fake PowerPoint presentation.

It wasn't that interesting because it was like, I don't really want to watch a PowerPoint presentation of this document I gave you. Like, just 

be honest, when you're sitting on the 405, you might... there's nothing else to do. Yeah, 

for those of us that don't have autopilot on our car and actually have to pay attention, it's hard to watch a video.

Oh, yeah, sorry. So Tesla FSD 14 is dropping. It's supposed to be a big game changer in autonomous driving. What will that do? Elon said quote unquote, it's sentient. It's supposed to be a big game changer. I'll let you know how it goes. All right, yeah. I 

told you this earlier, a Waymo almost sideswiped me today.

You live in Waymo city, though. 

Yeah, Waymos are everywhere. But yeah, we don't have the Tesla taxi thing yet. Robotaxi? 

Yeah, that's 

not in LA yet. That's in San Francisco Bay Area. 

Yeah. Oh, we don't get anything out here. 

I would try that purely for the price point, because the Waymos are not cheap. They're on par with regular ride sharing.

Well, I mean, they're expensive. The LIDAR systems, the software research, the AI models, somebody's got to pay for all that. I 

think it's more demand-based. I think they're just... 

Oh, they're surge pricing. 

Yeah, and I've launched the app and it's been like, prices are higher than usual because demand is high. I 

heard you book it through the Uber app, or is there a separate one?

you 

can. There's a Waymo app. There's a dedicated Waymo 

app. Okay. 

Yeah. 

All right. I'll have to try one of these days. All 

right. I think we have enough filler in this episode. Yeah, sorry, folks. It was 

a light week. We were just kind of 

riffing. It's the end of the... Hey, man. 

We gave you Nano Banana first before anybody.

We did 

cover Nano Banana first. Maybe Qwen-Image-Edit will be on your radar, too. Yeah. Until we can... until Nano Banana, the real Nano... Will the real Nano Banana please stand up? 

All right. That was good. That was real good. 

We'll end it there. Links to everything we talked about are at denoisepodcast.com.

Shout out to olalee92 for leaving us a wonderful comment on Spotify. Thank you for your support. 

Thanks, everyone. We'll catch you in the next episode.
