Denoised
When it comes to AI and the film industry, noise is everywhere. We cut through it.
Denoised is your twice-weekly deep dive into the most interesting and relevant topics in media, entertainment, and creative technology.
Hosted by Addy Ghani (Media Industry Analyst) and Joey Daoud (media producer and founder of VP Land), this podcast unpacks the latest trends shaping the industry: Generative AI, Virtual Production, hardware and software innovations, cloud workflows, filmmaking, TV, and Hollywood industry news.
Each episode delivers a fast-paced, no-BS breakdown of the biggest developments, featuring insightful analysis, under-the-radar insights, and practical takeaways for filmmakers, content creators, and M&E professionals. Whether you’re pushing pixels in post, managing a production pipeline, or just trying to keep up with the future of storytelling, Denoised keeps you ahead of the curve.
New episodes every Tuesday and Friday.
Listen in, stay informed, and cut through the noise.
Produced by VP Land. Get the free VP Land newsletter in your inbox to stay on top of the latest news and tools in creative technology: https://ntm.link/l45xWQ
Denoised
Nano Banana Pro, Meta's New SAM 3D, and World Labs!
Joey and Addy dive into Google's Nano Banana Pro, testing its enhanced 4K resolution outputs, text accuracy, and superior relighting capabilities compared to other models. They also explore Meta's SAM 3D for instant 3D model creation and World Labs' Marble launch for virtual production environments.
--
The views and opinions expressed in this podcast are the personal views of the hosts and do not necessarily reflect the views or positions of their respective employers or organizations. This show is independently produced by VP Land without the use of any outside company resources, confidential information, or affiliations.
Welcome back, it's time for our weekly roundup. A lot of Nano Banana updates to talk about. Joey, you're back in town in time for the AI roundup. Let's do it. Hello, Addy. I'm back. I'm back, in-person high five. Dude, you did it. We can physically interact with each other. LA is not that big of a town after all. Mm. Traffic would beg to differ, as I spent 30 minutes waiting to make a left turn. All right, so, big news this week. Mm-hmm. Nano Banana 2, but it's technically called Nano Banana Pro. Well, technically called Gemini 3 Pro Image or something. But everyone knows it by Nano Banana and they're calling it Pro. Interesting naming convention, which we'll get into. Alright, so biggest updates? Well, for one, you can now do 4K resolution outputs. Nice. The only other model that I know of on that par is Seedream. Seedream. 4K. 4K. Mm-hmm. Which has been great, just having that extra resolution. I've been testing those two side by side. Yeah. You notice anything off the bat? I mean, look, viewers, let's just face it. We got the best image generation model to date, and we're gonna talk about it today. I'm super excited. Yeah. I mean, the biggest thing: crazy diagrams and text. It's on the next level. And I've seen cases of people giving it, like, research papers. Yeah. And being like, make a diagram explaining this. No way. And it just makes a diagram. So it has a reasoning model built into it. Yeah. I mean, it's built in, 'cause the other update from Google this week was that Gemini 3 came out. So under the hood, you know, these are multimodal models. It's not just a denoising image model. It is understanding the world and turning it into an image. Yeah. And it's got probably a lot of LLM engineering built into it, and it's plugging directly into the logic from Gemini. Yeah. I mean, I'll read right here from their blog post: with Gemini 3's advanced reasoning, Nano Banana Pro doesn't just create beautiful images.
It also helps you create more helpful content. Yeah. So yeah, it's using it under the hood to understand what you want. I saw a crazy example where somebody took a Gundam, you know, the action figure toys. Yeah. And said, uh, give me this as, um, like an assembly kit, like a toy thing on those little sprockets. Oh. And it did, it just disassembled it into each part and component. Yeah. Accurate diagrams. Like, make a flow chart. Like, make a recipe. It's images, text. Text is rendered super crisp and clear. Mm-hmm. It's accurate. It's not gibberish. It's so versatile. Yeah. However, I was just testing what I know and love, which is film and TV applications, uh-huh, and even then it outshines Nano Banana 1 significantly. Oh yeah. Like, what things did you see? Yeah. So when you were testing it... do you want to get into the... yeah, lemme run through the other things first. So let's see. Uh, text, I mean, yeah, text is a big update. Also character consistency. Mm-hmm. Has improved, and we'll probably talk about that more. Yeah. The, uh, film application: elements. So you could do elements before, where you give it a couple images and it kind of makes, yeah, whatever you want. Now you could give it up to 14 images, and it says now with Pro you can blend more elements: use up to 14 images and maintain the consistency and resemblance of up to five people. Mm-hmm. So five characters it could handle and understand and keep consistent. Also, the fact that they have that number and are clear, like, it's five people, means it should perform well with consistency with five people. Yeah. And I would say that's an understatement. Some of the examples that I've seen online, you're looking at 10-plus people. So imagine you and your family of 10. You put everybody's photos in there and say, you know, put Addy in the middle, this person on the left, this person on... it'll just make it.
That's crazy. And then obviously, like, reframing and spatial understanding. And that's where I tested it too, of just giving it an image and being like, reframe here, reframe there. Which Nano Banana original was good at before; it's now even better. Absolutely. And more consistent and more accurate. And I've got some tests too that I'll run through. Oh, I'd love to see your tests. Yeah, I think spatial understanding of 3D environments, without being a 3D model, or I don't actually know how it's architected under the hood. Yeah. But it has almost good enough spatial understanding to be world accurate, like as far as units go, maybe. Yeah. And I mean, as we always talk about, these are all leading to world models, to physical AI that understands the world and can accurately reproduce and recreate and behave in it. I'm gonna say your line today. Okay? Yeah. All things lead to robots. All roads lead to robots. So you did some tests. Mm-hmm. And you did a Whiplash test. So what's this test we're looking at? Yeah, so I just picked one of my favorite movies of all time, Whiplash. And you know J.K. Simmons? His most well-known role? I would say so, yeah. Uh, I took the iconic frame of J.K. Simmons at the opera house, and then I wanted to do a background swap. It sounds really easy to do, and most models can do it, but really it's in the fine details and the subtleness of how well it does it. Mm-hmm. So my prompt specifically asks for relighting in a cloudy environment. Mm-hmm. So on the top right, I have the Seedream example. On the bottom left, I have Nano Banana 1, and then bottom right I have Nano Banana Pro. Now, J.K. Simmons, like me, is a bald guy. And as bald guys, we usually have, uh, a hotspot, uh, where the light hits us, right? Uh-huh. We're like a giant chrome ball, if you will, an HDR map. So when you light somebody from a direct light source versus an indirect light source, that hotspot should go away. Right?
And that doesn't happen in Nano Banana 1 or Seedream. Mm-hmm. However, in the Pro, as you can see here, he's being lit by a very diffused light source, AKA clouds. Yeah. Yeah. I mean, look at this, the Seedream: it got the environment, but the lighting looks pretty similar to the lighting from the original, where he's backlit, very warm, harsh lighting. The glare is on his head. Yeah. Now it's a little softer, but also his face was warped a little bit. So it's also not as consistent. Mm-hmm. And then, yeah, the Nano Banana Pro approach: nailed it. Just nailed it. Like, his pose, his face is exactly the same, but the glare on his head is gone. Yeah. The lighting feels like it's an overcast cloud, uh, lighting. Yeah. And the fact that it's a natively 2K model and can output up to 4K, I think that just intrinsically helps with small details, and it sees more, it has more resolution. To our eyes, we just pick up on that stuff so much more, and to us it feels more authentic. Yeah. Yeah. This is a good side-by-side test. Okay. All right. Next one. I got another image. All right. So I was trying to invoke a little bit of the cantina from, uh, Star Wars. Star Wars, okay. Yeah. So, like, you know, uh, like a cowboy saloon, space style, and I was like, okay, what would it do with all the reflective materials in there, like the cymbal? How about the bandmates in the back? And I just wanna see how this will perform a style change or style transfer. Make this entire cantina a space sci-fi opera without changing the two actors in the foreground. Alter the instruments to be more sci-fi, but keep them in the same place. Yeah. Do not alter actor poses or camera perspective. Only relight them. Cool. This is moody and dour and dark. Yep. So I'm gonna just skip to: Nano Banana 1 did a decent enough job, but then the cymbal has this weird electricity across it. Yeah.
And the stuff in the foreground doesn't really match the background. Like, the lighting kind of matches, but it's not really relit. Whereas with Nano Banana Pro, it took a whole different approach to lighting. It completely relit the whole thing. Everybody has this purple tinge to them. The reflections on the cymbals are so accurate. It even put a little bit of an outfit change on J.K. Simmons, which I didn't ask for, but yeah, it gave him a little robo flair. Yeah. Yeah. And, like, a little padded, like a Judge Dredd-esque outfit. Yeah. He is the Judge Dredd of, uh, band directors. Yeah. And again, it's just so much more realistic. Like, it doesn't have the neon accents that Seedream has up top. Yeah. You know? Synth pad too. And 'cause you said change the instruments, make them more futuristic, right? Yeah. Yeah. So again, very impressive tests. Yeah. And then let's go to the last one. Okay, so, uh, this is, uh, Miles Teller's love interest. I forget her name. Look, I just wanted to now try the visual prompting here. So I used Freepik, and you could see my examples. I wanted her to be in a Starbucks, in a barista outfit, with an espresso machine in front of her. But I didn't wanna change the pose. I didn't wanna change the, um, sort of the moody pose that she has. I wanted to keep the original intent. And guess what? Nano Banana 1 completely changed it. Uh, so she is, uh, in a different camera perspective, different framing, and slightly different face. Mm-hmm. She's not behind a bar. Yeah. Her face changed a bit. Right. She looks a bit more serious. Mm-hmm. And, uh, Nano Banana Pro kept that ambiguous mood to her, and you know, it's very sort of, what am I feeling? It's a little complex. And yet everything around her changed. So realistically, like, you have the syrup bottles in the back, the espresso machine is spot on.
I mean, I'm trying to look at the menu too, 'cause, like, also the text on the menu: I'm not even sure if I can't read it just because the image you saved, mm-hmm, is lower resolution, 'cause it does not look like the normal gibberish that you would get from, like, AI outputs. It looks like an actual font. It's just, I can't read it 'cause it's, like, out of focus. Yeah. And I didn't give it, uh, Starbucks reference images. I just... no, you just said Starbucks, right? Yeah. The logo, the logo is accurate. The other thing here, uh, that you'll notice is she's also relit correctly. I think that relighting stuff just impresses me more than anything else, just 'cause that's such a hard problem to solve in the world of VFX. Yeah. Yeah. I mean, it did keep the same window motivation, 'cause we see the same, mm-hmm, light on the espresso machine, light on her face. That feels consistent. Yeah. I mean, like, six months ago we were looking at Nano Banana 1 outputs and we were thinking, boy, that is so realistic. And now you have the Pro output next to the original and you're like, that's so much better. Mm-hmm. Yeah. I will say, you know, and I've seen some other examples of this, there is, like, a little, like, 2% difference on her. 'Cause here it's like her lip is, like, a little agape, and you know, it looks a little bit more longing. Then in the output it closed, her mouth closed. Yeah. Made it a little bit more serious. Right. So it is reinterpreting her and regenerating her. Mm-hmm. But the fact that it could get to, like, 98% close. Yeah, I know, right? Let's talk about the little details here, right? Yeah. This is a good test. One of the issues I've had with Nano Banana, or just any model that has spatial understanding, is if I wanna get a reverse angle. So, like, I have a shot of something. Mm-hmm. And I'm trying to storyboard or just something else and be like, hey, gimme the reverse angle of this. And I'm asking for the camera...
I'm using film lingo, but I want the camera to do a 180. Yep. And show me the other side of the scene. Okay. And I have a lot of trouble, I've had a lot of trouble with that, off and on. Sometimes it gets it, sometimes it doesn't. And then this one was the original image. This one was one where the test still did not work well. Um, I asked for, I wanted the reverse angle, like over the dad's shoulder, so we're looking at the children's book. Sure. And in this case, it kind of just gave me the same: moved the dad, same angle, kept the same angle. So this one didn't work as well. And maybe if I tweaked the, um, the prompt, it would work better. Or if you switched to visual prompting, perhaps, like, place camera here, uh, perhaps too. And also Freepik added, introduced a new camera, a 3D camera feature. Yeah. Where it sort of turns the image into a block and you could reposition your cube. Saw that. And I think, again, I'm guessing it uses that as an input. Yeah. I'm guessing that's an extension of Nano Banana's API, that they just... I dunno, it's API. Maybe it's one of those things they realized: like, if you map an image to a, uh, a cube, yeah, and rotate the cube, and you give that as an input, it, oh, understands it. That's smart. Maybe they figured that out. Yeah, sure. Uh, my theory. Okay. We can get Joaquin back on. Joaquin's like, you're so wrong once again. So this one, I said, show me a reverse angle over the shoulder looking at the line of patrons at the movie theater. And it got pretty close, but I mean, it didn't gimme over the shoulder in the sense of, I was thinking like a dirty over-the-shoulder, so that she would be partly in frame, but it flipped it. Her shirt, that same pattern. The hair is still in a ponytail. It still has the same little purple markings on her, uh, shoulder. And it looks like the opposite image of the movie theater.
I'm just impressed by the sheer number of people it generated, because as you exceed, like, three or four people, AI tends to just mess up the rest of them. Yeah, they're all relatively sharp. I did say looking at the line of patrons in the movie theater, and there's a big crowd of people. That's definitely a line. So then I modified the prompt a little bit and I said, you know, over the shoulder, but the woman's shoulder should be right of frame, and we're looking over her left shoulder, and it's, like, kind of partly in frame, and then it nailed it. Nailed it. Yeah. Yeah. That's nice. Yeah. So this has been the most successful model for that. The movie posters, like, who asked for that? I mean, just, whatever. That's crazy, isn't it? It's like a big vacuum of its, uh, world understanding. I'm curious what it's like, what movies are those? Kinda looks like The Hulk. I think we're just looking into the Matrix at this point. Like, it's a simulation. These are just movies that play in the liminal space. Yeah, in the theater. You're never gonna get stuck there. You're gonna just keep watching them in loops. I'm sure AMC would love to see a movie theater that's this crowded. Oh, ouch. Uh, yeah. After they did the Nicole Kidman thing, I don't know if that'll happen again. Yeah, they should all be protesting that they want the full 30-second Nicole Kidman spot back. Yeah. So that was my quick test with it, and you need to have a pretty specific prompt of what you want, but that's been the most successful test out of any of the models to be like, gimme reverse angles of something in space. Yeah. And that, what we talked about, is because it naturally has a really good spatial understanding of spaces. Um, what's the Google world model that we covered a few months back? Oh, uh, Genie 3. Genie. Yeah. I'm wondering if there's some Genie guts in there. In the bottle. In the bottle, yeah. I don't know. Yeah, good question.
Yeah, I mean, I imagine that, you know, these are all eventually heading towards just being one thing. A super world model that can create anything and understands everything. I mean, I think architecture-wise, it'll still be a separate block, like the whole diffusion model thing is a separate block from the LLM, but in how it's, uh, API'd and interacts with the rest of the world, it'll just be one giant black box. Gemini 3, or Gemini 5 by the time it gets there, will have a bit of Genie in there, a bit of Nano Banana in there, and a bit of whatever in there. And this will also just be in other products. Like, I mean, we know about Nano Banana and we talk about it and stuff, but for most regular Google users, these are built in, like the now-upgraded NotebookLM. Yes. So if anyone used that, Gemini 3 is powering it, and Nano Banana images, right? They wouldn't know. Under the hood, they don't care, you know? Yeah. If they're using Google Slides and they wanna make an image, yeah, it'll be like, wow, these images look way better or something. Mm-hmm. You know? Now Nano Banana is powering it, but it's built into the suite, so you wouldn't know. This is that paper diagram, where, uh, this person copied the text from a paper and then, my God, just told it to make a diagram, and it made a whiteboard. It even looks like a real whiteboard diagram of it. That's crazy. Yeah. Uh, these are some others. This was a roundup from Google: it one-shotted an entire detailed menu without a single spelling mistake for a fictional Nano Banana restaurant, with little illustrations. Kind of looks like the Islands menu. I think it could be a little decorative and fun. Yeah. Uh, scientific diagrams. This one was fun: a flow chart for how to toast bread, make it as wacky, over-the-top, and complicated as possible. Oh, love it. It just looks, yeah. Yeah. Like, who would have time to do this in real life? Yeah, it's like you could take your jokes and memes to the next level.
Amazing. Uh, you know, uh, Joey, I was just thinking about, like, the AI race. Yeah. And all the trillion-dollar companies, and OpenAI is just kind of falling behind at the moment. Sora 2 came out like a month ago, and you're, like, sleeping on that? Step it up. Yeah, I'd be curious. I mean, 'cause, um, yeah, for a while ChatGPT Image, before Nano Banana, they were, like, really good with the image generator. The image generator. Yeah. With world understanding, being able to make very specific modifications and reference insertions and stuff like that. Yeah. Yeah. And it's still good. It just has that AI sheen, very synthetic look. Yeah. And, you know, the stuff out of Nano Banana is just looking photorealistic. I heard that, uh, when you upload a photo, like if I wanted to put Joey in a ChatGPT generation and I take a photo of you and upload it in there, it'll intentionally not generate you a hundred percent, just to prevent deepfake stuff. Yeah. So it's, like, just naturally dialing it down. Yeah. And I don't know what the safety features or limitations are with Nano Banana Pro. Yeah. Because I have seen people say, or I mean, they've made images, you know, of Trump, Elon Musk. Oh, celebrities. And celebrities, yeah. That look very realistic. Obviously, it's not really them. Yeah, I don't know what the limit is of censorship of, yeah, what you can, um, make within Nano Banana Pro. I mean, a company as safe as Google, like, they've obviously navigated through a lot of legal hurdles over AI over the last few years. I wouldn't think that they would make a bad move. Like, it's a very calculated thing that they're doing. Yeah. Yeah. They're not like a startup that's like, yeah, we need to, like, build up a... yeah. They're not a Higgsfield. Right. Yeah. Yeah. We need to be edgy to make a name for ourselves. Right. Yeah. The other thing too, and I haven't really seen anyone post about this, is the pricing.
Mm-hmm. So there is a bit of different pricing as of right now. So regular Nano Banana, you make an image, it is about 4 cents an image. $0.039. Yeah. Nano Banana Pro at 4K: 30 cents an image. Yes. So about 10x. I saw my credits just go. Yeah. Yeah. Well, also Freepik and a lot of the other AI aggregators, they're all kind of running unlimited generations with Nano Banana Pro for the week. Okay. So you could use it there. Freepik's unlimited model has, um, added a lot of asterisks. Yes. Yes. We talked about this a couple episodes back. Um, even more so now, I think. Oh, and I haven't dug into it, but I think Nano Banana they cut down on too. Okay. Um, yeah, they're, I'm sure, trimming the fat. Yeah, sure. Some people were taking advantage of that, especially if you have something like Manus, like an agentic browser or something. Yeah. Like, oh yeah, yeah, keep churning away. So I'm sure there was abuse of that. But for now, Nano Banana Pro has been at unlimited for a week, except also you can only generate one at a time. Yes. Because, like, in Freepik, I like to crank it up to, like, four, eight. Yeah. Like four, eight, just gimme all the options. Mm-hmm. But with that one, they're just like, no, just one. Do one and wait. Uh, so yeah, this is cool. The, um, you know, I feel like it's on par with, probably better at some things than, Seedream. But the one thing that Seedream does that I still haven't seen anyone else do is that batch generation in the same, in the same space. Noise seed, yeah. Yeah. I'm wondering if they'll eventually do something like that too, for if you're like, I need to create a couple of images out of the same realm. Mm-hmm. And get even more consistency. Or they might say, like, what's the point? 'Cause, like, our reference images are super accurate. Just keep the reference image consistent and the outputs will be pretty consistent. Mm-hmm.
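The per-image math quoted here is worth making concrete. A quick back-of-the-envelope sketch, assuming the prices as stated in the episode ($0.039 per standard Nano Banana image, $0.30 per Nano Banana Pro 4K image; check current API pricing before relying on these numbers):

```python
# Prices as quoted in the episode (assumptions, not official pricing).
NANO_BANANA = 0.039        # USD per image, standard
NANO_BANANA_PRO_4K = 0.30  # USD per image, Pro at 4K

def batch_cost(images: int, price_per_image: float) -> float:
    """Total cost in USD for a batch of generations, rounded to cents."""
    return round(images * price_per_image, 2)

if __name__ == "__main__":
    for n in (1, 100, 1000):
        print(f"{n:>5} images: standard ${batch_cost(n, NANO_BANANA):>7.2f}"
              f" | Pro 4K ${batch_cost(n, NANO_BANANA_PRO_4K):>7.2f}")
    # The "about 10x" in the conversation is closer to 7.7x at these prices.
    print(f"ratio: {NANO_BANANA_PRO_4K / NANO_BANANA:.1f}x")
```

At these assumed rates, a thousand 4K Pro generations is $300 versus $39 for standard, which is why aggregator credits drain noticeably faster on Pro.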
When I was looking at your movie theater example, the reverse shots, I was thinking the same thing. I was like, yes, it did reframe, but the people are completely different. So if it was the same seed and the same prompt, potentially when you go from over the shoulder to the straight-on, it's the same number of people. Oh, well, in their defense, it was a re-prompt from the original image. I didn't take this image and modify it and say reframe. Oh, gotcha. So your source image was the same and you re-rolled it? Yeah, yeah, yeah. I just changed the prompt. Same reference image. Mm-hmm. And ran it twice. Sure. So, okay. In their defense, uh, I did not try to reframe this image. All righty. Yeah. Okay. Next up. New paper? Model? No, it's a full model, 'cause you could use it now. Uh, from Meta, uh, SAM 3. This one I've been seeing going around a lot. I like the name, very easy to remember. I know it stands for something. Ah, don't ruin it. Frame.io. Okay. Anyways, SAM: give it an image. Mm-hmm. It will, uh, extract objects or people, there's two different models for that. Mm-hmm. And turn it into a full 3D model that apparently looks very, very, very good. Insane. Yeah, insane. Yeah. I mean, you know, just like a year or two ago, we were lucky to get image to a single 3D object. Done. Like, if you have, uh, like this rollercoaster thing, each of the cars would be, like, a single image, and then you would put it all together in a 3D environment, uh-huh, yourself. And now it's giving you all of that in one shot. So then you export it into Blender or Unreal or whatnot, and you're off to the races. Yeah. Look, I'm just looking at this picnic thing, this food image, and, like, every individual piece of food turned into an object that's 3D. Yeah, this is, um, I think one of the last frontiers of generative AI is gonna be 3D.
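On the noise-seed point from the reverse-shot discussion just above: the reason a shared seed gives batch consistency is that the seed fully determines the initial noise the sampler starts from, so the same seed plus the same prompt reproduces the same output. A toy sketch of that property (a stand-in generator, not any vendor's actual API):

```python
import random

def toy_generate(prompt: str, seed: int, n: int = 4) -> list[float]:
    """Stand-in for a diffusion sampler: the (seed, prompt) pair
    deterministically seeds the noise source, so identical inputs
    always reproduce the same 'image' (here, a list of floats)."""
    # String seeding is deterministic in CPython (hashed via SHA-512).
    rng = random.Random(f"{seed}:{prompt}")
    return [round(rng.random(), 4) for _ in range(n)]

a = toy_generate("cantina, sci-fi relight", seed=42)
b = toy_generate("cantina, sci-fi relight", seed=42)
c = toy_generate("cantina, sci-fi relight", seed=7)

print(a == b)  # same seed + same prompt: identical output
print(a == c)  # different seed: different noise trajectory
```

Real image APIs expose the same idea as a seed parameter; hold it fixed along with the reference image and prompt to re-roll more consistent variants.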
Like, everything that we do in 3D today, right, not just the modeling and the texturing, but also rigging, um, optimizing meshes. Mm-hmm. It looks like the people are also rigged too. Yeah. Shoot. Yeah, it is. It is placing a basic skeletal rig in there. Mm-hmm. So I take that back. I was gonna say those things are coming. They're here. They're here. I mean, this is really impressive and cool from Meta, but it also makes sense for them since, you know, they're still pushing the metaverse a hundred percent, and this is like, oh hey, how can we make things more 3D faster? Yeah. Give it existing 2D images and we turn them into 3D objects. One of the biggest hurdles to the metaverse, the 3D world, yeah, yeah, is just getting the world populated. Yeah. And if you're gonna give the users the ability to make their own worlds, you can't expect those users to be 3D modelers in Blender. Like, they're not gonna, yeah, you're gonna wait forever for that stuff to build out. Exactly. Or you're gonna have to do something lo-fi, like Roblox or something. Well, even with Roblox: Roblox has Roblox Studio, which is actually, I tried using it, it's pretty cumbersome to put a game together. Really? Yeah. You can't be an average Joe and do it. Like, you gotta be pretty, like, on a, yeah, Blender-ish level, five out of 10 in Blender. Okay. You know, you gotta understand meshes and rigs and stuff. Okay. So, like, you gotta wanna do it. Yeah. Yeah. Right. And Fortnite, uh, Fortnite Islands, which is the metaverse side of Fortnite where you can build your own islands, that's like a level 10 out of 10. Like, you really need to be a game level designer. And so do a lot of production companies use, like, a special version of Unreal to do that, or... Yes, they do have UEFN, uh-huh, Unreal Editor for Fortnite. Um, which is basically, like, um, gosh, they're gonna hate me for saying this.
It's a stripped-down version of the engine, because a lot of the core engine features are not supported in Fortnite. Because the whole entire thing has to get, like, flattened down to a few gigabytes for it to publish. Okay. So, like, you can't bring the full photorealism of Unreal in there, like 8K textures. I mean, you can, but then you're gonna go through that limit real quick. Mm-hmm. This is the complete antithesis of Roblox and Fortnite. It's like, if I were to build a world in Horizon Worlds, uh, Meta's metaverse OS, then I'm just gonna mess around with some generated photos, some generated people, put it into SAM 3D, it's gonna come out rigged, and then I just dial in the game logic with Llama or whatever other engine they have. Yeah. Or, I mean, yeah, Llama's been a bit slow, but yeah, imagine, like, you know, taking this blended with something like Google's Genie, yeah, and Gemini. Right. Uh, 'cause that was the other thing with Gemini 3: a big improvement in coding, vibe coding. Mm-hmm. All that stuff. Building out full interfaces. Yeah. Uh, but in a gaming world where it's like, oh, if you had this SAM 3, it builds out all the objects in your world, and then something like Gemini, hey, I wanted this game to exist. Mm-hmm. And then it builds all the interaction with your 3D objects or something crazy. Vibe 3D gaming. Vibe game. Vibe gamer. Yeah. Yeah. And then last one: Marble from World Labs just launched. So World Labs has sort of been in beta papers for a bit, and we've talked about it, yeah, I've definitely had videos on the channel. We've talked about 'em before. But basically, you could give it an image or a text prompt and it makes a very small navigable 3D world, but enough where you can move around a bit, and all of your objects are there. Mm-hmm. If you move around too far, it kind of falls apart. Mm-hmm. But now they've actually released the full product. It's called Marble. Mm-hmm.
And a lot of, I mean, all roads lead to robots and gaming, obviously, you know, understanding the world in that case. But, you know, for our sense, uh, their launch video was like a virtual production showcase. So, like, they generated the worlds on a wall, had actors interacting with them. This is what we've talked about, of, like, yes, you know, you don't have to have a huge VAD budget. Yeah. You could generate your world and get enough where you could have reverse angles, some parallax, some camera movement. Yeah. And generate your scene there. What a good use case to go after to showcase. It's a sexy one, with the volume and everything. Yeah. And on the wall you could actually have people there, you could have interactive lighting, um, you could export the scenes you generate as Gaussian splats. Yep. Or as, uh, rigged models, um, you know, bring it into Unreal. So a lot of use cases, a lot of applications. Mm-hmm. Um, and so it's cool that it's, you know, finally out as a public product. How is the, uh, resolution, or sort of the, uh, the poly count, if you will? I mean, yeah, SAM 3D, from what I just saw, looked pretty good. Yeah. And I don't know if this is maybe as good as that. I mean, it's tough to tell too, 'cause some of these demos are more in the game world. Yeah. They're stylized, so they look like... and there is also an editor too. You can edit the things it generates, so that's cool. You can give it an input image. You can do text to 3D. I mean, if you're building, let's say, like, Grand Theft Auto for the metaverse, right? And you just need miles and miles of city, city blocks, restaurants, arcades, whatnot, what better way to do it than just let an AI agent go at it? Yeah. I mean, I feel like that's gonna be kind of like the Genie thing. Yeah. Um, you know, this feels like an extended use case of the House of David breakdown that we had. Oh, yeah. You know, where they were talking about...
It was a battle scene, it was desert, and they were just sort of generating 2D plates on the fly. Right. On a wall and shooting with that. Right. This is like the next step after that. Yeah. You know, something where you could actually move around a little bit. Interactive, a little bit interactive. You have some camera, some parallax, a more consistent environment, but you're drastically cutting your VAD time down. I mean, that's the whole promise of Unreal: you get the game interactivity with the photorealism needed for cinema. Mm-hmm. Yeah. But it just takes a lot of time to build in Unreal. And this is, you know, it depends on the project. It depends what you're doing. It depends on the project. Sure. But if this makes, you know, more locations accessible for smaller-budget productions, yeah, great option. Exactly. Like, uh, you wanna pull that Unreal lever when it's absolutely necessary to spend a hundred thousand dollars, yeah, on that scene. Yeah. But if it's a shot five, 10 seconds long, mm-hmm, you can get away with this. Or a vertical short. Yes. Yeah. All right. Good place to wrap it up. Awesome. Thanks. For anything you want us to talk about: denopodcast.com. All right, y'all. So, uh, thank you again for your support on Spotify, Apple Podcasts, and YouTube. We have some new commenters. I hope you guys don't mind if we shout you out. We just love all of the comments, and some of them are feedback, which is great too. We need it. Uh, we have Ms. Sharp 16, Peter King Day 7579, and Misha Belo. Thank you for your comments. Hope to see you on the next video. Yeah, you need to stop talking over me. I know. I know. I'm so, I'm so sorry. Yeah, so it's not Addy's fault. The Riverside delay sometimes happens, but now we're here in person, so we shouldn't have as much crosstalk. It is a little bit my fault. For some reason, my internet sucks.
That is, yeah, it is an internet service provider issue. I live in a part of LA where they don't have fiber optic. It's just too old school. Gotta get Starlink or just move houses so we don't have latency. Yeah. All right. Thanks, everyone. We'll catch you in the next episode.