
Denoised
When it comes to AI and the film industry, noise is everywhere. We cut through it.
Denoised is your twice-weekly deep dive into the most interesting and relevant topics in media, entertainment, and creative technology.
Hosted by Addy Ghani (Media Industry Analyst) and Joey Daoud (media producer and founder of VP Land), this podcast unpacks the latest trends shaping the industry—from generative AI and virtual production to hardware and software innovations, cloud workflows, filmmaking, TV, and Hollywood industry news.
Each episode delivers a fast-paced, no-BS breakdown of the biggest developments, featuring sharp analysis, under-the-radar insights, and practical takeaways for filmmakers, content creators, and M&E professionals. Whether you're pushing pixels in post, managing a production pipeline, or just trying to keep up with the future of storytelling, Denoised keeps you ahead of the curve.
New episodes every Tuesday and Friday.
Listen in, stay informed, and cut through the noise.
Produced by VP Land. Get the free VP Land newsletter in your inbox to stay on top of the latest news and tools in creative technology: https://ntm.link/l45xWQ
Denoised
From VFX to Speech: Pika, Flora, Scribe, Wan2.1 and More AI Updates for Filmmakers
Addy and Joey break down the latest AI tools reshaping creative workflows. From Pika Labs' impressive video manipulation features to Flora's intelligent canvas connecting multiple AI models, they explore how these tools are changing VFX and content creation.
Plus, insights on Midjourney mood boards, Alibaba's open-source video model, and ElevenLabs' new speech-to-text technology. Subscribe for more technical insights on the cutting edge of media production.
In this episode of Denoised, we're going to talk about a bunch of new AI tool updates that should be on your radar. We'll talk about Pikadditions from Pika Labs, Flora, and Alibaba's new model. Let's get into it.

All right, welcome back to the Denoised podcast. I'm Joey Daoud. I'm Addy. Hey man, good to see you again. Yeah. So when this comes out, we should be talking about, "Hey, did you see the Oscars?" Yeah, did you see the Oscars? So, due to our schedules, we're pre-recording this the week before the Oscars, so the Oscars still haven't happened as we're recording. If you're wondering why we're not talking about the Oscars, or about some other crazy AI update that happened in the last five days, that's why. It's still February for us. Rest assured, we'll definitely be talking about the Oscars the next time we record, because I'm sure there will be some upsets. Yeah, and some recaps to talk about.

But in this episode we're doing a grab bag of interesting AI tool updates that have been on our radar and that we want to dive into. If you're not aware of them, I think they're really interesting and useful things to at least know about, and it's kind of crazy how fast and how good some of this stuff is getting. Yeah.

So the first one: two features that have been dropping from Pika Labs. I feel like Pika sometimes gets forgotten about a little bit, and then they drop some crazy new update and I'm like, oh man, that's pretty good. There are two, and they kind of go hand in hand: one is called Pikaswaps and one is Pikadditions. Pikaswaps is: upload a video, and you can either text prompt it or give it an image reference and say, hey, I want to swap this out with something else. And you can actually do inpainting. So I did this one test with a previous podcast episode and had it replace you with a picture of a robot. Yeah. So that's kind of like what Wonder Dynamics had, right? Yeah, though Wonder Dynamics is definitely geared more for professional pipelines, because they not only track and replace, but they'll also give you the spline, they'll give you the animation data. This is just: you give it a video, you get back a video, take it or leave it. This is for fun social media content, user-generated content. For right now. And that is something that's interesting with Pika, because I remember when they first came out they were saying (I had a quote, this was one of the first articles we did on VP Land), "We're not trying to build a product for film production; what we're trying to do is something more for everyday consumers." That was from Demi Guo. Fair enough. I mean, that's where the money is, right? A bit. Yeah, I mean, a 10 to 20 dollar a month subscription times, what, 100,000 users. But seeing stuff like this, I'm just like, really? Because you're getting kind of close there.

I just want to give a quick shout-out to Jon Finger, who is somebody really talented in the AI community and posts a lot on LinkedIn. I met him once in person; really nice guy.
He's been putting out some Pikadditions videos that are just... he's always testing and pushing the limits with these, with Kling, with Runway, with every AI tool out there. Just shots of him walking around his backyard, or walking around Venice or somewhere, and what he can add in. Is that where he's at? I've seen beach stuff and I'm like, it looks like Venice. Yeah, he's in LA for sure. So he posted this one video where there's a shot transition, and I think the way he does it is he just flips his iPhone back and forth. Hmm. And then he has a Roman soldier with a spear walking behind him, obviously synthetic. And then he reaches out and the soldier gives him the spear, and he grabs it. Oh, that's cool. So there's a digital object handoff. What was his original thing? Was he holding something originally? No idea how he did it. My guess is that he just had a hand gesture and then prompted it. Yeah, or maybe he was holding a stick and had it replace the stick. Could be. With a spear. Yeah.

It's been freakishly accurate in some ways. I mean, it still has issues, even in the full video. So in the one video with the swap, where I had it replace you with a robot: the one thing that was freakishly good was that the clip I gave it had some angle cutting. I didn't keep it the same shot; it had cuts between angles. And it honored that. It figured out the 3D geography of the space. From the wide shot the robot looked a little distorted, but then it cut to this camera and it had the robot arm partially in frame, which is what you would expect, and then it cut to your camera and it was a full shot of the robot in perfect perspective. Yeah, the angle. I was like, wow. Right now we're always limited to five-second, ten-second clips, but once you could give it a whole edit of a character, a human, with all the cuts, and replace them... kind of like Planet of the Apes: you did the edit with the human actor, then replace them with this other character. Yeah. And the performance one is the hardest one, because we have a three-camera setup here, so every time you have a cut, if you're not aware of what the 3D geometry is, how do you maintain continuity? Yeah. And I mean, it's not perfect. Even though I inpainted you to replace it on my video, it still warped my face a bit and distorted it. You still get that squishy, soft AI-edge stuff. But for version one... and you can map that out in post, you can clean it up. Yeah, yeah. You can still get a lot out of it. You can always clean it up.

The other thing I was really impressed by, and if you can play back the video of the octopus walking: so that's Pikadditions, their other feature, which is basically take a shot, give it an image of another object or something, and then tell it what you want it to do, and it just adds it into the scene while honoring the geography of the 3D space. I mean, so much of VFX is just that. We gave it a shot and an image of a squid, and I said crawl across the table, and it did a pretty decent job of it crawling across the table. And again, this also had an angle switch, and for the few frames of the other angle it did a pretty decent job of correctly matching the size and perspective of where the squid would be. If you were doing this in VFX... oh boy.
Okay, so first of all, the thing that makes it easy in VFX is that it's a static camera, so you're not tracking a moving camera. Okay, fine. So now you have to... I think it was an octopus... so you have to animate eight legs of an octopus, and animate it convincingly enough that the viewer believes it's an octopus. Animating that takes maybe a few days for a good enough animator. Then you have to create an artificial 3D table so the octopus has floor registration as it's crawling across this table, but that 3D table won't be visible in the render; it's just a placeholder. Then you have to perspective-match your 3D camera to your live-action camera. You would probably do that by eye, knowing the focal length of this camera. So, okay, that's that. Finally, you have to render the octopus as its own element on an alpha channel, bring that into something like Nuke, color match it, and make sure the image-based lighting matches that of your actual lights. And then you have it. All in all, I think two weeks is a very conservative estimate for something like this for one artist to do, if one artist can even do all those steps by themselves. And how long did it take you? Yeah, five seconds.
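For anyone curious what that final comp step looks like in practice, here is a rough sketch using Nuke's Python API, once the octopus has already been animated and rendered with an alpha channel. It is illustrative only: the file paths, frame range, and grade values are placeholders, and a real comp would also involve tracking, roto, and proper image-based lighting.

```python
# Illustrative Nuke comp sketch: layer a rendered CG octopus (with alpha)
# over the live-action plate and write out the result.
# Paths, frame range, and grade values are placeholders.
import nuke

plate = nuke.nodes.Read(file="shots/table_plate.####.exr", first=1001, last=1120)
cg = nuke.nodes.Read(file="renders/octopus_beauty.####.exr", first=1001, last=1120)

# Rough color match of the CG element to the plate (arbitrary values)
grade = nuke.nodes.Grade(white=0.95, gamma=1.05)
grade.setInput(0, cg)

# Composite the graded CG over the plate using its alpha channel
merge = nuke.nodes.Merge2(operation="over")
merge.setInput(0, plate)   # B input: background plate
merge.setInput(1, grade)   # A input: CG element

write = nuke.nodes.Write(file="comps/octopus_comp.####.exr")
write.setInput(0, merge)
nuke.execute(write, 1001, 1120)
```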
I mean, I would say it's not at the level of an artist right now. This is version one. But not every artist is at that level either. There are junior artists, university students, who are at this level, and for that, this is incredible. It's crazy. So Pikadditions is a peek into what VFX with AI looks like, would you say? Yeah. I mean, is this going to be the future? Like, we've got a shot, we've got our image of our thing... and maybe at some point, too, I feel like this year is going to be a big year for image-to-3D, or just other types of generative 3D. So maybe you even get something a little more detailed, like, okay, this is the full 3D model of my thing, now make it do this thing. And again, in a professional setting, obviously this is not good enough yet to put into a movie. But if you have more controls, let's say you can input your own octopus and then have AI animate it across a table, and then you have control over the compositing. Would this replace people's jobs, or would it just allow VFX studios to do exponentially more work? I think this ties into what we talked about in the Technicolor episode. Yeah, two episodes ago. I mean, yes, I think it's going to replace the huge, massive studios and teams where you'd have to throw a bunch of people at a problem, but it's going to power smaller, more nimble teams. It's just a crappy transition, a disruption period right now, where the big studios and the staff jobs are disappearing, but there's more opportunity to do better things with smaller teams. Yeah, and you've just got to get on top of this technology and figure it out so you can take advantage of it. I also think, having learned a little bit of traditional computer graphics work (I've done some animation, I've done some compositing in Nuke), those crafts are so hard to initially learn and then master. It takes years. Whereas somebody using an AI-based tool will have a much easier time mastering it. It's not just text prompting; there's going to be more to it. And are you still going to need the senior-level person, someone who does know all that, to get in there? There are going to be top-level projects.

I feel like this is similar to coding, where coding is getting more and more replaced, or augmented, where you can just tell it the thing you want and it will start coding it. But if you don't know the fundamentals, if you don't know how to code, it can still break, and you still need to know how to get in there and make it better if you're doing large projects. That's the limit right now. So you're going to need an architect-level person, a senior-architect-level person, or in this case a VFX sup, or several VFX sups, and then a lot of the tedious work will be junior artists using AI-based tools. The part that's not clear to me yet is how traditional CG and the AI toolsets converge and merge. It's lacking the tweaking right now, too. Yes. And I think that's what Wonder Dynamics was trying to build. Take this example of Pika Labs' Pikadditions: it gets you to that first step, and it'll look all right, but it's a rough cut. Whereas Wonder Dynamics is saying, we're giving you the skeleton animation, we're giving you all of these files, so you can take it into your traditional pipeline software and tweak it. Yeah. And you're not stuck, because that's the problem with so much AI right now: you spin the slot wheel, you get what you get. Then you either spin it again, and maybe you get better, maybe you just get different, but I can't take this octopus animation and say, oh, it went here and then went back, well, maybe I just wanted it to go here and then, you know, jump on my face or something. Yeah, exactly. I'd have to regenerate it, and maybe I get it, maybe not. So it's great for a LinkedIn post, but for an actual show, you need control. Exactly. And I feel like you always have to keep that in mind. People see this stuff and say, well, it has all these issues too. And yeah, but you can never judge these things only by what we see right now. The Two Minute Papers guy always says it's not about what the paper shows right now, it's about the paper two papers from now: look at how fast we got to this point and how fast things are changing. Exactly. And the Wonder Dynamics acquisition by Autodesk is so cool. Brilliant. Because Autodesk is a company that's building professional tools for the highest-level artists. So if anybody can figure out how to turn the power of Wonder Dynamics into a controllable tool, it's them. Yeah.

All right, other new tool that just popped up on my radar this week. Well, by this week I mean the end of February: Flora. They're building what they call an intelligent canvas. It's basically a one-stop shop for a lot of the main AI models, but with a node-based, build-your-own-workflow approach. It seems to be a bit of a middle ground: you want to do some more creative stuff with AI that you can't really do with a single tool, you want to connect a variety of tools, but you're not technically inclined enough to use something complex like ComfyUI, which is a very powerful node-based tool but has a technical learning curve, and you have to run it on your own PC. Or there are some cloud-based versions, but there's no help. If you can't figure out the nodes in ComfyUI, who do you call? You go to Perplexity and you start asking it questions.
You start asking AI questions. Yeah, it's open source, so you're kind of at the whim of YouTube videos and figuring stuff out yourself. If you're able to put up Flora on the screen here: I mean, it looks like a really fancy version of ComfyUI, with a UX/UI that's more catered to a prosumer. Yeah. They had some demo workflows, and one was interesting. It's not the extensive custom models and stuff that you can find and build with ComfyUI; basically, it's connectors. Any big platform out there that has API connection options available, so basically anything text, video, or image related, and you can connect them. One interesting example was taking a hand-drawn storyboard that you give to the canvas as an image. Then there's a ChatGPT node that analyzes the image and turns it into a shot list. Then another node sends that to another ChatGPT agent (which feeds into the agentic AI idea of connecting all the individual AI agents to do something more powerful) that turns each shot into a text prompt, and then takes that text prompt and runs it through Flux and generates an image. So you can basically turn a hand-drawn storyboard into a raw photo-image storyboard of all your shots. It's powerful, and I think we can all agree that all of the future AI suites will have this model-agnostic approach built in, so that you don't have to be loyal to Flux or Midjourney or whatever. You can switch between them depending on the project, because, like we've talked about, every one of these generation tools has a specialty; it's good at something, bad at something, and so on. Yeah. And when you go down the route of trying all the tools, you're paying for a bunch of subscriptions and doing a lot of copy-pasting: generate an image in one thing, save the image, then upload it to Runway and Kling and see who does the better output. I think the future is this kind of centralized one spot. I don't know if it's going to be Flora, or whether other, more established existing tools will bring it in, but the more you can centralize and not have to bounce around between different apps, the better and faster it's going to be to create. Yeah. I have an analogy for you. I always do. If ComfyUI is your pro-pro tool, like that's the Venice, then this is your Blackmagic. It still retains some of that functionality, but it's really aimed at a much bigger demographic, with a nice, user-friendly, clean interface. And Blackmagic has a very nice menu system. That's it. Yeah. So this just came out this week, so I'm going to be messing around with it a bit, but it seems interesting, a good middle ground where you're not at the complicated ComfyUI level, but you're beyond wanting to copy and paste or figure stuff out with a single tool. Yeah, I haven't taken it for a spin yet, but from their website, the stuff they're showing, I was like, oh yeah, that's totally useful. Of course you want a secondary level of control after your first prompt generation.
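To make that storyboard-to-shot-list chain a bit more concrete, here is a minimal sketch of the same idea wired up by hand in Python. It assumes you route the vision and text steps through OpenAI's API and the image step through the flux-schnell model on Replicate; those are stand-ins chosen for illustration, not what Flora necessarily uses under the hood, and the storyboard URL is a placeholder.

```python
# Sketch of a storyboard -> shot list -> prompts -> images chain.
# Model choices and the storyboard URL are placeholders, not Flora's internals.
from openai import OpenAI
import replicate  # expects REPLICATE_API_TOKEN in the environment

client = OpenAI()  # expects OPENAI_API_KEY in the environment
STORYBOARD_URL = "https://example.com/hand_drawn_storyboard.jpg"  # placeholder

# 1) Analyze the hand-drawn storyboard and turn it into a shot list
shot_list = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Describe each panel of this storyboard as a numbered shot list, one line per shot."},
            {"type": "image_url", "image_url": {"url": STORYBOARD_URL}},
        ],
    }],
).choices[0].message.content

# 2) Rewrite each shot as a standalone image-generation prompt, then render it
for shot in shot_list.splitlines():
    if not shot.strip():
        continue
    prompt = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": f"Rewrite this shot description as a single photorealistic image prompt: {shot}",
        }],
    ).choices[0].message.content

    # 3) Generate a frame for the shot (Flux via Replicate is just one possible backend)
    image_output = replicate.run("black-forest-labs/flux-schnell", input={"prompt": prompt})
    print(shot, "->", image_output)
```

The point of a canvas like Flora is that this glue code disappears behind draggable nodes, but the underlying chain is the same.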
I saw one other sample workflow too. I still have to dig into exactly what it's doing, but it seemed like it took six images and merged them all into a single video. I don't know if it was automatically compiling a generated video, but it looked like it takes a first frame and a last frame, and then uses that frame to make the next clip. So basically it was kind of morphing between six different images in a ten-second video. Oh, that's cool. So it was like first frame, six middle frames, last frame. Yeah, which is another interesting possibility with this workflow. Yeah, I think a lot of the video generation tools now support first frame, middle frame, last frame. Yeah. And they said the big use case was animation; that was something they wanted to go after, to help speed up animation. Yeah. We're getting back into the world of in-betweeners. You remember that? From cel drawing, from the Walt Disney days. The really legit senior-level animators don't have time to animate every single frame, every single sheet of paper. So they'll just hit some key poses and say, okay, for Snow White: this, this, this. And then the junior animators come in and fill in all of those little frames. So now AI is the in-betweener. Yeah. I mean, I guess before, it was: you make keyframes, and then the computer figures out what the frames should be in between. Now it's AI interpolating. Yeah. All right.
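If you wanted to hack together that image-morphing idea yourself, the logic is simply: generate a short clip between each consecutive pair of stills with a model that accepts first-frame and last-frame conditioning, then join the clips. Here is a rough sketch of that loop; generate_clip() is a hypothetical stand-in for whichever video model you actually call, and the file paths are placeholders.

```python
# Sketch: chain N still images into one video by generating a short clip
# between each consecutive pair (first-frame / last-frame conditioning),
# then concatenating the clips with ffmpeg's concat demuxer.
# generate_clip() is a hypothetical placeholder, not a real API.
import subprocess

def generate_clip(first_frame: str, last_frame: str, prompt: str) -> str:
    """Placeholder: call your video model here and return the clip's file path."""
    raise NotImplementedError("wire this up to your video generation backend")

stills = [f"boards/frame_{i}.png" for i in range(1, 7)]  # six keyframe images
clips = [
    generate_clip(start, end, prompt="smooth morph between storyboard frames")
    for start, end in zip(stills, stills[1:])
]

# Write the clip list and concatenate the segments into one video
with open("clips.txt", "w") as f:
    f.writelines(f"file '{path}'\n" for path in clips)
subprocess.run(
    ["ffmpeg", "-f", "concat", "-safe", "0", "-i", "clips.txt",
     "-c", "copy", "morph_sequence.mp4"],
    check=True,
)
```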
Other update: Alibaba came out with an updated video model that's free and open source. It's called Wan2.1, and it's sort of being drawn into comparisons with Sora, but this one is free and open source. You can run it in ComfyUI, run it on your local computer, and do text-to-video or image-to-video. I believe there's video-to-video too; I'd have to double-check that one. How is the quality? I would say the physics in some of the demos I'd seen from people, text prompts with some kind of crazy physics like a cat jumping off a diving board, were pretty good, better than Sora in some cases. But maybe not up to Veo 2, the Google one, which still feels like the best one at the moment. Yeah. And actually, this is a tip, or sort of tip-ish: Veo 2, I'm still waitlisted for through Google, and I only know a handful of people who maybe have access. But Freepik, which is kind of like Flora in that you can choose which model you want to use... exactly. I don't know if they have some exclusive deal, but you can access Veo 2 through Freepik, only on their super premium plan. If you want to pay and mess around with it, though, you can access Veo 2 through them right now. Yeah. I did see something that it was like two dollars a generation, or the pricing came down to that. Imagine getting a bad generation; that's an expensive slot machine. Yeah. What about our guy Dylan, when he automates a hundred different generations? That would be like 200 dollars each time. Now we're getting back to real film production costs. So yeah, Wan2.1: the physics were interesting, it's open source, it's free. Another Chinese model just thrown out there, making things faster, cheaper, free. Quality-wise, I don't know. It's good, but I wouldn't say it was like, oh wow.

Out of all the Chinese models, would you say Minimax is probably the best one? Or Kling, maybe? I think probably Kling. From the stuff I've tested and used, I usually get better results with Kling, but Kling's pricey and takes a while to generate anything. Yeah. Okay. Well, look, it can't be bad, right? Having another model that's free, just as a test bed for people to learn on. Yeah, and you can run it locally and start generating stuff on your computer, especially if you're just looking to brainstorm or storyboard. Oh yeah, let me go melt my GPU locally right now. Okay, I'll do that.

All right, other tool. This one's not brand new, it's new-ish, but I think it's interesting and I want to put it on people's radar. When we were talking about the Rob Legato hackathon that I was filming at: I was interviewing him, and afterwards we were all just talking about AI tools and what people are using. He uses Midjourney, and I was like, oh hey, do you use mood boards? And he's like, no, what's that? So, are you familiar with Midjourney style references? Yeah, a little bit. Please describe it to me. So it's another way in. Basically, with Midjourney you give it a text prompt, text-to-image, that's what it's known for. But a lot of the really powerful Midjourney users, the ones who get really specific, consistent styles and really get what they're after, don't do elaborate text prompts anymore. They use style reference codes. It's a code you add to your prompt that represents a very distinct look. I don't fully remember how you create a style reference, but you can go on Twitter and there are a bunch of people posting images from a code, like, these are the style reference codes I use. You can stack multiple codes together. So the really clever creators use simple prompts with a combo of their secret sauce of different style reference codes. That's brilliant. So you had to know what the code was, or... I think maybe there was a way to... well, anyway, the mood board way is sort of a way to train it. With mood boards, you make a mood board and just give it all of the reference images you like. Wow. And you could potentially, and I'm saying this in a previs, brainstorming context, I'm not advocating stealing anyone's style, but you could go on ShotDeck or one of those sites, grab stills from existing things, and train it with a style. People grab mood boards all the time. I'm saying, I'm not saying, make a film with this. If AI hasn't pissed off the artist community enough, like, I'm going to piss you off more. Yeah, we're getting into such a sensitive area. I'm not advocating stealing anything. Look, I don't have to tell anyone anything they don't already know. If someone wants to use someone's existing style, they're going to post stuff. How many Star Wars and Batman rip-off things have I seen online? For me, the style that's so difficult to replicate, and the one that's the most sought after, is Spider-Verse in the animation world. The team did such a good job of taking the traditional, what you would call Pixar-style, animated film and just completely shattering it. Spider-Verse looks like a comic book, you know; it has the cel-shade separation.
It has that choppy 12-frames-per-second animation, and the characters always hit the right poses and hang on them for a second before moving on. It's beautiful to look at. I don't know if you can copy the temporal style, but certainly you can copy the frames, the visual language. Yeah. So, for inspiration or research, or if you don't use any existing frames and you have your own photo collection or whatever: mood boards. You give it the images and then it creates a new style reference code based on your images. You're basically training a style reference. Yeah, that's what I'm saying. I think under the hood you're training a LoRA, and then the hex code or whatever is referencing that LoRA. So then you use that in your prompts and you get images with the look you're after, which is good for mood boarding, storyboarding, previs stuff. Going back to Flora, maybe there should be a feature like that built into Flora. Yeah, and maybe there is, or will be, but that seems like the next logical step: having a way to train your own LoRA and bring it in. So if it's not there, it's got to be coming. And over the last year or two, I think the obsession was just with generating quality stuff. Now it feels like the obsession is with control. Control and consistency. Yeah, and that's the way to get there.

And the last interesting thing on the radar is not visually related, but speech-to-text. The big model that had been powering a lot of speech-to-text was Whisper, which was OpenAI's version of speech-to-text, and it was pretty accurate. Now ElevenLabs, which has been known for kind of the opposite, text-to-speech, has released its own speech-to-text model called Scribe. And they're saying it has the highest accuracy on benchmarks (this is self-reported), outperforming previous state-of-the-art models such as Gemini 2.0 and OpenAI's Whisper v3.
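For reference, the Whisper baseline they mention is easy to run locally with OpenAI's open-source package. A minimal sketch, with a placeholder audio file:

```python
# Minimal local transcription with OpenAI's open-source Whisper
# (pip install openai-whisper; requires ffmpeg). The audio path is a placeholder.
import whisper

model = whisper.load_model("base")   # larger models ("medium", "large") are more accurate
result = model.transcribe("episode_audio.mp3")

print(result["text"])                # full transcript
for segment in result["segments"]:   # timestamped segments
    print(f'{segment["start"]:.1f}s - {segment["end"]:.1f}s: {segment["text"]}')
```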
Okay. Hey, did ElevenLabs do The Brutalist? Is that right? The tool the editor used for that was Respeecher. It was Respeecher, you're sure? We keep mixing this up. I think I said it was Respeecher and then you said it was ElevenLabs. That was for... not The Brutalist, that was for the translation of the Lex Fridman podcast. Zelenskyy. Yes. So I finally saw The Brutalist. Yes. And immediately, when Adrien Brody was reading that letter... I don't know, man, it didn't really sound like him. Yeah. What did you think? You saw the movie. Yeah, I saw the movie. So how tuned is your Hungarian? Are you Hungarian? No, no, it wasn't anything like that. I'm tuned as a human to recognize other humans and their speech patterns. Adrien Brody's delivery as László through the whole movie was very chill-paced; he was never rushed to deliver a line, and he had a really gravelly, coarse voice that never really cleared up. I think that was a reflection of his addiction and all of the bad lifestyle choices. And the letter reading was just... it was too perfect. It didn't have that imperfection that László had. It was more like a robot reading the letter. That's what it seemed like to me. Interesting. Okay. That's my interpretation. It didn't stand out to me at all. Okay, well, go back and maybe...

And I'll also say, when Adrien Brody's character's wife, I forget the name of the character, when she reads her letter and then you meet Felicity Jones in the movie, I think it's the same issue there. It's 90 percent there. And I could see why the tool was used, because the pronunciations were spot on. I was like, how do you pronounce Zsófia, right? Which is an issue with getting Americans to pronounce it; even Adrien Brody's character had to correct that pronunciation in the movie. Right. And I was like, okay, I can see why it was used. Having said that, it's not his performance. All right, that's my take. It didn't stand out to me, but... do you think you would have known if you didn't know beforehand? Do you think something would have stood out, or do you think it only stood out because we had an extensive conversation about this? I think, because of the way the movie cuts it, Adrien Brody stops talking and then the letter kicks in as narration. So when that cut happened, his voice stopped and the AI voice took over, and you notice a little bit of a glitch right away. You're like, oh wait, is that the same person? Yeah. All right, I've got to revisit this. I'm curious. But yeah, that was Respeecher; this is ElevenLabs, and the opposite direction. I mean, having said that, we are judging the performance of this tech product at the highest level. Yeah. For the other 99.999 percent of use cases, I'm sure it's fine. It's great for what it's intended to do, and given what the alternatives are, which is not a lot of alternatives. If you have to dub a telenovela into five other languages in the course of, you know, one day, there's no other way to do it.

All right, so that's kind of our tool roundup. One last thing to end on. Speaking of... actually, this was, I think, an indirect result of the backlash on The Brutalist over AI. James Cameron came out and said that he would potentially open the next Avatar movie with a title card that says, quote, no generative AI was used in the making of this movie. This just seems really silly. It seems a little bit hypocritical. I know why he's probably doing it. I think he's doing it so it qualifies for an Oscar, because now the Academy is all over it, with the potential new rule that you would have to disclose. Yeah. It didn't even clarify whether it would disqualify you; it was just, you may have to disclose. I think he's just taking the safe road and being like, nope, not me, don't look here. I think this goes back to my bigger point that I've talked about a bunch: we just need more language around AI, more terminology, because AI is such a broad term. You're going to tell me that no machine learning was used in how you animate the waves, or how you figure out the movement of the waves, or whatever other surreal elements are in there? All of the mocap work, all of the cleanup? I know he did say generative AI, and it's like, okay, well, no one in the entire production process used it? No previs, no Midjourney for a mood board, for some inspiration? I mean, maybe the only way they can say that is because... how long have these movies been shot or in production? Right. Ten years.
I think, yeah, the only reason is because it predated all these tools when they made these movies, and it wasn't an issue back then. And so, yeah, it's done, the movie's done. We know we didn't use any AI because it didn't exist. Also, who's on the board of Stability AI now? Mr. Cameron himself. Yeah. So, yeah. Stability AI is such a driving force of AI adoption in Hollywood, right? They have heavy hitters, they obviously have the Stable Diffusion products and video products, and they're on the Academy Sci-Tech board, right? Exactly. That was the big update from a week ago. No other company has the positioning they have as an AI company looking at Hollywood. I mean, they're based in LA. Yeah. I don't know, maybe it's the Academy thing; it also just feels like, after all the backlash The Brutalist got for some AI stuff, it was just, oh, you know, we didn't use any of that. I mean, my guess is Avatar 2 and 3 will be qualifying for Visual Effects Oscars, so you want to make sure you get those. Wait, did 2 already come out? I'm sorry, 3 and 4. Three and four. Yeah, I thought they were making two at the same time. I don't know. I think I only saw the first one. All right, we'll go with Avatar 3. I know the second one did come out; I saw it. No, it was good. Okay. Yeah. And the water was so freaking real, man. Yeah, they did such a good job. And there's this controversial shot of one of the Na'vi characters where he ties a rope around his arm. It went viral on the internet for a couple of days, and people couldn't tell if it was an actual person's arm painted blue or not. It's that good. Yeah, they nailed photorealism in that show. I mean, I would hope so. Without AI.

All right, that's pretty much our show. Well, that was a good chat, Joey. Thank you. Yeah, and hopefully we put a couple of tools on your radar. If we missed anything, or if something else has been on your radar, shoot us a message or leave a comment on Spotify or YouTube. Show notes, as always, at denoisedpodcast.com. Yeah, we check the comments and the reviews, so please engage with us and leave us your thoughts; we're happy to shape the show a little based on what you want to hear. And of course, a review on Spotify would be fantastic at this point, so please take a minute to do that. Thank you. Thanks, everyone. See you in the next episode.