
Denoised
When it comes to AI and the film industry, noise is everywhere. We cut through it.
Denoised is your twice-weekly deep dive into the most interesting and relevant topics in media, entertainment, and creative technology.
Hosted by Addy Ghani (Media Industry Analyst) and Joey Daoud (media producer and founder of VP Land), this podcast unpacks the latest trends shaping the industry—from Generative AI and Virtual Production to hardware and software innovations, cloud workflows, filmmaking, TV, and Hollywood industry news.
Each episode delivers a fast-paced, no-BS breakdown of the biggest developments, featuring sharp analysis, under-the-radar insights, and practical takeaways for filmmakers, content creators, and M&E professionals. Whether you’re pushing pixels in post, managing a production pipeline, or just trying to keep up with the future of storytelling, Denoised keeps you ahead of the curve.
New episodes every Tuesday and Friday.
Listen in, stay informed, and cut through the noise.
Produced by VP Land. Get the free VP Land newsletter in your inbox to stay on top of the latest news and tools in creative technology: https://ntm.link/l45xWQ
Denoised
This Week in AI: Moonvalley's Marey, Luma AI's LA Lab, Veo 3 Adds Image to Video, and more!
Moonvalley's Marey AI model officially launches with pro-level features for filmmakers, while Luma AI opens a physical space in Hollywood. This week on Denoised, Addy and Joey break down the latest AI tools reshaping media production, including Veo 3's image-to-video capability, Perplexity's agentic browser Comet, and an array of new video generation models.
--
The views and opinions expressed in this podcast are the personal views of the hosts and do not necessarily reflect the views or positions of their respective employers or organizations. This show is independently produced by VP Land without the use of any outside company resources, confidential information, or affiliations.
Basically, if I can cut out the step where I have to generate a first frame in another tool first, and then use that to drive the video, and I can cut that out? That's pretty cool. That's useful. That helps save time. So if I can give you all these elements, like, this is what I want to use in my scene, and then I can, quote, direct the scene and have different shots and positions, and I'm not gonna have to keep re-uploading stuff? That'd be cool.

Alright, welcome back to Denoised. Good to see you, Addy. Hey, we're back in the studio. So real quick, first off, we are wearing new headphones. You guys like them? Some big cans. These are Atomos' new StudioSonic production headphones. Thanks to them for sending us over a pair; we're gonna check them out and report back on how they are in a few weeks. Thank you, Atomos. We already loved the brand from before. Right, we talk about the Sumo and the NEON. They make great products, usually monitors, but they acquired an audio company last year (I can't remember the company now), and now they're rolling out audio products. They have these headphones, and they have some microphones coming out. So I'm excited to see what else they develop, 'cause I'm a big fan of the monitors, always been great. And they complement the Blackmagic stuff really well too. Does have a good look. And the same sort of price point.

I will say, one shout-out for these headphones that I did appreciate: the cable they come with is reversible. There are two ports on these headphones. One is a quarter-inch, and the other one is 3.5 millimeter, and the cable has both, 'cause I always lose the adapters. So you can just pop out the cable and switch; actually, you're using the 3.5 millimeter side, I'm using the quarter-inch. So that's also been very useful. And it's big on your head, but that's 'cause there's a lot of cushioning, so you're getting natural noise canceling. It cancels everything out without using active noise canceling; it just seals around your ear, the old-school way. It is very comfy.

Okay, story-wise: we're gonna do another end-of-the-week roundup of new AI-related stories in the media and entertainment space. Big one: Moonvalley has released Marey, their AI model, as a product into the world. Our timing couldn't have been better. We dropped the Bryn interviews, which we were so eager to drop 'cause they were so interesting and fascinating to us, and then they're like, oh, but we're also releasing the product this week. So now there's a lot of interest in it. We didn't plan that; it just happened to be really good timing. So if you haven't watched those episodes, give them a watch. They're on YouTube, Spotify, podcasts. A fascinating two-parter with Bryn. In part one we talk about Moonvalley and the now-released Marey product, and in part two we talk about Uncanny Valley and the work they're doing on Natasha Lyonne's directorial debut. And it wasn't just a soft launch: they launched the ComfyUI nodes, the API, the public release, all at the same time.
And I wanted to touch on that, because it goes to show that they really hit the ground running, targeted at professionals. Day one the product comes out, and they also have the ComfyUI nodes. I did mess around with it in ComfyUI. It's got a lot of seed control, a lot more pro-level output control than you would normally see on other API tools. I saw frame count, FPS control, a lot of different frame sizes. The output by default is full HD, 1920 by 1080, but it also had 1:1 ratios and vertical ratios. So a lot of options there, though like all tools, the API is usually a little more limited than what the full website can do.

And they have some interesting features on the website itself. Out of the gate, being that they're a little late to the game compared to all the other tools out there, they released with a lot of features that took other companies time to introduce. So: first frame, last frame. They have a video restyling type tool, style transfer, so you can give it either motion or pose transfer from an existing video and transfer that into the AI output. They also have camera control from an image or video, where you can keep the shot the same but try to shift the camera, which is something we talked about with Bryn in the interview, and an interesting use we haven't really seen in another tool: hey, keep the video the same, but shift the angle. I've seen examples of Leonardo DiCaprio in The Great Gatsby where he's raising the glass, but just shifting that angle. That's interesting. A lot of camera control and things we have talked about that, at a professional level, have been lacking in AI tools. So they've definitely come at this from the use cases that professional filmmakers would want to see in an AI tool. I feel like this is almost an Apple moment where they think different, right? Like Apple 1.0: you have IBM and the beige PCs of the world, and then they come in and do things slightly differently, but way more creative-focused. Or just focused on specific features that creatives would want to see.

Pricing-wise, they have plans and then there's the API, but it basically comes out to $1.50 per five-second generation. So about in the middle of the pricing for all the other models.

And then we kind of left out the big thing that they lead with: it is an ethically trained model. They've said the data it's trained on was licensed, or they knew where it came from and had permission to train on it, so it is a commercially cleared model. Which was something Firefly tried to do, but my thought with that was: yeah, cool, but what do the outputs look like? The issue with being trained on limited data, because you have to license it or know where it comes from, is that sometimes the outputs aren't that good. But in the stuff I've messed around with, it's pretty good, on par with Runway Gen-4 on some shots; on some shots I had hallucinations.
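For the API-curious, here is a rough sketch of what driving a generation with that kind of pro-level control (seed, FPS, frame count, resolution) can look like. The endpoint, parameter names, and model id below are hypothetical placeholders, not Moonvalley's documented API; check their actual docs before wiring anything up. The value of a fixed seed is repeatability: you can re-run the same shot while changing one variable at a time.

import requests

# Hypothetical endpoint and parameter names, for illustration only.
# Moonvalley's real Marey API may differ; the point is the kind of
# pro-level knobs discussed above: seed, FPS, frame count, resolution.
API_URL = "https://api.example-videomodel.com/v1/generate"  # placeholder URL

payload = {
    "model": "marey",                # placeholder model id
    "prompt": "Slow push-in on a rain-soaked neon street at night",
    "seed": 42,                      # fixed seed for repeatable takes
    "fps": 24,                       # cinema frame rate
    "num_frames": 120,               # 5 seconds at 24 fps
    "width": 1920,                   # full HD by default...
    "height": 1080,                  # ...with 1:1 and vertical ratios also possible
}

resp = requests.post(API_URL, json=payload,
                     headers={"Authorization": "Bearer YOUR_KEY"})
resp.raise_for_status()
print(resp.json())  # typically a job id to poll, then a video URL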
But hallucinations and trial and error are sort of the case with all of these models. I was able to get some good stuff out of it, so quality-wise, it's up there.

You could definitely tell when we interviewed Bryn that he had such a passion for this being ethically clean. First and foremost, you have to protect the artists, protect the people that are actually gonna use this. One thing Adobe did very early on that was a big flaw: whatever you generated, they owned, and they could then use it for training future models. They backtracked on that, but it caused big outrage with the early Firefly customers. So again, Moonvalley is doing the things that matter to Hollywood, and doing them really well. And of course that is the biggest blocker right now; it's not so much quality, it's just: can we even use it? That's still a question of uncertainty. I think we've been getting some clarity: we had the papers from the copyright office, we have the Anthropic ruling, and the Midjourney lawsuit is still going, I believe. I think so. And there are a number of other lawsuits going around. I hope it gets cleared up in the next year or so.

I think Hollywood, M&E, our industry, is probably the one most sensitive to ethical issues. If you go to something else, like fashion and apparel, they don't care about it as much. What they're really concerned with is digital likeness. So they're more concerned about the faces, the people? Yeah: if it's a famous model, do you have their consent to use their likeness, versus the AI model itself. Which I think is a valid concern, and of course, if you're generating a model that looks like someone, you should have their permission. I don't think that's gonna turn into a carte blanche, do-whatever-you-want situation. I think that'll just have more legal terms or usage rates outlined in contract negotiations. And I feel like we're much further along in Hollywood, because we have been doing digital doubles in VFX for a long time, so a lot of that stuff SAG has already kind of figured out. And remember, we talked about Kavan the Kid and his SAG-approved film. So I think we're much further along than other industries on that side. Yeah, we're hitting it first, so those terms are gonna get worked out first.

Alright, other updates: Veo 3 image-to-video. Expected, but it is finally out. You can now upload an image as a first frame and use that as the guide for your Veo 3 outputs. Originally it was just text-to-video, which was limiting for character consistency. But now you can give it a frame and still get the advantage that Veo 3 can generate speaking characters. So now you can use a variety of other tools that make first frames with consistent characters, and then have Veo 3 turn that into a video. A big limitation that they just resolved. I think the other thing Veo 3 doesn't have yet is negative prompting. If I'm not mistaken, I don't think it has negative prompting.
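As a rough sketch, here is what first-frame-driven generation looks like with Google's google-genai Python SDK, following the documented image-to-video pattern used for Veo. The exact Veo 3 model id is an assumption on our part, and whether Veo 3 honors negative_prompt is exactly the open question discussed here; Veo 2's config does expose that field.

import time
from google import genai
from google.genai import types

client = genai.Client()  # reads GOOGLE_API_KEY from the environment

# Kick off an image-to-video job: the uploaded frame guides the output,
# so you can build a consistent character in another tool first.
operation = client.models.generate_videos(
    model="veo-3.0-generate-preview",  # assumed model id; check current docs
    prompt="The character turns to camera and says: 'We're live.'",
    image=types.Image.from_file(location="first_frame.png"),
    config=types.GenerateVideosConfig(
        aspect_ratio="16:9",
        # Exposed on Veo 2; Veo 3 support unconfirmed, per the discussion above.
        negative_prompt="cartoon, low quality",
    ),
)

# Video generation is asynchronous: poll until the operation completes.
while not operation.done:
    time.sleep(20)
    operation = client.operations.get(operation)

video = operation.response.generated_videos[0]
client.files.download(file=video.video)
video.video.save("shot_01.mp4")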
So I think eventually they'll check that negative-prompting box. I don't know if any of the Veo models have negative prompting. Veo 2 is a little more advanced in the sense that it has some reference features: you can give it a character image, a location image, and it'll combine them. That's not in Veo 3 yet; I'm sure it'll get there. But I don't know if any of them have negative prompting. A lot of tools do, I feel like; Runway doesn't, yeah. I mean, maybe they're fundamentally structured differently; we've talked about some of the fundamentals of AI compute in the past. I guess it's really how you go into the latent space and how you reference all the notions of those words. If by default everything you don't mention is negative, do you even need negative prompting? Like if it's some kind of gate. So I'm not sure if it's a systematic architecture thing, or they just forgot to add it because that's not how creatives are doing it at Google, or if the prompting is just good enough that you can say in the regular prompt, "do not use blah blah blah," or "do not make it look..." Yeah, you could throw your negative statements in the general prompt. If you're getting outputs where something's going weird, you can just tell it in the general prompt and it would understand it. If you're out there using Veo 3 and using some negative prompting in your prompting, let us know in the comments.

All right, other updates. This one is more in the browser space, but I think it's worth talking about here because we've talked about operator features in the past. Perplexity has publicly released Comet, their new agentic web browser. Do we need another browser, Joey? I was gonna say, an upcoming browser battle is not what I was thinking would happen in 2025. Right, because The Browser Company also released Dia, which is their own version of an agentic browser, and OpenAI has also just said that they're gonna build their own version of an agentic browser.

What is an agentic browser? At first I was like, I don't know, this doesn't make any sense, what do we need this for? But I've been messing around with Comet and now I'm a bit sold on the future vision. Basically, all of these browsers have sort of a chat window on the side; with Comet, it's Perplexity. It can pull things from your open tabs, or kind of pull information, but with Comet it can also take action. So basically I had Gmail open and I was logged into ClickUp, where we manage all our tasks, and I was just like, oh hey, add a reminder about this email in ClickUp for tomorrow. And then it went in, just started doing its thing, running on my computer: read the email, put the context in the task, and built a new task and set it up inside ClickUp. I just imagined that scene in The Matrix with the operator. I was gonna say "I know kung fu," but, well, you get locked out of your own system. That's obviously the risk if these things go haywire, but it kind of reminded me of Manus, or ChatGPT Operator.
But those are using like a cloud version of a browser. I'm getting vibes of... NotebookLM, remember that Google product? I use that all the time. Hmm, it doesn't really remind me of that. The one I'm thinking of has the sort of vision and image recognition where it can look at any window, any application, and figure out where the buttons and tools are. I think you're thinking of something else. NotebookLM was where you give it a bunch of YouTube videos or documents and it'll turn it into a podcast. Oh, I think I was. They have terrible names over there. I think it's called Google AI Studio. Yeah, probably that one.

The nice thing is it's running on your computer. Unlike Manus, where you gotta keep buying credits and every time it's trying to figure something out it's ticking through your credits, this is just running on your system. I mean, browsers are not the sexiest things, right? It's not a tangible physical device; it's just a wide window to the world. But I guess we've both witnessed these monumental browser moments in history. I feel like the history of the internet can be marked by browser battles, going all the way back to Netscape and Internet Explorer. Absolutely. I mean, the reason Marc Andreessen is where he is today as a multi-gazillionaire is because of Netscape. Right. And also crazy: the concept that people would have to pay for a web browser, when now it's built into your computer. For sure. And then I think Sundar Pichai, CEO of Google, is where he is now because of the success of Chrome. He was the product manager for Chrome. Yeah, I was just listening to an interview with him about that. At the time it was like, why are we building a browser? Well, a lot of the browsers then just weren't really built for how the web was evolving. Exactly. And I think it's a similar idea now. The irony, though, is all of these browsers, I believe, are built on Chromium as the backend, so the foundations are still Chrome. Just about every modified browser. Yeah, Chromium is all open source; they gave the whole thing out. Wow. It was just crazy at the time.

And I think, Joey, within six months to a year, when these browsers are less clunky, more responsive, more capable, we're gonna be running them on our phones. Or maybe an interface where the Perplexity app can trigger the Comet browser on your computer and do actions that are too complicated for your phone. Can it take over my work Slack and just start responding to people? Possibly, eventually. Or it just understands your writing, understands what you do. I mean, Dia, the other agentic browser that's out, doesn't do control of your tabs yet; I'm sure that's coming, and that's probably just a safety feature. But their features were a bit more like, you could create prebuilt prompts and just call up the prompts
based on how you wanted to respond or write or do something. So the idea is partly that it understands you better, and also that it can take action on your behalf. I hope it starts to build a user profile, if you will: your writing style preferences, what you're interested in, the top 10 apps you constantly use and the top 10 people you talk to, things like that. So yeah, not necessarily film-specific, but I'm gonna test it out, because I'm remembering back to the experiments I did with ChatGPT Operator, like "find me some rental gear" or "find me an Airbnb film location." And that was what, only three months ago or something like that? Things are moving fast. I'll try to rerun them and see if Comet can do something similar.

Okay, let's see what else we got. Luma AI. Not necessarily a product update, but a physical building update: they're launching Dream Lab LA, a new physical office in LA, run by Verena Puhm and our friend Mr. John Finger. All those X videos have paid off. Hey, Verena and John, if you'd like to get on the podcast, you know how to contact us. Another Addy invite. No, I've met Verena before at a production a couple years back, so I am familiar with her, and John I've met in a meeting in the past. And AI on the Lot, I saw them both there. Because that's where everyone goes. That's right. AI on the Lot, I did not meet them there, but they were there for sure. And Dave Clark, the invite's out to you as well, bud. We need a Frame.io to keep track of all these invites.

So it's interesting. We like to crap on Hollywood, right? We say production's going overseas, the California tax credits are not enough. Sure, to some extent it is true. But then why are these companies going out of their way to build physical offices in Hollywood? Why do you think that is? Because it's still the capital. Maybe it's less so of a capital, but it's still the capital of filmmaking and media entertainment. And I think it's a cool move by Luma, because it made me think of something Bryn was talking about in his interview: a lot of the AI tools are built by engineers, for engineers, and with Moonvalley they're trying to build for creatives. And this is a key thing from Luma AI as well, where they're like, we're trying to build this for creatives, tap into the community, and set up shop where the hub of creatives is.

It's interesting you bring up Moonvalley and Luma; to me they're a little head-to-head. The Luma Modify tool is sort of like what Moonvalley does: very capable, elaborate style transfer, essentially. Yeah, there's gonna be a battle. There's gonna be a lot of overlap between these tools, and throw Runway into the mix. Competition's good, especially at this early stage. That's also why we keep getting updates so quickly. And a lot of times they've probably already developed the features; it's just a matter of testing and releasing.
One company releases an update, and then it's like, oh, we're not gonna sit and wait that one out, so we're gonna release that product feature now too. I would love for these Dream Lab folks at Luma to start on a feature film, something meaty and substantial to bite into. I'm sure it's happening. I'm sure a lot of this stuff's happening, and I wouldn't be surprised if next year we start seeing more AI stuff, whether it's in-your-face, like, hey, it's a generated film, or you don't know, and it's like, oh hey, by the way, there's a lot of generated stuff in that. You just can't tell. That's the way to do it. Invisible VFX, right? I mean, if it's a good story, what does it matter? And it gets made at the budget, what does it matter? That is ultimately what might save the film industry in its current broken state. That's exactly what Bryn talked about. Yeah, the $50-70 million budgets are hard to come by. Impossible. But if you could do the same thing, same vision, for 10 to 20, then you've got a lot of people interested. Good model. More enticing.

Alright, now some more techie updates. LTX Studio, which I always feel like is pivoting-ish. They used to be kind of a platform, but now they've started releasing their own models. They released a handful of LoRA models: Pose, Depth, and Canny LoRAs. These look like some good nodes that you could bring into Comfy to build out more powerful, complicated workflows, similar to building out your own video style transfer workflow, but running it locally on your system. And you can also train LoRAs yourself; I think we're gonna cover that in an upcoming episode. But why do that when somebody's done the work for you? Just take that and then do the creative thing you wanted to do on top of it. I mean, look at these demos: you can give it a video, it tracks the full body position, multiple characters, and then it can drive the performance of another video. Unless you're making a $10 million feature film, I don't think you should be getting into the world of training your own models and building your own LoRAs. There's enough stuff out there for you, in LTX and outside of LTX, that if you know where to look, you can come up with a substantial workflow to get to 90% of where you need to be. Yeah, for sure. And it's great they're releasing these tools so anyone can use them. So yeah, just putting that out there, something to be aware of.
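If you want to poke at LTX-Video outside ComfyUI, here is a minimal sketch of loading the base model in the diffusers library and attaching a control LoRA. The LoRA repo id below is a hypothetical placeholder, and the official Pose/Depth/Canny LoRAs may require LTX's own ComfyUI nodes or trainer for proper conditioning, so treat this as the general shape of a local workflow, not a drop-in recipe.

import torch
from diffusers import LTXPipeline
from diffusers.utils import export_to_video

# Load the base LTX-Video text-to-video pipeline in half precision.
pipe = LTXPipeline.from_pretrained("Lightricks/LTX-Video",
                                   torch_dtype=torch.bfloat16)
pipe.to("cuda")

# Attach a control LoRA. This repo id is a placeholder; swap in the
# actual Pose/Depth/Canny LoRA weights published by Lightricks.
pipe.load_lora_weights("your-org/ltx-canny-lora", adapter_name="canny")
pipe.set_adapters(["canny"], adapter_weights=[1.0])

# Generate locally: no per-generation API credits, just your GPU time.
video = pipe(
    prompt="A dancer spins under a single spotlight, 35mm film look",
    num_frames=121,            # roughly 5 seconds at 24 fps
    width=768,
    height=512,
    num_inference_steps=50,
).frames[0]

export_to_video(video, "dance.mp4", fps=24)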
And then, let's see, another update: Vidu AI, which I had not heard of before seeing this pop up. Had you? No. Had you Vidu? No, I don't Vidu yet. So they have launched an upgrade to their reference-to-video feature. They'll let you upload seven different reference images, characters, scenes, props, to generate a video. Seven? Up to seven. Which is a lot, 'cause Runway is three. I've never seen that number that high. How do you direct or cue or kind of call out the element you wanna use? When you throw seven images in there, I feel like it's just too many hands in the cookie jar. Unless there's a good way. Like, Runway is clever 'cause when you upload an image, it gives it an ID, and so in your prompt you can tag that specific image. You'd be like, oh, use the person from @image1 walking down the street, blah blah blah. So I'm looking at their videos right now, and I gotta imagine there's a way to call up or specifically call out what elements you want from each photo. Yeah, I get that, but I don't know, I just feel like if you're giving it seven references, you're doing something wrong. You know what I mean? First of all, you should already be working at a shot-specific level of granularity, so you're not trying to generate a whole sequence in one go. You're already down to a shot, which is typically, I don't know, 10 seconds, 30 seconds tops. Well, I don't know, I mean, if this helps you save... oh, I see a demo. There is a way to tag the images, similar to Runway. But to your point, if there is a way where it's like, we're gonna shoot in this room with these two characters and this prop, that's four references, and I give it that once, but then keep generating... Basically, if I can cut out the step where I have to generate a first frame in another tool first and then use that to drive the video, that's pretty cool. That's useful; that helps save time. So if I can give you all these elements, like, this is what I want to use in my scene, and then I can, quote, direct the scene and have different shots and positions, and I'm not gonna have to keep re-uploading stuff? That'd be cool, if it works. Yeah, I'm curious to try it. I would try it out.

And then another image-to-video update, this one from Baidu, the Chinese company. Baidu is huge. MuseSteamer, their image-to-video model, newly launched at Baidu. I think Baidu is like their Amazon. I did go to the website and it is only in Chinese, so it was hard to navigate. Yeah, Baidu is as big as, like, Alibaba or Tencent. That's what I thought it was; isn't this one of their massive e-commerce companies? Yeah, and this is their AI arm. So with MuseSteamer, you can do ten-second clips at full 1080, with various versions of the model: Turbo, Lite, Pro. I wonder if this stuff is gonna be available for free or open-sourced in any way. I'm not sure; I did try to translate the website, but it was still hard to navigate with the translation. I mean, quality looks good, and in a world where we're already seeing a lot of models, sure, come on in, here's another model, and another one. But I'm interested to see... I feel like 1080, full HD, is sort of becoming the standard, or the baseline, now. To come in with a new model, it's gotta be at full HD quality. This takes me back to when we transitioned from standard def to HD. Remember the battle between HD DVD and Blu-ray? And ultimately Sony and PlayStation won.
And ultimately it didn't even matter, 'cause we went streaming. But no, it did matter, in the sense that for a couple of years all the colorists and all the editors had to conform to that new resolution, which helped push Final Cut and Premiere and Avid into the products they are today. So I'm seeing the same sort of thing: we're gonna see almost a plateau where resolution hits, and then another plateau to 4K, hopefully, over the next couple of years. Yeah, I'm curious; I remember talking about this with Topaz, like, are we gonna get models that just keep pushing out 4K, or do you kind of want to keep it around 1080 and then up it? I'm guessing here, but both Moonvalley's Marey and Baidu are probably upresing under the hood. You think they're generating something smaller and upresing it before they give it to you? Yeah, because typically the generations are square formats, one-to-one aspect ratios, and then what they're probably doing is upresing it enough and cropping to get to 16 by 9. That's my guess. But hey, if you're out there and I'm wrong, definitely let me know. Yeah, I'd be curious what's happening under the hood. I imagine as compute and everything gets better, there'll probably be 4K outputs that look sharp. That's the thing: a few years from now... video generation is so expensive compute-wise right now that you don't wanna waste a single pixel. So if you can do the bulk of your computation and inference at a lower resolution, let's say 720 by 720, and then once it's a decent image, upscale it to 1080 by 1080 or 1920 by 1920, and then crop it to 1920 by 1080. Something like that. Yeah, I can see that.
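To make that speculation concrete, here is a tiny sketch of the upres-then-crop idea using Pillow: generate square and low-res, upscale, then center-crop to 16:9. The resolutions are just the ones guessed at above; whether any of these models actually do this under the hood is, as discussed, unconfirmed, and a real pipeline would use a learned upscaler rather than plain resampling.

from PIL import Image

# Pretend this 720x720 square frame came out of the diffusion model.
frame = Image.open("generated_720x720.png")

# Upscale the square to 1920x1920. (A production pipeline would use a
# learned super-resolution model here, not Lanczos resampling.)
big = frame.resize((1920, 1920), Image.LANCZOS)

# Center-crop the square down to a 16:9 full-HD frame.
left, top = 0, (1920 - 1080) // 2
hd = big.crop((left, top, left + 1920, top + 1080))
hd.save("frame_1920x1080.png")
print(hd.size)  # (1920, 1080)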
All right, and then the next one: Grok 4. Uh oh. Oh boy. We won't get too into it, but if you're following the news, Grok 3 went a little bit crazy, in the worst ways possible. The Nazi stuff, yeah. And also the CEO resigned. Aside from that, it didn't slow them down from releasing Grok 4. Nothing really in the image generation space; a lot of the updates with Grok 4 were more around performance and reasoning enhancements. A lot of benchmarks cited, and it did really well on certain benchmarks. We've talked about benchmark stuff before, where I'm just like, eh. I mean, in a land where we have ChatGPT, we have Claude, all these amazing LLMs, who's using Grok? Who's paying for it? Good question. I don't know. I dunno if you're like, our model needs to be a little bit more unhinged. Well, I think that's what Elon said: Grok has levels of unhinged, and I think at the highest level of unhinged there is no alignment, so it can do whatever it wants, say whatever it wants. I did see some other article that they're gonna roll out Grok into Teslas starting next week. See, that makes sense. But that's kind of weird. How does that make sense? Like, can Grok take over your car? No, no, I don't think it's that. Tesla has a lot of voice command stuff, like "navigate to...", and instead of that you can tell Grok, hey, I'm really hungry, feeling like a pizza, but I'm not sure where to go. And then it's gonna be like, "You really want a pizza, asshole? You sure about that?" And you'll look in the mirror. Yeah, we'll see how this goes. It does have an internal camera, so it's probably looking at you like, "You've put on an extra 10 pounds, let's take you to Tender Greens instead." It's like, I didn't ask for that.

I don't know who is using the API on Grok; I'm curious about that. I did notice they're also adding a $300-a-month plan for Grok, I'm assuming to use the Grok super, super turbocharged pro. To me, of course you can fund the whole company from the billions that Elon has, right? But what I'm trying to get at is: what is the actual P&L? What is the actual revenue of this company that's putting out very expensive LLMs? They're not cheap to build. Also, you see what happened with Grok 3 and you're just like, hmm, do I want to use that model on my company data when it could just go off the walls? Doesn't feel enterprise-safe, and enterprise is where most of the money with APIs is.

I'd be curious. I mean, I always kind of forget about Grok. There were some instances where I was thinking maybe, for some kind of off-the-wall-ish uses. I don't mean this in a trying-to-skirt-the-safety way, but sometimes the LLMs are a little too cautious depending on what types of projects you're trying to do, or don't understand pushing the limits of human creativity. And I was thinking, maybe Grok would work for that, 'cause it doesn't seem to care as much about safety. Depending on what you're doing, that might be a potential use case. Like if you're Tarantino and you're writing a script, maybe you need the Grok model to help you write it, to explore fictional creative realms, what's the word, previously taboo territory. And that's art, right? Because this is an issue even with the image generators: you're working on a zombie film or a gore film, and it's tough to generate stuff because they always have gore filters. Not even corpses; slight blood on a zombie gets blocked. That's an interesting point. Going back to Marey and Luma: because they're creative-focused, Hollywood-focused tools, you also have to be more, quote-unquote, unhinged, right? To allow for more NSFW output or what have you. I mean, look, obviously you want protections against abuse, unauthorized deepfakes of people, and abuse of these platforms. So maybe with Marey and Luma they have a different tier for vetted creatives, or maybe that gets unlocked the minute you start to pay for the premium, highest-end license that only big studios can pay for. I don't know. I'm hoping there's more of a vetting process, so that smaller creators don't have to pay enterprise prices just to make a crazy zombie horror film. Yeah. But I would see that as
a decent middle ground, where it's like, okay, you agree that we can audit your creations to make sure you're not abusing it, and you're a vetted creator, and we'll give you access to the less protected version. These are all interesting questions, and we're navigating through such interesting times. It's moving super fast and everyone's figuring it out. For sure.

That is it for the updates I've got on my list. Good roundup of the stuff in AI this week. A good week, I mean, yeah. Between Marey, Veo 3 getting image input, and Comet, that's probably been the bulk of my feed. But yeah, nice stuff from Marey; I'm excited to mess around with it more. I'm really optimistic for the future of AI in Hollywood. This seems like exactly the right step we need, and kudos to Luma also for opening up a Hollywood studio. I'm really curious to see what that's gonna look like as well.

So yeah, exciting week. Again, if you wanna learn more about Moonvalley and Asteria and all that, we've got the two-parter interview with Bryn, so go check that out. The links and everything we talk about are, as usual, at denopodcast.com. The Bryn episodes have had a ton of comments, on both sides of the fence, which we welcome; we welcome a debate. So go ahead and engage with those if you haven't seen it. But make sure you watch the full episode first, 'cause with a lot of these comments I can tell you did not actually watch the video. So watch the video and then leave your comments, so they're informed comments. Yes. All right, thanks everyone. We'll catch you in the next episode.