
Denoised
When it comes to AI and the film industry, noise is everywhere. We cut through it.
Denoised is your twice-weekly deep dive into the most interesting and relevant topics in media, entertainment, and creative technology.
Hosted by Addy Ghani (media industry analyst) and Joey Daoud (media producer and founder of VP Land), this podcast unpacks the latest trends shaping the industry, from generative AI and virtual production to hardware and software innovations, cloud workflows, filmmaking, TV, and Hollywood industry news.
Each episode delivers a fast-paced, no-BS breakdown of the biggest developments, featuring sharp analysis, under-the-radar stories, and practical takeaways for filmmakers, content creators, and M&E professionals. Whether you're pushing pixels in post, managing a production pipeline, or just trying to keep up with the future of storytelling, Denoised keeps you ahead of the curve.
New episodes every Tuesday and Friday.
Listen in, stay informed, and cut through the noise.
Produced by VP Land. Get the free VP Land newsletter in your inbox to stay on top of the latest news and tools in creative technology: https://ntm.link/l45xWQ
The Oscars Say AI is OK, Test-Time Training, and Descript Goes Vibe Editing
The Academy clarifies its stance on generative AI in filmmaking, along with a few other new rules. Then we break down a new AI paper on Test-Time Training and how it generated minute-long Tom and Jerry shorts. Finally, we look at the "vibe editing" tool Descript teased and what it could mean for video editing.
In this episode of Denoised: the Academy sets new rules on how films can use AI, a new AI model for long-form video called Test-Time Training, and Descript teases a new feature for vibe video editing. Let's get into it. All right, welcome back to Denoised. Addy, good to see you again. Good to see you again. I found out that today, the day we're recording, is the 20th anniversary of YouTube, the 20th anniversary of the first video ever being uploaded. Wow. I feel old. It's been 20 years. Although I was using YouTube very early on. I'm sure you were. Yeah, I think we talked about this a few episodes ago, where back in the day, especially in film school, you'd make your film and it was like, do I put it on YouTube or do I put it on Vimeo, where film cinema is? Yeah. And YouTube was sort of the weird cat-video place. Yeah. The aha moment for me was when a buddy of mine wanted to share something that happened on Hollywood Boulevard, late night after hours, and I was like, oh, send it to me, I want to see the video. And he's like, oh, I uploaded it to YouTube, I just sent you the link. Whoa. Yeah. I don't think people under our age realize how complicated it was to play video on the web, because you would have to upload the video, and if someone wanted to play it, they'd basically be downloading the video file. It was common practice to upload three different versions of your video, small, medium, and large, and depending on their bandwidth, people could pick which option they wanted. Yeah. AVI, MP3, MP4, DivX. DivX, I forgot about DivX. And the original YouTube was built with Flash, right? Was it Flash-based? I think so, yeah. And then they eventually switched to HTML5, and I have no idea what it is now. Yeah.
It's probably its own unique ecosystem of tools. But do you remember the very first mobile phone playback of YouTube? I remember the iPhone; one of its big features was that YouTube was a built-in app. Crazy. And I remember the first time seeing someone who had an iPhone play a video back. Yeah. It was this crazy moment where you're holding what's kind of like a glass brick in your hand and it's just playing a video. No borders, no buttons, just a smooth piece of glass with a video plane. Yeah. I feel like we've aged ourselves a lot. No, I like those moments of amazement. I compare it to the first time I saw Sora's preview video. That was, what, two years ago? Not a year ago. A year ago. It was a year ago. Time flies. And it was the same level of amazement: whoa, AI can generate an entire video. Yeah. I think I had dismissed it at first, because I saw the clips and kept scrolling, like, oh, that's just a reference video or something, that's not the AI output. And then I realized, oh, what? No, that's the real deal. Yeah. All right. Actually, before we get rolling, real quick: we appreciate all the love and support, but we have a favor to ask if you're enjoying this podcast. We've been getting a lot of love, usually on LinkedIn, sometimes in the YouTube comments, and that is amazing. If you want to copy and paste those comments onto Apple Podcasts or Spotify, besides YouTube, that is the best way to help people discover this podcast and for it to grow. Podcasts are notoriously tricky and hard, and it's also crazy because podcasts are about 20 years old. A little more than 20 years old, really, I believe. You know, it's funny, I was doing a short film in college on
life hacking, back when it was an early concept, lifehacker.com and all that. The guy who created the term "life hack" had it nominated for Word of the Year, I think in 2003. Okay. And it lost out to "podcast," which was the word of the year. Amazing. I think the "pod" comes from iPod. Yeah, I believe so, because it was sort of an Apple-developed thing. Yeah, it was on iTunes at the time, with the RSS feeds and all. Yeah. You had to host it yourself. Anyway, 20 years later, they still have not really improved how people discover podcasts. So the ratings are one of the biggest ways that podcasts go up in the ranks and people discover them. So if you would not mind, please leave a review on Apple Podcasts or Spotify. We appreciate it. Yeah, we really appreciate it, and we want this to have value not just for your career and your professional life, but also be a little bit of fun. We like to laugh and riff on things, and we hope you enjoy the conversation. All right, and speaking of interesting things, yeah, and AI: we have talked about this before, but now there's an official ruling, with new rules from the Academy Awards, and they have officially clarified their stance on generative AI. No AI, right? No, no. They said they're looking at it as a tool. Wow. And you are free to use this tool. There's literally just one paragraph in the new rules. I'll just read it, because it's pretty short. Sure. "With regard to generative artificial intelligence and other digital tools used in the making of the film, the tools neither help nor harm the chances of achieving a nomination. The Academy and each branch will judge the achievement, taking into account the degree to which a human was at the heart of the creative authorship when choosing which movie to award." That's fair and square. Yeah.
It's a tool, as we've been talking about, and that's how they see it. And, you know, where do you draw the line? We talked about this before: where do you draw the line if you're going to track what generative AI went into a project and at what stage? So yeah, I think this is a smart decision. Yeah. The Academy really unlocked a lot of filmmakers in today's world with this simple paragraph, because you are up against crazy budgets and crazy resource limitations, and there are these tools that are incredibly helpful. Mm-hmm. But you're not sure if, if you use them, your project gets disqualified from the Oscars. So I think this alleviates a lot of concern. Of course, you can't just generate something fully, with little to no human involvement, and expect to win an Oscar. That's just never going to happen. No. And I think that's where the voting comes in; you would not get nominated or voted for. I'm a little curious, though, about the second part: each branch will judge the achievement taking into account the degree to which a human was at the heart of the creative authorship. How do you know that? Is it based on interviews? Is it based on director statements, after the fact? One good thing about the Academy, I will say, is that they're incredibly technically capable, because of the people on the SciTech committee, so they're very well aware of all the AI tools and practices. When the shortlist of movies gets to the top of the chain, my guess is there'll be a vetting process, a formal Q&A process, and they'll quickly determine whether the usage of AI was in line with their expectations or not. Yeah. And then the voting members can make that decision, as it should be. I mean, the peers are voting on peers. Yeah.
You know, it's their call if they feel like it was authored enough by a human or went too far with AI use. We'll see how that plays out in the next few years. Yeah. One of the things I always wonder about is: okay, a visual medium, frames of an image, that's pretty easy, at least to me, to see what part was traditional CG versus AI. Based on knowing the pipeline or talking to the VFX supervisor, you can quickly determine: is the use of AI here fair and good, did it enable a filmmaker in a certain way, or is it just unethical, if you will? Sound and composing music, that stuff gets way murkier. Like, have you used Suno? Mm-hmm. Some of the stuff on Suno sounds really good. Yeah. Especially if you use it as a background beat or something. Yeah. And it's hard to tell how much human intervention went in there, right? Like if you mix a Suno track with human tracks and a human-played instrument, or if you use it to generate some background beat, a track that gets mixed in with the real instruments. Where does that land? I don't know enough about audio production at the highest level to weigh in, but my guess is that other uses of AI outside of frames will come into play as well, across all aspects of filmmaking. Yeah. And I would assume, you know, if you used tracks on your soundtrack that were completely generated with Suno, that would disqualify you from a Best Score nomination or something. Yeah. Especially because I think Suno as a platform is still a question mark on how they trained their model and what the outputs are. Yeah. And whether they're copyright-clear or protectable. Correct. Yeah. I mean, I don't think it's the Academy's job to make those calls, but obviously that's up to each production and studio. Yeah.
To figure out as well, since the copyright stuff is still a bit up in the air, mm-hmm, as far as whether the outputs are protectable; that's more of a project-to-project decision. Does this feel like a step in the right direction? Yeah. What were some of the early ideas? One early idea was possibly disclosing or tracking all the use. That's right. That just seemed a bit cumbersome, and where do you draw the line? You're going to need a whole overhead department to monitor the use of AI. Yeah. And where's that budget coming from? What about the pitch deck I made three years ago that used some AI-generated images, but maybe never came into play again after we got funding? Where do you draw the line on tracking what tools get used? So I think this makes sense. Yes. It's a tool, and it's up to the voting body to make the call on their peers. It goes back to the James Cameron thing, the little title card on the next Avatar: no generative AI was used. Yeah. I guess you don't really need that now. Yeah. I'd be curious whether they just remove that when the film does come out, if people have forgotten about it, or if there's more acceptance of AI or less bias by then. I don't know about the bias thing; as we talked about last episode, people were biased against stuff labeled as AI-generated, but didn't actually have a preference when they didn't know it was AI-generated. The public bias won't go away overnight. That'll take a long time, possibly years, because AI is not just impacting media and entertainment; it's impacting automotive and industry and quote-unquote taking away jobs, you know? Mm-hmm. So there is a negative connotation across the board, although I see it as an enabler in all those industries. The same person can now do their work better.
The company can output twice as much, and so on. Having said that, the financial reasons why AI is getting adopted in M&E are rock solid, right? Like Ted Sarandos: if they see a 10% efficiency gain, they're going to go for it. Why would you not? Yeah. If the tools are there, and if it keeps the quality the same and, for the most part, keeps the same amount of people employed. Yeah. I went to a dinner party a couple weeks ago and somebody reminded me of what we already knew: people getting in the way of technical progress never really fared well throughout history. Mm-hmm. Yeah. It's the Indiana Jones boulder; it's coming. Yes. So I'd rather get ahead of it than get crushed by it, or have a spear impale me. The other interesting thing that came out of the new rules for the Academy Awards, which I think got more attention than our little AI corner, is that now you're required to actually watch the films to vote on them. You mean this whole time there was voting done without watching the films? There was no requirement or rule that said you had to watch the films before voting on them. Oh boy. Yeah. You need to watch all the nominated films in your category to vote in your category. I'm guessing this was probably more of a tech thing, where it was just too hard to enforce and track in the past. But lately there's basically a Netflix-ish streaming platform for Academy members where they can watch all of the films, and that's obviously trackable, so they can track whether you played it on that system. And then I think I saw some reporting somewhere else that if you see it at one of the Academy screenings, you have to send a ticket stub or provide proof that you saw the movie. But they are tracking it.
I'm guessing it's just that the tech is easier to implement now. Yeah. I mean, Academy members at that high level are very busy people. Mm-hmm. Right? So maybe they just don't have ten hours, if it's five nominations at two hours each. Yeah, I get that. But then don't vote. Don't vote, you know. I mean, yeah, a lot of times it turns into people voting for their friend. That's a lot of the art of Oscar campaigning: is it really about the merit of the film, or is there a story behind the actress or the film, and people are voting for the story, or voting because it's this person's time? Yeah. At the end of the day, you should vote for the end product. Yeah, a hundred percent. Or at the very least, watch everything that is nominated and then make a decision. Correct. Your decision is biased. Which, I mean, sure, of course there's going to be bias. Yeah. So, you know, glad technology's helping there. Dude, the Academy is doing some good stuff over there. Yeah. But yeah, overall a good one. And they've also got a new award: casting director. As a new award. Okay. And then in a few years they're adding stunt coordinator. Oh, they added a new category? They added a new category: stunt coordinator. Yeah. Okay. I think that's rolling out in '28. So John Wick's going to win that one if they just bust out a John Wick 5 or 6 by then. Yeah. I heard the John Wick franchise is going to keep going and possibly have a couple more spinoffs. Yeah, well, they've got that Ana de Armas one coming out, Ballerina. Yes. This summer. Yeah. So maybe Ballerina 2 or 3 will get the stunt coordinator Oscar win. Excited to see that.
What do you think about this: there's always talk that if they add more awards, the telecast is going to get longer, and with the more technical awards, the general population gets bored. Do they want to move the shorts off broadcast? I feel like the shorts are always kind of a weird thing. I like the shorts. I'm not saying get rid of the category; I'm just saying, does it need to be on the telecast? Yes. Because that is the, quote-unquote, DEI version of including the small, low-budget filmmakers who can't afford to make a feature. This is their chance at the highest level. I mean, they can still get the award, they can still get honored. Just don't include it in the broadcast, you're saying? Yeah. Because, I mean, it's really hard to see these films. Sometimes some of them break through; I think the one that won a few years ago was an LA Times documentary about one of the music schools. Okay. But for the most part, no one's ever heard of these films or seen them. I mean, it's very hard to even watch them unless they're already playing somewhere online. It's about the recognition. So they just want the recognition. And nobody wants to be recognized off camera, I don't think. No, but these are shorts. I'd say, as someone whose closest brush with the Academy Awards was having a film that qualified: it didn't actually make the shortlist, but it qualified for the Academy Awards. Really? For a short doc. Oh, that's cool. Yeah, I won best doc at a festival, which qualified it to be nominated. Okay. It was not actually nominated, but that was the closest it ever came. Yeah, I would totally understand.
If it was like, oh hey, cool, your short doc was nominated, but it's not going to be on the telecast, though you can still go to the Governors Ball or whatever ceremony where they present the award and announce it; it's just not going to be on TV. I would understand that. Yeah. Especially if it also cleared up spots for other roles that worked on feature films. I guess, to your point, the Academy Award, even just the nomination, can make careers, right? Yeah, for sure. And you don't need TV coverage for that? No. Yeah. I mean, you could say "Academy Award-nominated director" even if the nomination was for Best Animated Short. Sure. Still worthy enough. Yeah. The Academy's probably under pressure to change the format of the show as we move through time, right? It's a very classical format. It's very long. Mm-hmm. And now we have shorter attention spans; there's YouTube, there are all these other things. So my guess is they're just going to keep doing what they're doing and hope that it's so etched into pop culture that they don't really need to change. It's kind of like Saturday Night Live: that format is from the seventies, right? And now they're still producing the show as is, but they're chopping up bits and putting them on YouTube as shorts, sort of modernizing with the times. But something like the Oscars, something like the Super Bowl, just has to be the way it is. Yeah. I mean, I'm not for cutting out any of the technical awards. Remember a few years ago, they mixed it up and gave out, I think it was Best Editor and Sound Mixer, either during the commercial break or in some weird way where they just walked down the aisle, handed them the award, and the winners didn't really get to do a speech. It was bizarre and weird, and that's not good.
And there was a lot of backlash, I think rightly so. So I am not for that. Fair enough. I think just keep the focus on the features, at all technical levels. Look, there are separate sound design and sound mixing awards, and 99.9% of people have no idea what the difference is. I have a hard time explaining the difference, like why there are two separate awards for that. But don't cut it. That's still part of the art of filmmaking. Yeah. Now you've got me thinking about sound design and sound mixing. All right. But yeah, if you've got opinions about modifying the Academy Award broadcast, give us a shout. Should the shorts stay on, or should they go to the streaming service? All right. Next story. Tom and Jerry? Yeah. One of my favorite cartoons as a kid. I'm dating myself, but this is before computer-generated, Pixar-style animation. Tom and Jerry is what a lot of older millennials grew up on. There was a period, I don't know if it was the same for you, but with Cartoon Network. Yeah. I feel like there was a period where they seemed to have stopped making new cartoons. Yes. And so a lot of the stuff was the 1960s and '70s Hanna-Barbera reruns, like Flintstones and Jetsons. Jetsons, Scooby-Doo, yeah. Jonny Quest, the original Jonny Quest, not the remake. Yeah. That and Tom and Jerry, Looney Tunes, that whole Bugs Bunny era. So yeah, I love Tom and Jerry. Yeah. So Nvidia, Stanford, a bunch of universities: Stanford, UC San Diego, UC Berkeley, with Nvidia at the helm. They created a new video generation model called Test-Time Training. So right now with video generation, one of the big issues is length. Mm-hmm. Right? And I think you and I were talking about this: the longer a video you have, the longer the temporal consistency has to hold from frame to frame to frame.
So naturally, the heavier the memory, if you will, of the AI model has to be. It has to remember where the attention was on each frame as you progress through the frames. Right. And the more frames it's got to look back over. Yes. To make sure that the whole thing makes sense. But every time it looks back, the cost isn't linear. It's not like going from two seconds to three seconds just adds one more unit; the amount of tokens and memory it needs grows quadratically. It gets astronomical. Absolutely. And that's one of the main reasons why video generation is so much more compute-intensive than image generation. Mm-hmm. So this research paper goes through a new approach called Test-Time Training, where they reduce that compute overhead by a big factor, and they're also able to get such strong temporal consistency that they could generate, in the case of Tom and Jerry, a one-minute video. Mm-hmm. Which is unheard of from a single generation. Yeah. And so do you know what Test-Time Training is doing that's different? So, what it's doing is essentially on the attention-layer side. Attention in an AI model, I know it sounds like a very vague term. When an AI model generates something, it has to focus, it has to put attention on the thing it's generating, and also cross-attention on the other things in the frame. And then it also has self-attention, which is: in the last frame it generated Tom, so now in this frame it has to generate Tom again; that's self-attention. And then Jerry would be like a cross-attention, because it's another thing in the same frame. So there are all these attention mechanisms built into the video generation model. This has, I think, a fundamentally different attention hierarchy.
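The blow-up described here can be sketched with a toy back-of-the-envelope calculation. Full self-attention compares every token with every other token, so cost grows quadratically with clip length. The 256 tokens per frame is a made-up assumption for illustration, not a figure from the paper:

```python
# Toy illustration of why attention cost blows up with video length.
# Assumes a hypothetical 256 tokens per frame at 24 fps; real models differ.
TOKENS_PER_FRAME = 256
FPS = 24

def attention_pairs(seconds: float) -> int:
    """Full self-attention compares every token with every other token: n^2 pairs."""
    n = int(seconds * FPS * TOKENS_PER_FRAME)
    return n * n

for sec in (1, 3, 10, 60):
    print(f"{sec:>3}s clip -> {attention_pairs(sec):,} token pairs")
```

Doubling the clip length quadruples the token pairs, which is why a single sixty-second generation is so far out of reach for naive attention.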
Is it basically, instead of reviewing the entire video, learning within itself while it's making the video, so it doesn't have to review the entire thing? Yeah. And they released the paper as well as the code, so you can actually build this into your video generation model, which is super nice. It sounds like typical video generation goes back over a number of frames, and the attention has to run frame by frame. So you're talking about, in a 24-frame-per-second video, roughly 40 milliseconds of attention span at a time. TTT works in three-second segments of attention instead. And because the time span is much longer, it's able to store a lot more information about each of those three-second segments, which results in more consistency in the next three-second segment, and the next one, and so on. So instead of just having a frame reference, it has a sequence, a shot reference, if you will. It's just a higher level of abstraction. I'm not sure exactly how they achieved it, but it seems like it's computationally about as efficient as a regular video generation model. Yeah. It's a way where it can figure things out as it goes along, versus looking at the entire thing again. And there was a sample video, a minute long. Yeah. A Tom and Jerry video. Can we take a look? Yeah. So we've got the scene. And the interesting thing was, the input was basically a text prompt; they did have to give it a text input. Oh, look at the text. The text is a little bit jumbled. Yeah. That walk cycle wasn't that great there. Yeah. And the people are moving oddly, but Tom looks great. I mean, this looks like Tom. Yeah. Even the style of the world is very consistent. Yeah. And there's a computer, which would not have been in the original Tom and Jerry training data, right? I mean, the keyboard's jumbled, but it still works.
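The segment-based idea described above can be sketched as a toy loop: instead of attending over every past frame, the model carries a small fixed-size state that is updated once per three-second segment as generation proceeds. The actual paper updates a neural hidden state with gradient steps at test time; the `TTTState` class, the running-average "update," and all numbers here are illustrative stand-ins, not the paper's method:

```python
# Toy sketch of Test-Time Training's segment-wise memory: the model keeps a
# compressed, fixed-size state per segment instead of raw attention over all
# past frames, so memory stays constant however long the video gets.
from dataclasses import dataclass, field

SEGMENT_SECONDS = 3

@dataclass
class TTTState:
    segments_seen: int = 0
    summary: list = field(default_factory=list)  # compressed memory, not raw frames

    def update(self, segment_frames: list) -> None:
        # Stand-in for a gradient step on the hidden-state "model":
        # compress the whole segment to a fixed-size summary.
        self.summary = [sum(segment_frames) / len(segment_frames)]
        self.segments_seen += 1

def generate(total_seconds: int, fps: int = 24) -> TTTState:
    state = TTTState()
    frames_per_segment = SEGMENT_SECONDS * fps
    total_frames = total_seconds * fps
    for start in range(0, total_frames, frames_per_segment):
        segment = list(range(start, min(start + frames_per_segment, total_frames)))
        state.update(segment)  # state size stays O(1) as the video grows
    return state

state = generate(60)
print(state.segments_seen)  # a 60-second video is handled as 20 three-second segments
```

The point of the sketch is the shape of the loop: each new segment only sees the compact carried state, not the full history, which is why the compute stays comparable to a regular generation while the consistency window stretches to a minute.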
His mannerisms are still so good. And so are Jerry's. Yeah. Like he's hitting the poses. Yeah, I mean, Jerry just disappeared there, but look, the spinning motion of the feet is there. The behaviorisms. Oh, this? Yeah. And even when he hits the wall, that comic-book explosion thing? Yeah, the dust explosion. Oh, love this dog here. Yeah. He's the bully, right? So to make this, they had to give it a text input, basically a detailed shot list. But it's still a single text input, and this entire minute-long video was the output. I wonder if the fade-out was even part of it too. Yeah, because the original show had that same fade-out. Yeah. This is so impressive, because you feel like that could never be a single generation, right? You look at that and you're like, oh, I'd have to generate shot after shot and then edit it together myself. Yeah. And every time you do that, you're going to have discrepancies between the shots. Mm-hmm. Right? Oh, that shade of color is different from this shade of color, whatever. Yeah. But the thing this unlocks, if it's able to roll out... I mean, they trained it on, I think, billions of parameters' worth of Tom and Jerry material, so it has a lot of training data. But if this could roll out to where you give it a single character image or something, that could be huge, because it unlocks the consistent-character problem we have, where right now a lot of the process is: create a character sheet, then try to create the frames you want with the consistent character, and then give that to Runway or Pika or Luma or whatever to generate your videos. Correct. But if you could give it your one character and then say, make a sequence: crazy.
The same character. That unlocks much more consistent characters, environments, and scenes, if it can all be done in a single generation. That's so cool. And on the topic of character consistency, it's even more difficult to have multiple-character consistency, like having Tom and Jerry in the same frame, and then in the next frame and the next frame, without distortion. That's difficult. And they seem to be doing it without the step you mentioned of generating the frames and then feeding those into a video generator. Yeah. Very cool. Awesome paper. And this came out a few weeks ago, right before NAB, so I was preoccupied. We were preoccupied. Yeah. But I wanted to come back and revisit it, because I think it was really impressive what this unlocked and what it could lead to in the future. Yeah. I think we're going to see longer video generation from the commercial players, like Runway and so on, very soon, because why not? Right now you're at an upper limit of maybe 20 seconds with those guys, and now you're going to a minute, and then who knows, maybe in a few months we'll push to five-minute generation. And switching shots, too, because that's the one thing. Correct. A lot of times, if you make a minute-long POV drone-flying video, it's like, cool, but what am I going to do with that? But if you can give it, like they did with Tom and Jerry, here's our shot list, and then generate that. I wonder if, at this sort of mid-quality level, an artist can go in manually and fix all those things we noticed and have it be usable, even at this level? Oh, like just come in and clean it up? Yeah, like a paint fix. Probably. Yeah. I mean, it's already so usable. Yeah, for sure.
I mean, this could probably unlock a lot even just for YouTube kids' cartoons. Yep. Big market over there. Yeah. All right. And then our last story: Descript's video editor app. We've covered Descript before. I'm a big fan. I also think there's a lot they can improve from a professional video editing standpoint, but they have teased basically vibe video editing. It's in private beta right now. Mm-hmm. Rolling out. But they released a launch video for basically a chat interface inside Descript, where you just have a conversation and tell it what you want. Mm-hmm. And it edits the video based on your feedback. I think that's an intermediate step. We're not going to be prompting our way through video editing forever. Ultimately, video editing will still be done through a timeline; it'll just be a much easier timeline task than it is today. Yeah. I think this is interesting because I've seen other text-based video editing tools. I messed around with Eddie AI. I didn't really think it was quite there, and it was just a text prompt interface: it gives you back the thing, you either take it or leave it, and then you could send it to your video editor. Right? But you can't manipulate the timeline there. With Descript, it's all built in. You have this chat interface, it uses the AI tools Descript has already started adding, but then it's making the changes to your composition, which is either the text-based interface Descript is known for, and I still think they have the best text-based editor. Sure. Or you can pop up the timeline on the bottom, and if you're more familiar with timeline editing, it's the same interface. It's there; you can manipulate and fine-tune the stuff. Yeah.
Some of the examples they're giving were, I mean, a lot of their use cases are more on the social media marketing video end. I've probably pushed it to the limit of what it can do, uh, giving it hours and hours of interview footage. But their main use case that they focus on is, like, podcast editing. You know, angle switching. Yeah. Layouts, trimming out bits, and low-hanging-fruit stuff. Yeah. Marketing videos. Yeah. Tutorial videos. Yeah. Very much geared towards, like, content creators and marketing teams. And so the use cases are targeted for that, where you can give it, you know, a text instruction of, like, clean up this interview, or edit this interview and, you know, switch, uh, add B-roll shots. Mm-hmm. And then it kind of understands the context of what should be shown there. I think in the demo it also included some examples of, you know, the person is talking about this thing and here's this B-roll shot, so we should, like, edit that in here. So, interesting. Promising. Yeah. And I think for the market they're targeting, like the more consumer end of things. Mm-hmm. Maybe this is enough. I do think the winning formula is in the hands of Resolve and Premiere. The way you can, uh, bring in generative elements into Premiere now, like B-roll stock footage or whatever, or extend the timeline past what it's capable of and generatively fill it. In Resolve, you can do the, uh, rotoscoping, the matting, the power windows, podcast camera switching, all with AI. Like, adding AI super tools into an existing timeline-based tool. Yeah, I think that's the way it's eventually gonna go. Uh, for sure, and especially for professional video editors that are already working in these tools. Like, these content creators, when they get big enough, the video's handled by a professional video editor. Yeah, right. So there's like a small intermediary step where they don't have enough technical know-how to really produce a polished, edited video. And for that, Descript is perfect.
Or even just, I mean, even if you have a professional video editor and you're just looking to put out more content. Yeah, I mean, this is a bottleneck we have right now, like with all the NAB stuff, where it was like, you know, I brought on some additional editors, but even with that, it was like, between editing the videos, we also want to edit short clips and social media clips. Yeah. And so, you know, we're using Opus for that. But Opus, you still have to clean up. It just turned into bottlenecks of, like, getting everything out that we wanna get out. Yeah. And so yeah, if these tools can help speed up an editor to, like we talked about before, get the first cut. Yeah. And then they can come in and clean it up. Or help automate, like, finding good moments for social media clips and cropping it, you know, tracking the face and cropping it for vertical and adding the captions, and then maybe grabbing some relevant B-roll or something to make it more interesting. Right. You know, that stuff will be huge in helping unlock just what a single editor or a team can do. Absolutely. Do you think the way Descript is now, it would actually help you with all the NAB bottlenecks you're having, just the way it is now? My biggest issue with Descript is, in the way that we use it, it's a one-directional pipeline. Mm-hmm. Because the stuff has to start in Descript, and then they have good export options to send it out, like export the timeline in XML and bring it into Resolve or whatever. But once it's out, it's out. There's no back and forth, back and forth. We do, sometimes, import the final exports of the video just to do captions, 'cause their, yeah, text editor is nicer and their, uh, captioning is pretty good. Right. Uh, and it's easier to edit and clean up the captions in Descript. We could do the social media edits in Descript. Mm-hmm.
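That one-directional XML handoff from Descript into Resolve or Premiere can be sketched in code. The following is a hypothetical, heavily simplified illustration of building an FCP7-style XML (xmeml) sequence, the interchange format NLEs commonly import; a real export carries far more metadata (frame rates, file paths, audio tracks), and the `build_timeline_xml` helper and its clip tuples are my own invented names for the sketch, not anything Descript ships.

```python
import xml.etree.ElementTree as ET

def build_timeline_xml(clips):
    """Build a minimal xmeml-style sequence.

    clips: list of (name, source_in_frame, source_out_frame) tuples.
    Hypothetical/simplified structure for illustration only.
    """
    root = ET.Element("xmeml", version="4")
    seq = ET.SubElement(root, "sequence")
    ET.SubElement(seq, "name").text = "Podcast Cut"
    # One video track holding the clips back to back on the timeline.
    track = ET.SubElement(ET.SubElement(ET.SubElement(seq, "media"), "video"), "track")
    timeline_pos = 0
    for name, src_in, src_out in clips:
        clip = ET.SubElement(track, "clipitem")
        ET.SubElement(clip, "name").text = name
        ET.SubElement(clip, "in").text = str(src_in)    # source in point (frames)
        ET.SubElement(clip, "out").text = str(src_out)  # source out point (frames)
        ET.SubElement(clip, "start").text = str(timeline_pos)
        timeline_pos += src_out - src_in
        ET.SubElement(clip, "end").text = str(timeline_pos)
    return ET.tostring(root, encoding="unicode")

xml_doc = build_timeline_xml([("intro", 0, 120), ("topic-1", 450, 900)])
```

The key point from the conversation is that this flow only runs one way: once the XML is handed off and the edit continues in Resolve, there's no clean path for those downstream changes to round-trip back into Descript's text-based view.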
But Opus is just better at identifying the good moments. Yeah. Because their model is only trained on doing that. Right. So it's very good at that. Their editor's not the best. I wish Descript could identify the clips better. Yeah. Then, yeah, I think we could just do all of these short clips in Descript. Though, in Resolve 20 now, they have dynamic captions, and, like, you know, I'm a big fan of, if we can keep it all in the same app, it just speeds everything up. 'Cause, like, it's there, single source of truth. Everyone could be in the Resolve project. You know, an editor's still working on the videos, a social media person could be working on cutting out the short clips. That would be ideal, but it's still a ways out, still not there. I mean, Resolve can make the short clips, but there's nothing in it to help identify what's a good moment. Yeah, it's still very standard. Correct me if I'm wrong, but it's very standard in post-production to go from Premiere to Resolve and back, because Resolve is the stronger color grading tool, yeah, it's known for color, yeah, and Premiere is the stronger editing tool. So you gotta have... It was, to a point, but I mean, more and more people I talk to are switching to Resolve for editing, 'cause the editing features of, yeah, Resolve are on par with Premiere. A hundred percent. Resolve is the closest to what you'd describe as, like, an all-in-one suite. Yeah. Right. It has Fusion. Fusion, Fusion. Yeah. Close enough to Nuke; you can do some compositing, and After Effects. Yeah. Yeah. Fairlight, their audio mixing tool. Yeah. Which is about on par with, like, Audition, probably. I don't remember enough about Pro Tools. I'm sure Pro Tools users would say it is not on par with Pro Tools, but for, you know, everyone else, good enough if you need to do audio mixing. Yeah.
I think if you're running a professional recording studio and mixing, like, a hundred different tracks, mm-hmm, you need Pro Tools. Yeah. But for most things, I think either Adobe Audition or Fairlight is good enough. Yeah. Uh, yeah, I think it's what we've talked about before, like the super app, or the, you know, all-in-one spot. Yeah. Or if they open up their... You know, Resolve is pretty good with, uh, scripting and a console. Correct. Where it can connect to other tools. Yeah. And let tools manipulate and do things in the timeline or with the media pool. So, you know, if they could be that platform and let other third-party tools come in and interact with it. Yeah, I see. Um, that could also open up... I see Premiere and Adobe's API as more robust in that sense. Like, mm, I don't know if it's just me, I just see a lot more plugins that are available for After Effects and Premiere. I do, I see a lot. Yeah. Panels, yeah, built and things. I see that a lot more than Resolve, and I don't know if it's a Resolve limitation or just more of the market. Correct. Was on Premiere, so it made more sense to build for Premiere. Yeah. But more and more companies I talk to, it's like, if they don't already have something for Resolve, it's like, coming soon, it's on our pipeline. 'Cause I think more and more editors are adopting Resolve, or, yeah, using it more and more, uh, not just for finishing or online and coloring, but for actual editing, for sure. I mean, it doesn't make sense for a company like Descript or QuickChart to develop a plugin for Premiere or Resolve, and I'll tell you why. Like, from a business perspective, the revenue is keeping the customers within your walled garden. Mm-hmm. To have them use your cloud storage, to have them use your editing software. You know, just do everything as much as possible. So I don't think they'll ever integrate with a traditional... No. I mean, I always thought it was cool that they had XML.
For Resolve and Premiere and Final Cut. Yeah. In the first place. So that was impressive. Yeah, and they've had that, I think, since the very beginning, and that was one of the reasons we adopted it. I know I'm probably in, like, the 0.1% of users who was like, more Resolve integration, when, like, I'm sure 99% of their users are like, yeah, I just wanna work in here and export. One of the other issues that I had with Descript was they had very limited export settings. Mm-hmm. So that was an issue we had too, where we would give it, maybe we shot in, like, ProRes LT or something that was a decent bit rate, you know, higher than an H.264. Yeah. But their export settings were very minimal. Yes. And would compress it. And they're working with the original files, so it was very confusing that they had such drastic compression. They're outputting for YouTube, so they don't... It doesn't even matter. YouTube can handle a lot. YouTube can take a 200-gigabyte file. You can give it, sure, ProRes 422 HQ. Sure. They will not yell at you. As long as it's under, like, 200 gigabytes, which is a massive file, it'll take it. And, to super nerd out here, I think YouTube has their own compression format, so no matter what you give it, they'll compress it anyway. Right. So even if you give it a compressed file, they will recompress it. So there's the argument that you should just give YouTube the best thing you can export that is under the file size limit, right? Let them do their compression off the best file possible. Correct. Don't compress it yourself beforehand, 'cause they're gonna compress it again anyways. Then again, you are a professional filmmaker who's obsessed with quality versus the average YouTuber, right? The screen recording video of how to use our app is not really... I mean, they should care a bit, because you want people to see the details of the text on your screen recording. Sure.
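The file-size arithmetic behind that advice is simple: size in bytes is bitrate times duration divided by 8. A quick sketch, using rough, typical bitrates (the numbers below are ballpark assumptions, not exact codec specs), shows why even a ProRes 422 HQ master of a one-hour podcast fits comfortably under the roughly 200 GB ceiling mentioned above.

```python
def file_size_gb(bitrate_mbps: float, duration_min: float) -> float:
    """Approximate file size in GB for a given video bitrate and duration."""
    total_bits = bitrate_mbps * 1e6 * duration_min * 60
    return total_bits / 8 / 1e9  # bits -> bytes -> gigabytes

# Ballpark bitrates (assumptions for illustration):
PRORES_HQ_1080P30 = 220  # Mbps, ProRes 422 HQ at 1080p30, approx.
H264_UPLOAD_1080P = 12   # Mbps, a common compressed H.264 upload, approx.

episode_minutes = 60
prores_gb = file_size_gb(PRORES_HQ_1080P30, episode_minutes)  # ~99 GB
h264_gb = file_size_gb(H264_UPLOAD_1080P, episode_minutes)    # ~5.4 GB
```

So a one-hour ProRes HQ master lands around 99 GB, well under the limit, while the pre-compressed H.264 is about 5 GB; since YouTube recompresses either one anyway, uploading the heavier master just gives its encoder a better source.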
So it is, it is important. Right. Descript might have updated their export settings, 'cause I'm going off... The only exporting we do is export XML to send it to Resolve. So they might have improved that a bit. But yeah, that was one of the issues where, like, um, you know, one of the reasons why we wouldn't do this podcast in Descript, even though it probably would be a bit faster, 'cause it's a very good podcast editor, was the final output. You know, we're shooting on pretty good cameras. Mm-hmm. But then it's like, if we're just gonna compress it at the end, then what's the point? Right. Compression rant, uh, compression Descript. No, I... Going back to Descript, I'm excited to see the vibe video editor. Yeah. And, you know, it's cool that they're gonna be one of the first to kind of lead with that in an existing platform. We've seen some other tools try to launch that are, you know, AI video editing, but they're not already an existing platform that already has powerful timeline and video editing tools built in. So I'm curious to see how Descript goes with this. Yeah. And Descript is, uh, helmed by, you know, uh, somebody who founded Groupon. Mm-hmm. Andrew Mason. Mm-hmm. Mm-hmm. So it has really decent leadership, so I'm sure they know what they're doing at a product level and at a business level. So I'm curious to see where this goes. All right. Good point to, uh, wrap it up. Yeah, links for everything we talked about at denoisedpodcast.com. All right, y'all. We'll see you on the next one.