Denoised
When it comes to AI and the film industry, noise is everywhere. We cut through it.
Denoised is your twice-weekly deep dive into the most interesting and relevant topics in media, entertainment, and creative technology.
Hosted by Addy Ghani (Media Industry Analyst) and Joey Daoud (media producer and founder of VP Land), this podcast unpacks the latest trends shaping the industry: from Generative AI and Virtual Production to hardware and software innovations, cloud workflows, filmmaking, TV, and Hollywood industry news.
Each episode delivers a fast-paced, no-BS breakdown of the biggest developments, featuring sharp analysis, under-the-radar insights, and practical takeaways for filmmakers, content creators, and M&E professionals. Whether you're pushing pixels in post, managing a production pipeline, or just trying to keep up with the future of storytelling, Denoised keeps you ahead of the curve.
New episodes every Tuesday and Friday.
Listen in, stay informed, and cut through the noise.
Produced by VP Land. Get the free VP Land newsletter in your inbox to stay on top of the latest news and tools in creative technology: https://ntm.link/l45xWQ
Tilly Norwood Signing, Runway's Lionsgate Issues, AI VFX Wins
Addy and Joey analyze three stories at the intersection of AI and filmmaking: the potential talent agency signing of AI actress Tilly Norwood, the technical challenges in the Runway-Lionsgate partnership, and how VFX professionals are combining AI with traditional compositing tools.
--
The views and opinions expressed in this podcast are the personal views of the hosts and do not necessarily reflect the views or positions of their respective employers or organizations. This show is independently produced by VP Land without the use of any outside company resources, confidential information, or affiliations.
All right, welcome back to Denoised. We're gonna talk about AI actresses getting signed by talent agencies. We're gonna talk about where the Runway-Lionsgate deal might have fallen apart, uh, with some tech issues, and a breakdown of real VFX artists using AI combined with Nuke.
Yes, sir. More AI. No problem.
Let's get into it. No models. Hope models.
Welcome back, Addy.
Welcome back, Joey.
Good to virtually see you again. I'm not sure if I like this or not, but I didn't wanna drive on the 405. For anyone that just listens to this, forgive me if that makes zero sense, 'cause you just hear voices, so you don't notice that there's any difference whether we're in person or, uh, recording virtually on, on Riverside.
I think the audience can tell. I think there's a little bit more lag between our deliveries, our interactions, and also we're not drinking coffee together. I know, I just have my paper cup of coffee right now, so. Oh, there you go.
Cheers.
Cheers.
Yeah. All right, so our first story. People are kind of flipping out over an AI actress, Tilly Norwood, potentially getting signed by a talent agency. If you're a long-time listener of Denoised, AI actresses and AI actors being signed by talent agencies is
actually not really news, 'cause we've talked about this for a while and made a bunch of comparisons to this before. But yeah, let's talk about this story, 'cause also I've never heard of this AI actress slash influencer Tilly Norwood.
And why are the names always so gimmicky?
I dunno. Well, she is British, uh, developed by a British company, so, you know, I think it's gotta sound pretty British.
Yeah, it just sounds like a made-up name to me. I mean, maybe that's the point.
It is a made-up name.
Tilly Norwood. Yeah.
All right, so this came out of a panel; Deadline had some AI panel or something in, uh, Zurich. And the developer behind the company that developed Tilly Norwood, uh, Eline Van der Velden, revealed that they were in talks to potentially sign the AI actress with a talent agency.
And also, mm-hmm, I noticed in the photo of this panel is, uh, Verena, who's running Luma AI's, uh, Hollywood incubator studio, which we are still figuring out what's gonna pop outta that.
Yeah, I see some snippets of that lab on LinkedIn, Jon Finger posts here and there. I see Jon Finger all the time.
Seems like there's a bunch of people in there working away.
Yeah.
Incubator. Yeah.
So that'd be cool. Okay. But, uh, back to Tilly Norwood. So it is a little bit confusing, 'cause, uh, the article mentions that Van der Velden is behind the company that developed her, but it named one company as Xicoia and then another company as Particle6.
So I'm not quite sure. I think Particle6 is the company, the production studio, yeah, the AI studio behind it.
I went on the Particle6 website, and it looks like they check all the relevant and modern AI boxes. They have the AI avatars, they have AI VFX, they have AI drama reconstruction. So we've talked about all of these use cases for AI.
Looks like a two-person team from what I could tell. So it's, uh, Van der Velden,
mm-hmm,
as well as another gentleman, I can't get his name from the website, but, um, probably two head people, and then a team of AI developers, artists, people.
And it looks like they're hiring four people.
Yeah.
Uh, they're hiring an AI creative producer, an AI video motion magician, an AI
production intern, as well as a researcher-developer type.
Yeah. I mean, everything on this list is kind of all of the buckets that AI fits in right now: Cinematic, Nature, Montage Showcase, VFX, drama reconstruction, which is what Gennie AI has been doing as their specialty, commercials.
Yeah, I mean, it's the entire gamut. The influencer thing, let's just go to her Instagram here. And I know I'm saying 'her' and it's an AI, all right. The Instagram's got about 16,000 followers, uh, which feels low for a product that would sign with a talent agency.
Mm-hmm. Potentially. I'm assuming the talent agency deals would be brand deals, brand promotions, stuff like that. You know, I think probably calling it an AI actress is the triggering part; if they'd called it an AI influencer, that might be less triggering.
Yeah, exactly. Uh, you're encroaching on SAG territory, you know, the precious quote unquote celebrity territory with somebody that's fully synthetic.
And of course this is gonna cause some hair to rise on people's arms and shoulders. What are your thoughts on this?
We've talked about this before, and it feels like it's in the same bucket as CG influencers, like CodeMiko, uh, not CodeMiko. That's a different, that's a, um,
Lil' Miquela. Lil' Miquela.
CodeMiko is also a synthetic influencer. Yeah. But it is powered by a real person. It is powered
by a real person, yeah. CodeMiko is a CG avatar that is powered by an actual human being who plays the character. Lil' Miquela is a very popular CG, non-existent character, but it's built by a team of people in Unreal.
Right. In Unreal and traditional computer animation. Yeah. I worked on a couple of animation episodes
back in the day. It was a fascinating look at how a digital character was run, and I'm guessing, fast-forwarding to today, it's probably the same thing. Um, there was an entire creative and technical team behind this one synthetic character.
The team was like 10 plus people. And, uh, it, it's, it was really interesting because they, they obviously have an idea of what she should be, what her life should be. Um, mm-hmm. Lil' Miquela was, uh, bisexual and had, uh, both girlfriend and boyfriend problems and all of these things that were fabricated to make her really interesting and really appealing to a broad demographic.
And then on top of that, there were real products placed within a lot of the campaigning. I think she had a Samsung campaign for a while that paid real money from Samsung.
Yeah, yeah. 'Cause people see the posts.
And I mean, that's basically what the companies were paying for. It's like, do people see the posts? Does it get engagement? Cool. Like, yeah, product placement.
Yeah. Lil' Miquela was done, I would say, back in 2019, 2020, right before the pandemic, at the highest level of Unreal Engine quality that was achievable at the time. Some of the best Unreal artists at the time were on that project.
Also, it didn't look photorealistic. Right. Like, you knew Lil' Miquela was a... yeah, that was part of it. You knew what you were getting into,
and it was cool. Yeah. So here's Lil' Miquela's feed, uh, 2.3 million followers, right? I mean, I will say this does actually look more realistic, but you could tell it has that, uh, MetaHuman feel
for sure.
Like, the character is in the uncanny valley. Mm-hmm. But what they used to do was, um, they would composite her into real backgrounds and real photos of real people. Like in this case, they would take a photo of this real guy holding nothing, right, and then they would just composite Lil' Miquela onto his back.
Mm-hmm.
But you're aware. Yeah. You know, it's not like they're trying to pass her off as a real person, you know? And look, also, they're not trying to pass, uh, Tilly Norwood off as a real person; it says that it is an AI creation. But also, instantly, off the bat, this looks photorealistic. This looks like a real person.
Not to me. No.
Uh, because, you know, right away, if I just showed you a stack of images of just, like, young women posing with their coffee,
uh, the one dead giveaway with, uh, synthetic avatars is they're too symmetric, like the face. Um, so yeah, like 99% of human beings on the planet are asymmetric.
So, like, our right eye is a little higher than our left eye, and all those subtle cues give us away as, you know, real. Synthetic generations just tend to be too perfect. In this case, you could see here, like, yeah. But look, I
mean, I'm saying they didn't make this as a character, as like a CG character that they're using AI to power.
They're making this as photorealistic as is possible with the technology today.
It's good enough? Like, I'm talking
about the intent. I'm not talking about how it looks, because you can identify this stuff. I'm talking about the intent.
Sure. The
intent is to make it photoreal, I think. Does it look photorealistic? It is indistinguishable from reality.
Yeah. Like, if you're scrolling and you didn't stop to look and read the stuff, you would think this is a girl posing with her coffee, with a generic
cup of
Starbucks. It would most likely not stand out to 99% of people.
But something was weird about this, right? We're talking about the AI influencer stuff and the actor stuff. What are you thinking about this now? Okay, and to flip this, this one is definitely, obviously synthetic looking.
Yeah. So here's my opinion on this. I think it's very gimmicky, and it may have a short-term win with a couple of, maybe,
big product campaigns, or perhaps, mm-hmm, even a placement in an AI movie. But in the long run, this is not something that will stick, and we'll forget about it a year or two from now. And I think what will be more interesting is taking a real person and turning that real person into many different likenesses.
Uh, for example, if you take Joey, I would really want to see a 15-year-old Joey and a 65-year-old Joey doing two completely different things, or...
I don't wanna see that.
Just taking an example. I think it's far more interesting to attach a synthetic avatar to a real person and tie it back to a real person than to have fully, completely synthetic avatar creation.
Yeah. If it's part of their thing. I mean, I'm also thinking of, um, you know, they've got those hologram boxes. I forgot what the product, what the company name is, but you know, the big box that creates like a hologram lifelike version of like a real actor and then it has some AI programming in it. So you, you could talk and ask questions to them and they sort of respond based on the context of your question, if they have enough material to generate that.
So, you know, it sounds like what you're saying is sort of in that realm of a real person, and it's like, oh, maybe they can be in more places than one, or replicate themselves to do more things than they could physically attend or be at, because of scheduling or just feasibility issues. Like, you can't have a 15-year-old version of an actor that is 40.
I mean, if you're looking at everything they're showing on the Instagram page here, everything here seems so uninteresting and bland and generic. There is nothing in here that makes me want to go back and watch this again. Even the way she looks, she looks very generic. You know, you couldn't pick her out of a lineup of a bunch of AI avatars.
Yeah. I mean, also, what is her backstory? What is the story? I'm not saying you couldn't pull this off. We obviously, as humans, fall in love with fictional characters all the time, getting engrossed in their stories and rooting for them. So I'm not saying it's not possible to create a synthetic character: if you develop a whole story around them, or some background, that's another version of storytelling, and you're just doing it through social media with synthetic characters versus making a short film or a movie or a TV series.
I don't see that with this. I mean, maybe there's more there that I haven't dug into, but that could be something that could potentially work in the synthetic route: story and world-building with synthetic characters.
Yeah. I, I think that's where you and I agree to disagree. I I, I don't see it that way.
Even if you're generating a fully synthetic film, like what Kavan the Kid did,
mm-hmm,
with Echo Hunter, in that you have, uh, Brock, I forget his last name, but you have a real actor who's digital, right, driving the performance. Yeah, and I think that is really interesting, because you instantly attach that notion of Brock and who that actor is to the performance that's being displayed on screen, and you just have this natural connection to it, versus looking at fully synthetic backgrounds, fully synthetic actors, everything synthetic.
You're just like, what am I even looking at?
Yeah. I'm not saying I love it, but I'm saying I think there's an audience for it. If you have the story and you have the connection and you build up the connection to that character through old school traditional storytelling techniques, and not just like what you're saying with this hype, with this reel, with a bunch of just random shots of this AI character in a bunch of random scenes where it's just like, what's the point of this?
Like, I don't have any fictional connection to this character, who's just doing a genre, yeah, running around a bunch. One thing I did think about, with the sort of flash-in-the-pan aspect you're talking about: part of this, the signing and building up these AI actors, uh, or talent, part of it feels like the 2025 version of Bored Apes, where that was all hype, flash in the pan, you know, tons of money moving around.
And then, you know, what are the Bored Apes that people spent a hundred, couple hundred thousand dollars on worth now? They're worth a couple bucks, maybe. Is this the sort of AI version of Bored Apes: build up the character, build it up, try to get some money in, and then...
Absolutely. I think there's so much investor money out there, and, mm-hmm,
if you even built a fraction of an audience as big as a real influencer's, I mean, the investors are just gonna be all over it. What I'll say about your comment earlier: you know, if you can create a real connection to a synthetic avatar, that I think would be a really interesting experiment.
It is yet to be seen. And maybe I'll be proven wrong and you'll be proven right, but if somebody can figure out how to make an authentic, genuine, you know, mass-level connection to somebody fully synthetic, with an audience that's massive, then we're really talking about a whole new genre of entertainment and a new window into this world.
Yeah, and speaking of genre, that was their response, 'cause they did post a response to, you know, the initial story coming out and a lot of anger and stuff over the creation of, what they're calling it now, an AI character. In this posted response, uh, quote: "She's not a replacement for a human being, but a creative work, a piece of art.
I see AI not as a replacement for people, but as a new tool, a new paintbrush. Just as animation, puppetry, or CGI opened fresh possibilities without taking away from live acting, AI offers another way to imagine and build stories." So, yeah, they're sort of saying it's its own genre. I think that's what I was saying before: I think calling it an AI actor was a bad label, and I think AI character, synthetic character, or AI influencer is more fitting with what this is trying to do.
Yeah, I mean, I'm not sure if I see the point that this is not replacing a person. To me, it feels like it is. If you take an up-and-coming actress from this town, there's a lot of them here in LA, and you put 'em in front of a camera, in front of, you know, whatever set environment you're building synthetically, or you
sort of take the real person's likeness and then cast this entire AI avatar on this real person, I think that'll go much further along in believability and adoption.
Yeah, I mean, to the point of saying, oh, is it replacing a real person? Look, it's a social media platform.
It's a level playing field. It's more just building the audience. Everyone's in that same game if you're trying to build an audience. Whether you're trying to build an audience around your real self or around a character you create, it's all on the same playing field.
So I don't really see this taking away a job from a real person. They're building out a brand around something, and, you know, people build brands around themselves, and it's just more like, who is more successful at building the brand. I can see your point.
What happens when there's enough of Tilly Nor... like, when there are a hundred Tilly Norwoods?
That's
why I think it's just gonna get so diluted, and that's where I think the Bored Apes comparison comes in. Like, yeah, sure, they're not the first to create an AI influencer character, and it's just getting easier and easier to do that, and it's just gonna be so diluted that
either you're gonna have to go back to the basics and really build a brand and a story and, you know, something that attaches you to whatever this character is besides just generating a pretty face, or, you know, it's just gonna get buried with everything else, to what we talked about a few weeks ago with other AI characters.
That's why I still think, if you are a, you know, real person, the biggest buffer you're gonna have in this sort of AI future is building the brand and being yourself, building the brand and authenticity around yourself. Because as big as AI or CG influencers get, they can never do a real in-person event.
You can never actually meet them in person. Sure, you can make all the chatbots and virtual experiences you want, but they will just never exist. Whereas, you know, if you're a real person, you're building up that authenticity. We are real people. People can see us; they see us at events.
I feel like that's the buffer in the future, to rise up above the AI noise that's gonna be super easy for anyone to generate.
Yeah, all of, all of the above that you said. In addition to that, a real person has genuine reactions to current events or somebody else's content, or somebody else's opinion, and an AI can only mimic that reaction, not really have an authentic one.
A real person can have some emotional ups and downs, some down days, and those things create, for lack of a better term, art. And, mm-hmm, as human beings, those are the things that we resonate with and connect with in other, you know, creators. So, again, like 90% of what makes a social personality quote-unquote real is missing from
what I'm seeing here.
Yeah, I get what you're saying with that.
All right. I think enough Tilly for now, right? You got any more Tilly? Anything else on Tilly?
Goodbye, Tilly Norwood. Hope to never see you again. Okay. Okay. Maybe
she's gonna, she's gonna haunt
your dreams now.
Goodbye, Tilly Norwood. I will see another hundred of you in the next episode.
I think, uh, every time you generate something now, it's just gonna be Tilly Norwood appearing. It's gonna, it's gonna haunt your latent space. All right, next up. This one came out like a week ago, but we didn't really get a chance to talk about it and we've, we've been texting thoughts about this a lot.
Yes. We both have thoughts about this. Yeah. All right. So TheWrap had a big story on the Runway-Lionsgate deal, which was announced last year. Lionsgate was sort of the first studio to have announced a partnership, you know, with Runway, big AI company. And the gist of it was Runway was gonna train on Lionsgate's
extensive film library and make custom models. They were gonna use this for, I don't know, it sounded like every stage of the production process, from storyboards to pre-vis to possibly generating trailers, all sorts of stuff. And we haven't seen anything come out of that, right? And, uh, TheWrap had this
kind of deep dive into all of the tech issues that have happened. Yeah.
So everything that I'm gonna say about this is gonna be purely speculative. Obviously, we don't know what's happening behind closed doors at Lionsgate, right,
aside from whatever was reported in this article that we'll talk about. Yeah. And also, we don't know what products or what R&D projects Runway is throwing at the effort internally. Mm-hmm. And, um, my guess is, you know, what's probably happening is that Lionsgate has, you know, very precious IP, uh, perhaps John Wick as an example.
So if you take the John Wick world, and that's a beautifully created world that has a very specific style to it.
Mm-hmm.
Characters look a certain way, and there's like this entire film noir look to it. I guess what they were trying to do, and what they're trying to do internally is maybe recreate some of that, maybe do some shot replacement for the next John Wick installment or the TV show or what have you.
I think that's too detailed for what they were trying to do.
What do you think they were trying to do? Well, here, okay, let's cover the article first, then we'll jump into that. All right, sounds good. Uh, from the article: over the last 12 months, the deal has encountered unforeseen complications, from the limited capabilities that come from using just Runway's AI model, to copyright concerns over Lionsgate's own library and the potential ancillary rights of actors. The reality is that utilizing just a single custom model powered by the limited Lionsgate catalog isn't enough to create those kinds of large-scale projects. It's not that there was anything wrong with Runway's model, but the dataset wouldn't be sufficient for the ambitious projects they were shooting for.
Uh-huh. Lionsgate vice chairman Michael Burns, last month, bragged to New York Magazine's Vulture that he could use AI to remake one of its action franchises, an allusion to John Wick, into a PG-13 anime: "Three hours later, I'll have the movie." Right. Okay. So that was one of the targets they were going for.
That's not gonna happen. Take an existing IP we have and then re-spin it into, uh, Spider-Man: Into the Spider-Verse or something, you know, a high-art animation masterpiece. No. Okay. Yeah, obviously that didn't happen. The first thing when this deal was announced, the thing that always stuck out to me was, I can't say Lionsgate has,
um, like, a visual language. I can't say, oh, there's the Lionsgate look. Yeah, Lionsgate produces movies; there's a whole bunch of movies they've produced, and they all look different. So when it was like, oh, we'll train it on their whole catalog of stuff, it was like, aside from John Wick, like you're saying, what other franchise is in there?
There's no Lionsgate look. It's not like, yeah, Studio Ghibli. It's not like
Lionsgate is not Pixar. Right.
Yeah. Even those all have different kinds of looks and stuff. Yeah. And so just training on the one movie would be like, yeah, that's not a lot of data to train on,
and the variance.
So maybe
if they can source all of the material, maybe you'd have more stuff, uh, not just the final product of the film, but every element, the VFX passes, all of the raw footage, the outtakes, and all of that stuff. Maybe you could get more. I'd be curious if the plan was to train it on all of that data too.
I remember a long time ago when we talked about that, um, AI training company that was brokering deals with, uh, YouTube creators. But the thing they were interested in was the YouTube creators' entire raw media library, not just their outputs but all of their outtakes, 'cause that data was extremely useful for the AI companies to train on.
Right.
So, yeah, I'm curious about that. A couple of things going on here. From the Lionsgate executive side, I think the expectations are just way too sky-high. You know, being able to generate a PG-13 version of John Wick just from ingesting it into a Runway engine, that seems ridiculous. We're not even close to that.
I mean, that seems like the same
gist as all of these other executives a year or two ago, when it was like, AI's gonna revolutionize our entire workflow, right? We're just gonna switch everything to AI. And then now, a year later, it's like, uh, yeah, it couldn't do as much as we thought it could.
So it's really interesting that the entire Lionsgate library is not enough for quote-unquote training the model.
Mm-hmm.
What I think is happening here, and I don't have information on what the system architecture is underneath the Runway video generation model, but on the image generation side, you know, the foundational models take billions and billions of images to train, and once one is trained, you can then attach, uh, a LoRA or something trained on
the order of, you know, 20 to 30, maybe 50 images, to then pull that foundation model to generate in a direction that you want. And video generation's the same. Um, for, like, Wan 2.1, Wan 2.2, you could build custom-tuned LoRAs, uh, using, I don't know, 50, a hundred images, and it'll do more or less what you trained the images on, right?
So if you take screenshots from the John Wick films, and if you have enough, let's say a hundred images or so, you would more or less be able to generate the environments and perhaps even the actors' costumes in that likeness. Um, I'm not gonna say it's gonna be, you know, usable or production-ready.
Mm-hmm.
But it'll certainly get you there. So perhaps there is a limitation on the customization, on how much the Runway model can be fine-tuned, and maybe there's a lot of pain there to get the correct output that you want. And the second thing, which you nailed, is that the Lionsgate catalog is so varied.
They've made so many types of movies, so many different genres, so many different types of looks, that training on the entire Lionsgate library is not really gonna get you anything specific.
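To make that concrete, here's a minimal sketch of the kind of LoRA fine-tune being described, assuming the open-source diffusers/peft stack; Runway's internal tooling is unknown, and the model ID, rank, and image counts here are just illustrative:

```python
# Minimal sketch of a LoRA fine-tune on a pretrained image model:
# freeze the billion-parameter foundation model and train only a small
# low-rank adapter on a few dozen stills. Hypothetical setup using the
# open-source diffusers/peft stack, not Runway's actual pipeline.
import torch
from diffusers import StableDiffusionPipeline
from peft import LoraConfig, get_peft_model

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)

# Attach low-rank adapters to the UNet's attention projections;
# only these few million parameters receive gradients.
lora_cfg = LoraConfig(
    r=16, lora_alpha=16,
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],
)
unet = get_peft_model(pipe.unet, lora_cfg)
unet.print_trainable_parameters()

# Training loop elided: ~50-100 captioned frame grabs, the standard
# noise-prediction MSE objective, a few thousand steps on one GPU.
```

The point of the sketch is the scale mismatch the hosts describe: the base model needed billions of images, but steering it toward a look takes only dozens, which is why a single studio catalog adds little to a foundation model while working fine as adapter data.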
Yeah, I mean, sure, it's good high-quality data for a general model, but it's not like Pixar.
It's not like Studio Ghibli. There's not a Lionsgate look. There's no look.
Yeah. I think the only advantage is perhaps Lionsgate has access to, you know, the raw files from the camera, mm-hmm, or the EXR files from VFX. And if you combine those things, you can train an upscaler model to have EXR output, 16-bit output.
It's kind of what Luma has done with Ray3, right? So there are advantages to having really high-quality data. But as far as custom-tuning a video generation model to output a certain style of movie: like, if I had James Cameron's Titanic, right, that's a three-hour movie, and if I was able to train on that, I'm pretty sure I could get a lot of the details accurate when I generate
Titanic shots. I think, um,
also, you know, the example you're giving, where, yeah, you could train a LoRA with, like, Wan 2.2: the issue, you know, that they called out was, yeah, it's also limiting to only be able to use Runway. Kind of anyone that's in the AI space, you're using every tool available, because all these models do different tasks better than others, uh, or make things better than others.
And being tied down to Runway, even with their support to train a custom model, is still limiting to what you're able to do when you don't have the full tool set available to you.
Totally. And, uh, I think the approach here is coming from two different angles, right?
Like, if you look at what Netflix did with The Eternaut, mm-hmm, where they did a shot insertion: something that would've been really heavy and complicated as VFX, and they just used AI for that shot and got it done right. But it wasn't at a whole-TV-show level or a movie level. They weren't trying to generate the whole thing with AI.
They were just, you know, nitpicking, taking a couple of things here and replacing them with AI. If Lionsgate takes that approach, it's a much easier ramp-up, a much easier adoption curve, I think, for getting filmmakers to start to use AI.
Yeah, I think the one-click vision of, we take our movie and then we can turn it into anime, or turn it into some
episodic TV show in a foreign language in, like, three hours, is pie in the sky, at least for, you know, a while. At least for
now. Yeah. And I think it will be for a long time, because, uh, you're not only having to generate a very convincing storyline that resonates back to the original world, but then there's so much gap-filling you have to do in building this, uh, environment, this world, this style, in detail, you know?
Put the characters in there, and if they're synthetic avatars, then you have a whole set of problems trying to make them look realistic and sound realistic, and you just open up a can of worms that you don't know how to fix.
Mm-hmm.
Yeah.
But yeah, I think what you're saying, and we'll actually show this in our next story: using these tools for specific parts of the process, uh, you know, whether that's pre-vis, whether that's storyboarding, whether that's helping with VFX shots, rather than this
large, all-encompassing approach where you're like, yeah, we're just gonna make the whole movie with AI. Like, that's not gonna work. But using AI to help speed up specific processes in the traditional filmmaking workflow is where there'll be more instant gains, uh, absolutely, to be had.
Absolutely. And
just to, uh, go back to Runway's capabilities, I mean, they're probably the leading provider of AI-powered VFX solutions. I can't think of anybody better in this space. You know, if you take a product like Aleph, who's really doing the stuff that Aleph can do? Maybe it's not fully movie-quality ready, but it certainly is very, very powerful and is, mm-hmm,
kind of paving the way for VFX that's AI-powered.
Yeah. But I mean, look, also, it is a black box, and you're kind of just like, yeah, tell it what to do and it sort of does the thing, but you don't really have that control. I mean, I'm curious what the deal looks like when you have, you know, enterprise support like Lionsgate has, and what
additional features or controls they unlock. But I think the only other thing I've seen that does sort of what Aleph does, uh, you know, is the Muppets workflow that we covered a few weeks ago, where, yep, he built that Wan 2.1 ComfyUI workflow where he takes an existing video, can mask out the person, keep the person completely consistent, and then change everything else around them.
But that is a complete ComfyUI workflow using open-source tools. The bigger thing is, because it's built in ComfyUI, you can see every step of the process of how it's doing it. You can control every step of the process, you can change it, you can fine-tune it. You have that highest-level,
node-based pipeline that professional studios and VFX artists are used to, that level of control. Dude, Lionsgate
should have just hired Mik.
I know, right? Yeah.
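As a rough illustration of that masking idea, here's a minimal sketch of just the core compositing step, assuming per-frame person mattes already exist from some segmentation pass (this is not the actual ComfyUI workflow, and all names are hypothetical):

```python
# Core of the "mask the person, regenerate everything else" trick:
# keep the original actor's pixels wherever the matte is white and
# take AI-generated pixels everywhere else, frame by frame.
import numpy as np

def composite_frame(original: np.ndarray,    # HxWx3 uint8 source frame
                    generated: np.ndarray,   # HxWx3 uint8 AI-replaced surroundings
                    person_matte: np.ndarray # HxW uint8, 255 = person
                    ) -> np.ndarray:
    alpha = person_matte.astype(np.float32)[..., None] / 255.0
    # The person stays pixel-identical to the source; the rest is swapped.
    out = (original.astype(np.float32) * alpha
           + generated.astype(np.float32) * (1.0 - alpha))
    return out.astype(np.uint8)
```

A soft-edged, feathered matte rather than a hard 0/255 mask is what keeps the seam between the untouched person and the regenerated surroundings invisible.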
Right. I mean, maybe it's not the technology, maybe it's the people. Perhaps it's a different way to think about it. Or, yes,
Maybe it's the organization too.
'Cause, you know, we don't know that much, but Lionsgate is a studio office. I doubt there are any actual VFX people on staff at Lionsgate. If they make a movie like John Wick, they're going to another VFX shop that's gonna do the VFX work for them, or probably a couple of VFX shops.
Uh, so every movie is a completely different assemblage of teams and people and companies.
Yeah.
Yeah. Like, Lionsgate is movie
making. It has always been a silo.
Mm-hmm. Yeah. I mean, every movie is like a little mini startup company where a bunch of pieces come together, a bunch of different contracts and stuff. Which, also, the contract thing was another issue, dissecting the rights of every single movie, 'cause every contract's unique, every movie's unique.
And the other roadblock was just figuring out, do they have the rights to ingest some of these movies to train, uh, the AI models. Um, and that's gonna be another issue as well with training on existing IP.
Also, in addition to what you said, I'll even take it a step further back.
With the exception of perhaps Disney and Netflix, which are fully vertically integrated, right, like they have an execution arm, they have a technology arm, and they can go all the way down to making the thing and then distributing it; with the exception of those two studios, I would say every studio, like a Lionsgate, like an A24, is more or less a financial institution.
And what I mean by that is you've just got a bunch of executives and a bunch of people that make big money decisions, right? Mm-hmm. They'll give the John Wick filmmaker a hundred million dollars to go make John Wick 4, mm-hmm, or whatever, and then it's up to that filmmaker to figure everything out. Now, those
folks that are making these big, high-level decisions with money are not really gonna understand the nuances of what is and is not possible with AI. So I think there is a fundamental difference between those two worlds.
Yeah, I mean, maybe Lionsgate has a tech arm that offers support, you know? I think there's some similar stuff even with the vertically integrated companies, like a Netflix, where,
even if they produce a movie with an outside production company, they sort of have a tech division where it's like, hey, we can provide support and help you with problems and stuff, 'cause we've, you know, seen this happening across a bunch of different films that we're producing, and so we can help you however you need help.
But, um, yeah, I mean, Amazon must have something like that as well, right?
Amazon has the technologies in-house and stuff. Yeah.
Yeah. So, I don't know if they were trying to do something similar with that, like, here's this sort of wide-ranging support for all the projects that are happening.
But so, yeah, like you're saying, Lionsgate is an office building that is producing movies, getting the financing and funding and distributing them and stuff. But, uh, yeah, every movie is a completely different mesh of teams and people and other companies. So, exactly, I'm just kind of curious what the initial vision was and how this is gonna
play out in the future. Like, I don't see why this wouldn't work for pre-vis or storyboarding or concept designs or poster mockups. You know, I think the initial, there's...
There's a lot of opportunity for a generative AI company to build real value and real efficiency within, uh, a moviemaking pipeline.
Yeah, an existing pipeline. Something that is there, and how you can fit into that,
uh, and save time and money.
Well, I mean, best of luck to Runway. I'm a big fan, and I hope that, you know, they have a few wins at Lionsgate, and that's all you really need to get the steam rolling. Yeah.
Yeah.
I'm curious. I think it probably just hit roadblocks from the initial vision, but I don't think this is a fail. I think there will still be good things that come out of this.
I think it's, it's a big misunderstanding, if anything.
Yeah, I think this is the same. This is the, uh,
film industry version of the bucket of the CEO that goes, you know, we're gonna replace our entire tech support staff with chatbots 'cause they can handle everything, and then realizing, hmm, they're not quite there yet. Yeah, it's the Duolingo thing. Yeah. Exactly. The Duolingo of M&E.
Yes. All right. Last story, and this one kind of goes to what we've just been talking about: blending traditional VFX pipelines
with AI. So we saw this X post by Freddy Chávez Olmos. I looked him up on IMDb: a very accomplished VFX supervisor, I think based out of Montreal, if I'm not mistaken, who's worked on movies you'll recognize, such as The Meg, Annihilation, Deadpool, Chappie, American Sniper, Sin City: A Dame to Kill For, and The Mandalorian.
So there you go. This is somebody who is deep in the world of tier-one filmmaking, understands the quality that's required, and is also really familiar with, sounds like, all of the AI tools we have on hand today. So what happens when you give somebody with this level of skill and this eye for quality
the tools that we have? Take a look.
So, uh, basically what he did was he showed comparisons with some original footage and, uh, some footage that already had traditional VFX, like some older shots from Twilight, and then redid them using modern AI tools. His tech stack is Wan 2.2 Animate, uh, in combination with Nano Banana, Beeble, and Nuke,
for VFX enhancements, replacements, and full-head de-aging. And we could see some of these comparisons, side-by-side tests of, uh, some original footage, like this Nic Cage concept of Superman. Yep. This was the Twilight shot, where it was just the actor and then replaced with the wolf. Yeah, there's three
examples, and each of these three examples is what I would call a very classic VFX problem to solve, right?
Mm-hmm. So the first one is relighting and, uh, sort of a head rotation while maintaining likeness, uh, which is the Nicolas Cage Superman example that we're showing you here. So it looks like he did a second pass using, uh, Beeble to relight Nicolas Cage, and perhaps even generated him from scratch, and then composited it all back together in Nuke.
And it's a really tricky shot, because his head and his body are rotating, so you can't just have a plastered-on face; you actually have to have spatial awareness of that move. And then the second shot, this is from, uh, one of the Twilights, where, you know, um, you essentially have a stand-in that gets composited out and then replaced with a CG animal, a wolf, and it looks like Wan 2.2 Animate was used here to create the wolf.
And then all of that was composited in Nuke.
Yeah. So what do you think the workflow would be? You think it would be track the body with Wan 2.2 Animate?
So I think he used Nano Banana to generate the wolf. So you have your reference wolf image, and then you give that to Wan 2.2 Animate, and you input the motion of the, uh, mocap actor here that you see with the gray suit on.
That's your input to Wan 2.2, and then your output is gonna be that wolf in that same motion cycle. And in Nuke you're probably, uh, feathering and blending a lot; like, that fur looks really difficult to blend and composite in. So, uh, obviously this guy's really good at Nuke, so he's doing a lot of heavy lifting here that we're not seeing.
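Just to pin down the shape of that workflow, here's a purely hypothetical sketch of the input/output contract being described; Wan 2.2 Animate is actually driven through a ComfyUI graph, and none of these names are its real API:

```python
# Hypothetical I/O contract for a reference-plus-motion generation job:
# one still image supplies identity/look, one video supplies motion.
from dataclasses import dataclass

@dataclass
class AnimateJob:
    reference_image: str  # e.g. the Nano Banana wolf render (what it looks like)
    driving_video: str    # e.g. the gray-suited mocap actor (how it moves)
    output_path: str      # generated element, handed off to Nuke for the comp

def run_animate(job: AnimateJob) -> str:
    """Placeholder: the model transfers the driving video's motion onto
    the reference subject and writes the generated clip."""
    raise NotImplementedError("stand-in for the actual ComfyUI graph")
```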
And you think he would've
used Wan 2.2 just to create the wolf on either a green background or on something that he could composite out and then layer on top of the live-action footage? Yeah, you would have to
maintain, just like in any other composite, uh, the same camera lens and the same camera perspective for both plates.
So I'm guessing, in, um, Nano Banana, perhaps he fed this entire frame in there just to decipher what the cinematography language is, and then replicated that in Wan 2.2, so when you blend those two things together, it looks like it's being shot from the same camera.
Yeah.
Also, I mean, just with her hand touching the fur, that also looks really good too, and blends right in.
Yeah, exactly. It's really tricky. But the fact that no computer graphics was used per se; like, this wolf is not a digital wolf that was built in Blender with the fur done in XGen. It's none of that. It's completely synthetic, in that it was just generated in Wan 2.2.
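And on the Nuke side, a rough sketch of what a composite like this could look like scripted with Nuke's Python API; the node graph and file paths are made up, and the real comp is obviously far more involved:

```python
# Rough Nuke-Python sketch of the wolf comp: read the plate and the
# AI-generated element, soften the element's matte so the fur edge
# feathers in, grade it toward the plate, and merge over.
import nuke

plate = nuke.nodes.Read(file="plates/twilight_bg.####.exr")  # background plate
wolf = nuke.nodes.Read(file="elements/wan_wolf.####.exr")    # Wan 2.2 Animate output

soften = nuke.nodes.Blur(channels="alpha", size=3)           # feather the matte edge
soften.setInput(0, wolf)

grade = nuke.nodes.Grade()                                   # nudge toward plate color
grade.setInput(0, soften)

comp = nuke.nodes.Merge2(operation="over")
comp.setInput(0, plate)                                      # B input: background
comp.setInput(1, grade)                                      # A input: wolf element

out = nuke.nodes.Write(file="comp/wolf_comp.####.exr")
out.setInput(0, comp)
```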
Yeah. Oh, here, actually, I just noticed in his behind-the-scenes thing, he has where he completely erased the actor and then, yeah, brought it in. And then, yeah, this, uh,
uh, Paul McCartney de-aging. The Paul McCartney de-aging is nuts. It looks so good. I mean, looking at this stuff, 10 years ago our minds would've been blown.
And now it's just a post on X, you know?
Yeah, yeah. I remember when it was such a big deal when they did it in, like, Forrest Gump and inserted him into archival footage. Exactly. And Tom Hanks. Exactly. And, uh, yeah, now it's like, oh, I'll just run it on my computer.
Well, people still talk about, like, The Irishman, where De Niro and Pacino are de-aged so much.
Yeah. And they had to shoot that with, like, a three-camera rig, this massive thing. Exactly. They had an infrared camera setup and a traditional camera setup, and it was a whole thing. And now you're able to just do it in Comfy, more or less. What do you
think, what do you think the workflow was? You think, like, giving old images of
Paul McCartney to Nano Banana to replace the face, to have that driving image, and then animating? So kind of similar to what I think you were talking about before: animating it with Wan and compositing with Nuke.
Yeah, I mean, he picked a good example, because Paul McCartney's hands and his torso when he's older match those of when he's younger.
Look at the hands, the hands are de-aged too.
The hands look a little bit de-aged as well. But yeah, it's a Wan 2.2 input-output example, right? You give it the old Paul McCartney motions as an input, and then with Nano Banana you give it a young McCartney as the reference, and then the output is this. And I think the
magic here is the lighting matches really well. He's blending in with the couch, and the shadows are cast correctly. Um, his face is lacking a little bit of detail, uh, if you look at the old,
I wonder if that's a resolution issue too, 'cause I'm wondering what he's outputting Wan 2.2 at, 'cause I think it's kind of in that 720 range to have some usability.
I mean, I think you could crank it up to 1080, but it might take forever and you would need some beefy machines. Um, yeah. So I'm curious. Also, remember, we're looking at this on X, which is already compressed, so I'm curious what the, uh, outputs were being done at.
My guess is he grabbed the, uh, video from YouTube, perhaps, so it's not gonna be high quality to begin with. Yeah. So, you know, input is output, and, uh, you're not gonna have a higher-quality output per se. But if the average person looks at this, they'll think it's real, right? We're nitpicking here and trying to find flaws, but really, this is well done, and this is acceptable. No, this looks great.
Uh, for a TikTok short, for sure.
Yeah. And I think this goes, you know, directly to what we were just talking about in the last story, where this is the future: taking traditional VFX pipelines and using AI to help, uh, enhance them, speed them up, make them better,
and do stuff like this where, you know, one person just kinda messing around in their free time can make something look this good.
Yeah. And I don't want to take any of the attention away from, I think, the bread and butter of what is happening here, which is Nuke. Uh, so much heavy lifting is being done on the compositing side, whether that's, you know, the
edging, the feathering, the blending, the highlight control, the shadow control, uh, skin-tone matching, and all the stuff that he's probably doing under the hood. I don't even know; I'm not a Nuke artist. He clearly is. And, uh, with AI tools and traditional VFX tools like Nuke, those two things combined, I think, give way to a completely new type of workflow,
one that is potentially far less expensive than traditional VFX.
All right.
Good place to wrap it up.
Uh, let us know what you thought about this episode in the comments, and, uh, links for everything we talked about are, as usual, in the show notes or over at denoisedpodcast.com.
and just going back to our last video on YouTube. Thank you all for the comments. Uh, Sarah Naura added again, read 41 0 9, as well as two new commenters, ob dfu and the Creation Studio.
We thank you.
All right, thanks everyone. We'll catch you in the next episode.