Denoised

SwitchX, Seedance 2.0, and Hollywood's IP Nightmare

VP Land Season 5 Episode 6


Beeble's SwitchX transforms video backgrounds and relighting without destroying faces—potentially the most practical AI filmmaking tool yet. Joey and Addy test it live, debate LED volumes versus AI workflows, and break down Seedance 2.0's official release amid deepfake controversy. The hosts also critique Darren Aronofsky's AI-generated '1776' series, comparing it to traditional filmmaking and questioning when AI should replace real production.

--

The views and opinions expressed in this podcast are the personal views of the hosts and do not necessarily reflect the views or positions of their respective employers or organizations. This show is independently produced by VP Land without the use of any outside company resources, confidential information, or affiliations.

I did a test with us and that's us. I could run another test and start through the, 

oh, the cyberpunk alley, dude. Dude, this is like the most cliche example. I love it.

Alright, welcome to Denoised. Addy. Good to see you.

Good to see you, sir. 

All right. I'm excited about this one. I feel like this week, all over online, it's been Seedance, Seedance 2.0. Yeah. But, um, I've got, there's one that I'm more excited about from Beeble, a new tool that, uh, just came out. We got a sneak peek, we were doing some filming a few days ago and got a sneak peek at it, but it was officially released today.

This thing feels way more practical in the AI world for actual filmmaking, mm-hmm, than, uh, most of the other stuff that we've seen.

All right, so what's, what's it called? Break time.

It is called SwitchX from Beeble. I also wanna clarify, because I've had this confusion after I've, as I've talked about Beeble to other people.

This is Beeble, the AI VFX tool set. Um, this, not the artist, Beeble the artist, because I've, I've been like, oh, you know Beeble? And they're like, yeah, I know Beeble, he's an R&B, he does this stuff. I'm like, different people. Yeah. So Beeble's main thing was this really cool kind of, like, breakdown: you give it a video, it breaks it down, and it does this AI relighting, and it gives you all the elements that you could then bring into Blender or Nuke or, uh, whatever your compositing tool is.

And it was kind of this good middle layer. But the problem was, they had an online tool and it was okay, but you still had to know compositing and, like, be good at VFX. Uh, it wasn't sort of a one-stop shop. But this new tool, SwitchX, it's basically video-to-video transformation, but it does not mess up your source video and it does not mess up your people.

It is the best preservation that I've seen in their demos and a couple of tests I ran that keeps what you want and changes everything else around you. Kind of like that Make Muppets workflow that I always keep talking about. I'm like, that thing was cool, but you gotta like run Comfy and it's a whole crazy thing.

And you have to mask two clips manually and all. Yeah.

Yeah. This one does the mask. But let me, I, I can break it down, 'cause I have a, I have a demo up, but, uh, and some clips.

nice. Okay. 

But basically it's got this, uh, you know, you got your source footage, woman walking in the woods. Right. It'll also, it'll automatically do the alpha, and it runs a lot of the kind of Beeble secret sauce stuff under the hood.

To, like, mask and relight your subjects?

Yeah, the relighting engine. Yeah.

You give it a reference image, like your first frame, you style, using Nano Banana or whatever you want, to style what you want the scene to look like. And then the video generation uses that first-frame reference and the masking to generate everything else.
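To picture that flow, here is a rough sketch of the steps as Python pseudocode. The function names are hypothetical shorthand for the steps just described, not Beeble's actual API:

def switchx_style_transform(source_frames, mask_subject, edit_image,
                            generate_video, style_prompt):
    """Hypothetical sketch of the SwitchX-style flow described above.

    mask_subject, edit_image, and generate_video stand in for the models
    a real tool would call; they are passed in as callables so the sketch
    stays self-contained.
    """
    # 1. Auto-mask the subject in every frame (the AI roto step).
    masks = [mask_subject(frame) for frame in source_frames]

    # 2. Style the first frame into a reference image (e.g. with Nano
    #    Banana or Photoshop) showing what the new scene should look like.
    reference = edit_image(source_frames[0], style_prompt)

    # 3. Generate the output: everything outside the mask is repainted to
    #    match the reference, while the masked subject is relit to blend
    #    in rather than regenerated.
    return generate_video(source_frames, masks, reference)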

Mm-hmm. But it preserves, it'll relight your subject so they blend in, but it'll preserve everything. Uh, it'll preserve them. So let me go to,

well, it's doing much more than relighting Joey. It's, um, it's changing costume. It's adding footsteps in the snow. It's. Putting you into that background. So like it's doing a lot of heavy lifting.

Yeah. Everything else around the person, but even the subject itself. If you go to the Hoon video. So this was their launch kind of demo video, and so what we're seeing on the big screen is the final output. So Hoon's just sitting in his office.

So that, that lighting feels really accurate to me. Like, uh, if you go back and, uh, just, yeah.

I mean, we could just keep playing the city. Yeah. Yeah. He's getting relit based on the environment.

Yeah. 

I mean, the relighting you're doing is based on your reference image. So you give it first frame, have it modify your reference image with the new lighting that you want. Then it, 

yeah, so much of it is, uh, white balance, um, like they're getting the white balance right.

And um, also it is highly aware of where the strongest light in the scene is, and he is sometimes underlit because there is no light in front of him, which is accurate. 

And it could just be a matter of getting your source image better. Or having, you know, that, that, that having that correct? 

Yeah. Like, uh, if you, yeah.

Some of them are overly contrasty. Like the last example in the, in the NBA gym. I wanna see the relighting here. 

This is the relighting. Game of Thrones throne.

Yeah. 

Yeah. This is, see that top down light? Yeah. All that shadow under his eyes and nose. That's nice. 

Yeah, I mean, the main thing for me is, if you, it's hard to see on the smaller screen, but the source video and, like, what they look like, yeah,

doesn't get messed up. Because, like, we've had video-to-video with Runway, uh, Aleph, or, uh, even Kling, and Kling, like Kling O1, has been sort of the best I've seen, but it still messes with the face a little bit. Yeah, you get a little bit uncanny valley. This does not mess with the face.

We're so, like, human beings are so sensitive to facial distortion.

Like you stop looking like a person the minute it's like 99%. 

Yeah. Just, like, that 1% makes all the difference.

Yeah. Or even that 0.1%, where somebody's, like, a little bit weird, or, and there's, like, uh, something, something feels off. So, uh, I did a test with us. Hey, that's us. I could run another test.

And so through the,

oh, the cyberpunk alley, dude. Dude, this is like the most cliche example. I love it.

So basically the process, you give it an input clip, and the limitations right now are,

Oh, so the, now we're looking at, everything's web-based too, like no Nuke needed.

Exactly. And that's, like, the video, the, the ongoing saga video that started with Beeble, that we were doing for, uh, the channel here.

That'll be out hopefully in a few weeks, now that we've hopefully finished filming. But, um, it was supposed to be, it started off as, like, recreating the One Battle After Another chase scene, uh, with an iPhone. And we shot some stuff with Beeble at their, uh, studio, at the Beeble-slash-partnership studio, in, like, November, and,

part of the delay, besides the holidays and stuff, was needing, like, a new compositor to put it together for, like, that last mile, right? To finish it. Yeah. And then in between, then, Conrad is like, that test clip has, like, unearthed a lot of issues and also helped develop, like, four different new products, like an Unreal Engine plugin from Beeble.

And it was also, like, what are the test cases to develop this SwitchX thing, where it's just like, oh, what if you don't need to have Nuke and all that stuff, and you could just kind of do this here? So now that we have this new workflow, the, the, you know, the video will get done faster.

Dude, dude, I, I mean, look, I, I'm, I'm wearing a virtual production hoodie right now. Beeble has started the beginning of the end for LED volumes.

You still think it's gonna be completely, like, replaced? Or, I still feel like there's still, I still feel like, if anything, AI can be another boon to virtual production workflows at a higher level, like if it just drastically cuts down on VAD and stuff.

You're talking about two different things. Virtual production, the big umbrella, I think there is, um, camera scouting, location scouting, you know, all that. Like the, what's that company we talk about? 

You mean just like, like just building a, like, V-cam stuff?

Yeah, just, oh, like World Labs kind of thing.

Yeah.

Yeah. So I think that stuff will definitely get supercharged by AI and get better over time. What I think is the end of the road is specifically a very niche tunnel of virtual production, which is LED volumes, because it's so expensive, it's so hardware intensive, it requires so many people at the same time to run a stage.

It's physical production, which by nature is so expensive. So the two big promises of LED volumes were accurate lighting and background replacement, and Beeble's doing both right now. Okay, so maybe not to the level of quality that we need for, you know, a 6K, 8K sensor.

Mm-hmm. 

It's not there yet. The stuff does look soft.

I'll give them that. But this is the beginning of the end for that specific avenue of virtual production, in my humble opinion. I helped build the industry. I, I sold many LED volumes, I built them by hand, and, uh, every time I was doing that, uh, what I was wondering was, like, what is the ROI on this? Are we, how long does it take to recuperate this?

And we've done projections, we've done forecasting, you know, like, within 10 shoots the studio pays for itself, da, da, da, da, da. But, like, now, if this is running on the web and it's, like, you know, uh, a $1 API call, you can run this a thousand times to get the result that you need.

When it comes to your virtual production costs, how much of the breakdown is like just the actual day on the stage versus how much is building out all of your 3D assets and worlds and VAD and all that stuff?

Great question. So my numbers are, my, uh, maybe just a year or two outdated, but on average, if you're gonna commit to an Unreal Engine scene, you're looking at anywhere from 50,000 to a hundred thousand. That's two to three artists over a few weeks. That's for tier one, maybe even high-end tier two film.

Right. If you're doing plates, it's obviously gonna be a lot cheaper. Studio, uh, rental rates, so, for the big badass ones, like, you know, the one in Australia, mm-hmm, those places like that, you're looking at high five figures to low six figures a day. Then for the mid-size stage, you're looking at, you know, anywhere from 10,000 to $50,000 a day, and that's without you hiring the crew.

Right. Then a cinematographer has to come in, the grip has to come in, the ACs. Yeah. A producer, for any production.

Yeah. It's physical production, right? Mm-hmm. You're putting people in a stage, and there's safety and, and all that stuff. Like, I know, I know for a fact that was even cheaper than taking people on location, right?

Like if you're gonna shoot somebody, if you, like, uh, a perfect example is, um, Society of the Snow, that Netflix movie. It took place at the top of the Andes Mountains, right? Mm-hmm. Like, how expensive is it to take a crew to the top of the Andes Mountains, or even any mountain with snow, for that matter?

Mm-hmm. 

Right? Incredibly expensive. Then you bring the snow into an LED volume. Yes, that makes sense. It's way cheaper, way more controllable, the weather's not an issue, I get it. But now, if you shoot that crew on a green screen, and then you can replace that background with Beeble, with the snow background, and then relight the actors to, like, a diffuse, cloudy snow environment, you have just eliminated the need for LED volume work.

In my opinion. 
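Back-of-napkin, using only the rough figures quoted in this exchange (none of these are real quotes, and the $1-per-run API price is the hypothetical one mentioned earlier):

# Rough comparison using the approximate numbers from this conversation.
unreal_scene_build = 75_000   # midpoint of the $50k-$100k Unreal/VAD build
stage_day_rate = 30_000       # midpoint of a $10k-$50k/day mid-size stage
shoot_days = 3

led_total = unreal_scene_build + stage_day_rate * shoot_days
print(f"LED volume, before crew: ${led_total:,}")       # $165,000

ai_total = 1 * 1_000          # hypothetical $1 per generation, 1,000 tries
print(f"AI workflow, a thousand runs: ${ai_total:,}")   # $1,000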

Yeah, I get what you're saying. I think it'll probably still be two branches of, like, yeah, maybe tier one, top-tier stuff, some LED volume use. They want to have that in the filming sense, like have that feedback and have that environment while they're filming. But it is like a premium experience, versus you could film in a green or blue stage, uh, which is sort of what the Beeble VPX stage over in the Valley is, kind of this big blue box.

Mm-hmm. 

Or, I mean, look, also to, like, throw stuff out there even more: uh, we're gonna do a shoot, and we're gonna do, we're gonna do gray screen. I've been hearing a lot more gray screen uses.

Yes. Gray screen is really good for AI. Yeah.

Yeah. Just because, if you're relying on the, uh, the AI rotoscoping tools have gotten so good that, mm-hmm, gray is good. And then, gray, uh, you don't have the spill issues that you get with green screen. Exactly.

It's a neutral spill.

Saves, uh, a lot of the headaches. We're gonna do, we're gonna, I'm gonna test it out first, and then we're going to shoot it, before I commit to that. But yeah, I mean, it could just be a soft gray box and, mm-hmm, this, this workflow.

Yeah. I'm not saying, like, LED volumes will go away overnight. I, I think there's, like, the driving plate stuff absolutely makes sense. Yeah. That is the number one use case. Like, you put a car in a volume. Yeah. All the reflections are free, and the, you know, there's a whole plate workflow behind it, it's come, come along so far. I'm saying, like, historically we went from shooting on location, to green-blue screen, to LED volume, and now we're gonna kind of come back down to gray screen, and then sooner or later we're gonna go back down to shooting anywhere. You won't even need a gray screen, um, with the depth map innovation that's happening now.

Yeah. And I mean, that's even some of the showcases, like none of these are on a, these are all shot in some environment. This dude's got a door behind him and it looks like he's in 

Yeah, he's in, yeah, exactly. 

A street. Yeah, podcasters, uh, this girl is always in their promo videos.

Stock.

Yeah, it looks like a stock photo. All right. Back to us.

Sorry for the segue.

We'll read through the workflow of, uh, how, what you could do with this anyways. So, what are limitations? Well, speaking of how this is not gonna completely change everything just yet, I'll upload a new clip. So, you upload a video clip.

So, current limitation: 240 frames, so about 10 seconds if you're at 24 frames a second. One nice thing, and not a lot of editors have this, is, uh, it gives you the option to trim the clip, which I really like, because I've been on other apps where you upload the clip and it just gives you an error, like, your clip's too long.

That saved them so much GPU, I bet.
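If you'd rather pre-trim on your own machine before uploading, a minimal sketch with ffmpeg (assumes ffmpeg is installed; the 240-frame/10-second cap is the only number taken from the episode, and the file names are placeholders):

# Cut a clip down to the 240-frame cap (10 s at 24 fps) before upload.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-i", "input.mov",   # placeholder source clip
        "-t", "10",          # keep the first 10 seconds (240 frames at 24 fps)
        "-c:v", "libx264",   # re-encode; the upload gets compressed anyway
        "output_trimmed.mp4",
    ],
    check=True,
)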

Yeah. Yeah. So then, uh, you upload your video, and then it runs this detecting-subject feature, and it automatically does a pretty good job masking me. But you could come in here and add additional masking, you know, click around.

That's, that's good masking. Yeah.

Yeah. And it shows you the preview of the mask. Or you could also just use the entire image, if you wanna do something like keep everything the same but, like, modify one specific thing, like, uh, change the clothing on your person, you know. So you don't wanna mess around with the entire scene, you just wanna keep them as is, but add something to them or modify something with that.

Mm-hmm. And then they have Nano Banana built in, but you could just take your first frame and, you know, Photoshop or whatever, modify what you want the first frame to look like. You got any, got any ideas of,

Oh, uh, background ideas. 

Yeah, background ideas.

Plenty. Let's go. Roman Coliseum.

Uh, okay. 

Like with, uh, top, uh, I, I, I like the top-down sunlight thing. Um, I think it'll play well. Noon lighting. Top-down sunlight. Yeah.

Harsh, top-down sunlight.

Yeah. 

Generating, that'll take a little bit. Uh, so yeah, the, the limitations. I mean, you're, we're still in the AI world, so it's, you know, roughly a ten-second clip cap, plus, uh, 1080p max output at, you know, API, uh, you know, RTB.

So you're not gonna, if you were shooting on, I dunno, something, you know, a RED, Blackmagic, Blackmagic RAW, mm-hmm, mm-hmm, you're gonna, that's all gonna be super compressed. And they're, they, they've said, like, the ideal workflow, oh, this, you know, me.

You gotta keep the framing. Yeah. 

Yeah. I gotta, 

These guys should talk to Topaz. 'Cause if, if you pair this with really good upscaling, like, that's a lot of the use cases for VFX. That covers a big portion of it.

Yeah. I mean, they would still say, you know, they're, in talking to them and stuff, like, this workflow, you know, they're not saying this is a VFX-pipeline replacement right now. This is, like, for web, fun stuff, because the other pipeline that they started before is specifically designed for high-quality footage.

It'll create the lighting maps, uh, the PBRs and all that stuff, and then you composite that with your original footage in Nuke or wherever, with their variety of plugins. So, like, they kind of have that pipeline already. But, um, okay. Yeah, let's,

You got a hard shadow on you. Let's go with that. Yeah. See how it behaves.

And then you got your prompt and it kind of just auto generates where you can modify if you really want to. But I'm gonna leave it and then, yeah, you got your resolution option, so I'll just leave it at 1440. 

I wonder if Beeble is doing any work for any of the big studios. 

I mean, I'm sure every studio VFX house is using Beeble Studio and their pipeline for,

Right.

Um,

whoever's,

I mean, I've also heard people say they like it not even for the relighting stuff, but just that Beeble's, like, rotoscoping is just really good as well.

It is just good.

And,

wow.

Yeah. Uh, 'cause I've heard some VFX artists say, like, they, they, they don't even need the relighting, they just like it to make keys.

This is the clip I did before, and so you could see.

Yeah, I mean, just knowing how we light our studio, this hasn't really impacted our lighting at all. I mean, I still have the highlight on the top of my head from the light that I put here, see, this, this thing. Uh, the only thing it did was really add that purple tinge on the table from that neon light.

And that could have also just been my, like, this was the reference image I gave it. So, like, I could have also just not have done as great a job at the reference image for the relight.

Oh yeah. Yeah. I think that's it. Yeah. 

So it really comes down to, like, how good that source image is that you give it, of, like, what you want it to make, and, right,

it honors that. Oh, there's a compare button. Oh, that's cool.

Oh, wow. Yeah. That, that's, oh, that's even, oh, that really helps sell the effect. So yeah, there's, that's a huge white balance shift. That's a good marketing tool.

Yeah. 

Yeah. I mean, I, I mean, I'm just looking at our likeness and like, you, like I'm me, 

you're you.

Yeah. 

And look at the hand, look at your hand.

Yeah. Right. 

And the movement. And your hand wasn't even in the shot when the first frame started. 

That's true. 

It feels like a little gap sometimes, but I mean that, that's kind of crazy. 

That does help sell the effect.

I was like, 

it's not doing much, it's doing a lot. Good, good job, Hoon and the team, you really put this great marketing effect in just for us to demo. Before the wipe, the wipes,

man, the wipes will get you. 

That's right. Oh, cool. This was, uh, pretty fast. Oh, it had a rogue frame of you. It did. I should have trimmed that.

That was, that was QuickTime being annoying at trimming clips. This is a shorter clip. Shadow. I mean, like,

as you're moving your face, that shadow is pretty static. Like it's, it's behaving accurately. 

Yeah. 

Like, half of your face is in that shadow. Even, like, the, the shadow of the wire on your left shoulder is top-down.

Yeah. Dang dude. 

Well, that shadow is in the ref. In the source. 

Oh, it is? Okay. Oh yeah. I see. 

But it did amplify it, did, uh, make it more contrasty.

Yeah. Like the entire right half of your body is, uh, darker from the, the actual shadow.

Yeah. And I mean, look, my face, it didn't mess up my face, but my face is in shadow, 

even darkened the laptop down, like the laptop's crushing.

Yeah, yeah. Yeah. It is annoying with this, this trimmed clip. Yeah. So I, I'm excited to mess around with this more.

Me too. Yeah. 

How do I get, how do I get access to it?

It is publicly rolled out into beta. All you need is any, any paid Beeble plan. I think it starts at, like, 20 a month. Uh, okay. And usually Beeble runs on credits, but I think right now they're showing that this is free during the beta, so,

Nice.

As long as you sign up, you can start generating stuff. This also might be a secret way to, uh, use Nano Banana and not use credits right now. Sorry, guys.

Oh, for Nano Banana.

Nah, I just do the Freepik infinite thing.

Yeah, I know. I've got, yeah. Yeah. We're on the Freepik, on the Freepik, uh, infinite chain. Yeah.

Alright, cool. Yeah, I'm excited about this one. 

Very, very cool. Uh, hats off, Beeble. I know this is, um, there's a, there's a lot of demanding people that are looking at this, nitpicking, but you guys are really paving the way for AI VFX.

The coolest, like, video-to-video without messing up your source video that I've, that I've seen.

Exactly. All right, next one. Seedance 2.0. 

Seedance two. Yes. 

So how was, how was your feed this week?

Uh, you know, I'm starting to get, uh, a little inundated by best video model ever, every week. 

Hollywood's cool. Yeah, because when Kling 3 came out two weeks ago, that was the best ever. And before that we were raving about Kling O1, and before that, 1, 2, 6.

And, um, like, it's hard to tell. I think somebody in our comments on the last video was like, you need like a, a metric. I, I forget what kind of metric, but we need to have like a more consistent ladder of where we rank these models. And for that we need to test them. 

Yeah. 

Which, uh, which is hard when you don't have access to the models.

This one is, like, the weirdest rollout. This felt, reminded me of Nano Banana Pro again, or Nano Banana again, because, like, I kept seeing clips online since, like, I don't know, last weekend, and now it's Friday. But it was clips from the most random, like, uh, accounts or products that I'd never heard of. And I was going to ByteDance's website.

I'm like, did they officially release this? And they hadn't. And it was, like, days where I kept seeing Seedance 2 chatter. If you, there was another, there was, like, a Seedance AI, and it was, you know, I remember the nano-banana dot-com, dot-com. Yeah. And yeah, so it was weird, that I think some people had, some companies maybe had early access and leaked it, or, yeah,

gave it to some influencers and stuff. Regardless, now it is officially out, at least announced by ByteDance. I still haven't seen it on fal or, uh, Freepik yet, but I'm sure that'll come. That aside, it's more, like, been chatter, but I, like, have not fully believed anything I've seen, because there was not an official release.

Now there is, so now we can break down, like, what is actually officially in this new model.

Yeah, 

I mean, honestly, I mean, we, we've seen a lot of the demos, but it feels, you know, similar to, to Kling 3.

Well, it's, uh, it's, uh, catching up, right? Like these guys have to kind of come up to a certain level, and then one will up the other. So I think, um, the industry as a whole is catching up to Veo. Kling has caught up to Veo, now it's ByteDance, it's Seedance's turn to catch up, and then when everybody's caught up, then somebody will surpass them. So there's like a rung in the ladder.

I mean, this feels, let's see. I mean, we'll break it down. Maybe there's something that is, uh, you know, a level up from, from Kling. But, you know, it's got audio and video generation in the clips. It has, you could give it a video input now, uh, supports images, audios, and videos as references. So we see this kind of reference video of two guys fighting, and reference images of, uh, my God, people are gonna kill me if I don't, I've,

whatever the fighter guys are. And then Street Fighter, it turns into Street, is a Street Fighter, and then turns into fighter.

It's not actually Street Fighter. Street Fighter-esque.

Yeah. Why is it that the anime examples are always spot-on? Like they nail those. It just sucks for the anime industry that, like, demos are always so convincing.

Yeah. Gimme some stats, man. Where's your, where's your,

we need numbers. Yeah. What do you think? Is it still a 10 80 p model or are we going a little bit higher here? 

I would assume 4K, but let's see. 

Okay. 

I dunno if it says 1080, but let me see. That could be total BS. I need that paper. Am I missing something?

Yeah. I mean, what you're going through is, I think, what everybody's going through with Seedance 2. It's, like, very mysterious and vague at the moment, but there is a, an illusion of high quality that exists on the internet.

Alright, we'll go off whatever Higgsfield's advertising, which, take that with a grain of salt.

One-click video recreation. Yeah, I mean, we'll talk about this in a second, 'cause most of these have been, oh, Matrix.

Straight rip-off, man.

Yeah. And this is, at least this doesn't have, oh God, Matrix characters.

Somebody just got shot in the head. 

Uh, feed up to 12 assets at once. Yeah. Okay. So we know: images, video, audio clips, and text.

Frame level precision, what does that mean? 

I think that's the same thing that Kling has, and the other models, of, like, multi-shot generation of the same video. So, you know, the same idea, where you're making, uh, Seedance 1.5, or even 1.0, was the one where they, that was an accidental discovery, that if you prompt multiple shots in your prompt, you can generate, like, your ten-second video but have multiple shots. Yes. And it would be in the same generation, so it's consistent.

Kling 3 does that. Yeah,

Kling 3 does that. Mm-hmm. And then obviously, 'cause Seedance was the first to do it, they do it. But, you know, I think, you know, even more consistent multicam storytelling, generate new storylines, or extend existing videos with natural shot connections.
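For what it's worth, a made-up example of that multi-shot prompting trick, several shots written into one prompt so a single generation cuts between them (the wording is ours, not from ByteDance's docs):

# Illustrative multi-shot prompt of the kind described above. The
# shot-by-shot phrasing is the technique; the exact syntax a given
# model expects may differ.
prompt = (
    "Shot 1: wide shot, a courier in a yellow jacket bikes down a rainy "
    "city street at dusk. "
    "Shot 2: cut to a close-up of her face under the hood, neon signs "
    "reflecting on wet skin. "
    "Shot 3: low-angle shot as she skids to a stop at a crosswalk. "
    "Same character, wardrobe, and lighting across all three shots."
)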

I mean, the frame,

dunno what it's like. What are the, what do you think the max generation, max length, or duration is?

Yeah. I mean, I'm just looking at, like, uh, temporal consistency, of, like, as cameras move around, pan around. That seems pretty solid on those demos.

Yeah. I mean, the stuff is slow, you know, I mean, the quality's there. Yeah. But I feel like, a lot of the models now, it's like, that is the bar, between Kling and Veo. Like everything's looking more realistic and sharper, and less, like, weird stuff happening.

We're so close, we're so close to a model, that I think will come out this year, and I predicted it, that will be our quote-unquote VFX model.

Like we're actually gonna use it for film. 

Yeah. All right, so, uh, this is from Google's summary, so I'll also take this with a grain of salt, but it's saying 4K resolution, 15- to 20-second duration. That could be total BS 'cause this is an AI summary. But lemme see, what's, is it on fal yet?

I don't think so.

I checked. 

All right. Sorry. I wish, usually I like to have more details than this, and I feel like we're scrounging, but it's been such a weird, this sort of summarizes the whole week. Let's, let's

just cut to the Tom Cruise thing. Come on.

Clips popping up, and then trying to figure out, what does this model actually do, and how are people using this model?

Okay. Tom Cruise. The Tom Cruise clip sort of, uh, sent everyone into a tizzy.

So, yeah, I saw this on Variety. Variety covered,

is that okay? You know, as, as, like, an IP-infringement thing, obviously.

What, what IP here?

The people. The,

no, I'm joking. The dudes made an action film with extremely detailed lookalikes of two famous,

I, I gotta say, that's really good. Like, uh, the likeness of both of those guys, at their current age, that's spot-on. Even, um, if you look at, like, the hand contact, where they're blocking each other and, like, the arms are making contact, that stuff is accurate.

Yeah, I mean, the clip looks good. I've seen a lot of Lord of the Rings stuff. This one, this one actually was kind of funny, where they, they just fly to Mordor and drop the ring in, no walking, you know,

we joked about that, like, 20 years ago.

That is, yeah, that has always been, that was always, that was always the, like, wait, if the Eagles could rescue them, why couldn't the Eagles just, yeah, like, you could've saved nine hours of film. Yeah.

A live-action adaptation, on a tag. Okay, so the bottom is generated live action from the top, which is anime. We're looking at, uh, a spaceship over some sort of land,

a blimp, a Zeppelin. Basically turning an anime film into a realistic-looking live action.

It doesn't work. It doesn't work at all. Yeah, because anime is so limited in animation, like, 

Lemme tell you what Twitter loves. Twitter loves when people take a beloved anime film and try to make a live action. The gist is, the bulk of the stuff that has been popping up from Seedance online has been hugely infringing, using existing IP and real people.

And that's led to, oh, the Deadline backlash.

Sorry, not Variety, Deadline. Yeah, that's it. Yeah. This is what I was looking at.

So yeah. MPA calls on TikTok owner ByteDance to curb new AI model that created Tom Cruise versus Brad Pitt deepfake. So yeah, we've, this is, we've, this has been the recurring theme of the Chinese models.

Mm-hmm. A different view on, on copyright. Takes a lot of stuff that is copyright-protected, or likenesses, and it's easy to generate. I mean, I feel like this clip got a lot of attention, but I feel like most of these models can do this anyways. Yeah. Uh, maybe this clip just kind of broke, you know, more outta the AI bubble. But this is not really, this is not new, what you're able to do.

Like, um, if, if a model is training on publicly available data, it has the awareness of Tom Cruise and Brad Pitt already. They're all over the internet. So in order for them to not generate it, they actually have to put filters or prevention mechanisms after inference.

Yeah. It knows it, it just has to block people from doing it.

Exactly. It's built into the model already, so you can't fight them on, like, well, don't use the training data. Well, there are 10 million images of Tom Cruise on the internet, and it's really hard not to, right? Mm-hmm. Like, the, I, I think the studios should go after the post-inference controls, like, uh, finding a way to standardize that and make that, uh, very transparent, rather than the training data.

Yeah. And I still think that's where things will go. And it's like, yeah, I, first off, the stuff's already there. It's trained into the model, like, it's there. But yeah, you could add blockers and restrictions on what people can generate and do, which is, I think, what they're asking for.

Even at a prompt level, if you detect, you know, the phrase Tom Cruise, you can say, sorry, that's not allowed. Or something like that.
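As a toy sketch of that kind of post-inference guardrail, a blunt prompt-level blocklist; a real system would need fuzzy matching, lookalike detection, and checks on the generated output too, and the name list here is just an example:

# Toy prompt-level guardrail: refuse generation when a protected name
# appears in the prompt. Example entries only.
BLOCKED_NAMES = ("tom cruise", "brad pitt")

def check_prompt(prompt: str) -> str:
    lowered = prompt.lower()
    for name in BLOCKED_NAMES:
        if name in lowered:
            return f"Sorry, generating '{name}' is not allowed."
    return "ok"

print(check_prompt("Tom Cruise fights Brad Pitt on a rooftop"))
# -> Sorry, generating 'tom cruise' is not allowed.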

Yeah. I mean, we had this issue, uh, last week when we were doing the Genie, and we were like, Dodge Charger car, and it was giving us an error but it wasn't saying why. And then when we stopped using Dodge Charger, then it generated, and it was like, uh, it's probably just blocking, like, the specific copyright word or trademark word.

Right. But yeah, these models know stuff. I mean, the, one of the projects we're working on, there's a character named Mickey, and sometimes the name ends up in the prompts, and then in the outputs we get Mickey Mouse, like, holding, holding shotguns, and, like,

so, I wanna see the bloopers reel on that, you know?

So yeah, the models, the models know this stuff. And you, you could, that was one of our predictions too, that this year that would get sorted, fair use or not, or more clarity on that. But yeah, as we've said before, the, the focus should be on the outputs and what you can do with them.

I mean,

the bigger picture, I wonder, is, like, three to five years from now, when these models are really good, I'm, I'm talking, like, indiscernible-from-reality good, and anybody can generate Tom Cruise, Brad Pitt, whatever, is Tom Cruise and Brad Pitt actually gonna care? Or are they gonna embrace the fact that people are bringing them back into the, um, cultural sphere with, like, a cool meme or something viral?

I think the issue is when you start having videos of them doing stuff that they don't approve of, uh, or, like, or, you know, is disparaging, and, and that, it's just, you can't tell that it's fake.

Harmful content, like this, this, this, you know, fighting-on-the-roof thing. Like, you know, it's all like, haha, what's the big deal, like, whatever. But, you know, whatever, I don't wanna say, like, just bad examples of, like,

Tom Cruise, visiting the Epstein Island or something, 

right? Or Yeah. Or even like, stuff like that. 

That wasn't me. 

Or, or even, uh, I, you remember that crazy Brad Pitt scam, where that woman thought she was having, like, a DM relationship with Brad Pitt, and then he was in the hospital and he, like, needed money, and it basically, like, wiped out her whole account. But the images from that were, like, did you ever see the, I think we talked,

I don't, I don't. This is, this is weird. Okay, let's, let's take a look. Brad Pitt hospital scam.

I thi, okay. This Post article says AI-generated Brad Pitt in hospital. "AI-generated" is very generous. This looked like really crappy Photoshop.

A French woman duped out of $850,000 by a scammer posing as AI-generated Brad Pitt. And, and this is what, uh, her name was, a, uh, oh,

that is bad Photoshop. It's not even AI. Yeah.

Yeah. So she was getting sent pictures of these, uh, and believed that Brad Pitt was in the hospital and needed her help, and that she was in a relationship with Brad Pitt.

This is a bad example. So, like, this is a bad-quality example. So just, just imagine when it becomes so good, and then you just add, well,

I was just thinking, like, just put this through Nano Banana and have it, like, upgraded.

That would be so much better today.

That's the, that's the risk, when, you know, you could just make these photorealistic scams and scam people outta money, and they think they're, you know, talking to the real person. Just be suspicious of everything online. If you can't, actually, I don't even know, if you can't see them, a real person, I was gonna say, if you can't, like, FaceTime with them. But now, you know, soon Krea, real-time AI, is gonna be so good, you could, you know, probably have some person transform in real time into Brad Pitt or whatever.

All right. Darren Aronofsky. This one, you know, happened a few weeks ago. We never really talked about it. So Aronofsky's Primordial Soup company, which we talked about last year when they sort of announced a partnership with Google DeepMind and Veo, right?

One of their first projects out is this weekly-ish series with Time magazine, where they're kind of dropping shorts, short episodes about the, uh, Revolutionary War, the American Revolutionary War, leading up to, uh, Independence Day and our 250th anniversary. The first few episodes are out. Reactions to the look of them have been,

it sucked.

It sucked.

Mixed.

We're gonna, we're gonna watch it together again,

we can play some clips or something. I don't know.

Okay. Okay. Uh, alright. As you're pulling it up, did Aronofsky actually review this thing and sign off on it, or was it just one of their people that just, you, you know, phoned it in or something?

Yeah, I do think that, from what, what they've said about the process, you know, they're like, it was SAG voice actors with the voice performance, but then everything else was AI-generated.

Yeah. That's so irrelevant. If you look at the actual, like, visual choices and directorial choices, like, I don't know, it just, it just has this synthetic sheen all over it, which is so distracting from what the, yeah, like the skins.

Like, you're talking about, like, the skin tones and everything right here?

Yeah. Like, everything is uniform and cut-and-paste and very textbook, you know, like high school textbook.

To me, it feels more like, well, like, for one, 'cause AI models keep changing so fast, like, what, what timeframe was this made in? Just with the model availability. And then the, the bigger thing is, like, this is part of the challenge, like, Primordial Soup has this partnership with Google DeepMind, where they're locked, I'm assuming, locked into only using Google products. Which, like, Veo is great, but in the landscape of everything, especially something like this, there are no, uh, motion-transfer human performances, video-to-video, right? Anything.

And Veo,

the other big limitation of Veo is the inability to customize a look and feel, right? I mean, you could throw in a month, we tried this. But, like, if you want a consistent look and feel across the entire film, you're probably better off building LoRAs, or,

yeah, if you wanted to. I mean, or just get good at prompting.

I mean, I'm just thinking more of, like, for something that's so heavy on characters speaking. And then there, it's like, cool, yeah, you're using the voice actors, but, like, what would probably be even better is if you had their actual performance and then you could,

yeah,

motion-transfer that. There is no model from Google that does that right now.

Yeah. And also the like, there's so many like closeup shots and it just, the whole thing just feels like, um, it was just kind of cobbled together with what you can do with AI today. 

I'm guessing the workflow was, like, they recorded the performances, and then prompting, text prompt, you know, text-and-image prompt with what you need them to say, and then just keep regenerating until you can get an output that has the AI mouth moving, that syncs up convincingly enough with the audio you recorded. And then some creative editing to cut around when it falls apart.

Yeah. It doesn't look like they even did color grading, for that matter. So, okay. While, while you have this trailer up, just real quickly, open another tab of YouTube and do me a favor and search for The Patriot, Mel Gibson, and let's watch that side by side.

Even though that wasn't, uh, considered a good movie at the time. Like, it was an okay movie in the, I think, the late nineties, early two-thousands. I remember watching it in theaters.

or even, um. Uh, the John Adams miniseries. 

Yeah. But, like, just, just, if, just go to some shots. Like, if you look at this, this feels like cinema, like that silhouette shot there, he's backlit with fire. There are actual battles with blood, and, and there's real people in real costumes, not just, like, regalia British costumes. Smoke in the environment. I mean, see, we, we're, we're trying to, like, just okay ourselves to think AI is almost there, but then you compare it to something from 30 years ago. We're not almost there.

That's a good point. 

Like, this is what that period of a film should look and feel like. Like, it has a layer of authenticity that you just can't, like, the, the AI just can't replicate correctly. I mean, you get the basics right: you get a guy in a red coat, yeah, sure. But does he look menacing and evil?

I think this also gets to our bigger question too, where it's like, I don't think it's ever gonna get to the point, or, like, people would want it,

where, like, does the, you know, Aronofsky 1776 series, something like that, uh, you know, or is the vision of the future, like, oh, something like that can replace production for something like The Patriot? Like, I feel like The Patriot, with, like, real people, and, like, you know, getting into the nitty-gritty of, like, being on a location, and maybe augmenting a lot of stuff with AI effects, but still, at the core,

Yes. 

Right. 

being real people. I don't think the audience would react to that being replaced, or, like, would want that experience replaced completely with something synthetic. I still think the, you know, 1776-type thing is a good example of just something where it's like, it's such a low-budget project that there's no way it could exist, or it just wouldn't exist, money, uh, budget-wise.

Yeah. That we get other stuff like this. 

Absolutely. There's probably zero money for any physical production, so you can't put a real person in, VFX on top of that, there's just no budget for it.

Yeah. For the budget to make that for real, or even just a VFX or Unreal, like, uh, 3D project, it would be too cost-prohibitive to do something like that. So yeah, I think it's more just, like, exist or not exist, but not eventually, repla, not replace real actors and being on location and, absolutely, getting nitty-gritty.

You're right. Um, just a note to Mr. Darren Aronofsky. I mean, look, if you're gonna put your name on something like this, like, that is gonna, that's gonna tarnish your reputation as one of the best filmmakers in the world, right?

They're just dropping the hammer. 

I mean, there's, there's only a few people in the AI world that are quote-unquote traditional filmmakers. He is one of them, right? And when, uh, when they drop something with AI usage, we notice. And when we see this, to us it just feels like noise, and it's not the mark that we were hoping that they would hit.

Yeah. I'm, I, I'm, I'm curious about, just, like, the, what the timeline and workflow was, of, like, when did they start this, and what tools were available at the time. And it's like, you know, maybe it's, they're aware of all the issues, it's not the best, but it's the best that could happen with the tools at the time.

And it's just like, okay, well let's just go with that. 

Joey, you know, in cinema, that's not good enough.

Like, this isn't cinema, this is a web series. I mean, it's not, you're watching this on a computer, this is a web series.

Sure.

And Time magazine made a partnership with Salesforce, 'cause they used Slack to chat while making the project.

Yeah. At the end, it's like, Time Studios and Salesforce.

It just hurts us all in the film and television industry. Like, if we see this stuff, and if someone with, like, as big of a name as Aronofsky is,

like, yeah. It's like, oh, you guys are up to this now. This is what you're doing in Hollywood.

It's not good. Not a good look. 

Yeah. Well, look, I mean, I think the first thing that they came out with, it wasn't, you know, he produced it, but it wasn't, uh, him directing, but it was the, um, Ancestra, which was that blend of,

the one with the baby. Like, that was, the baby was really good. Yeah. The quality on that was way higher than this.

Yeah, I agree. Yeah. And so, you know, I think there's a variety of things coming out and, and, or, working on. I know this falls in the, you know, looks-like-AI-slop kind of bucket. But, uh, you know, I think maybe it's just, let's just do what we can with the tools we have and, like, keep moving this forward, 'cause it's only gonna get better.

Amen, brother.

Links for everything we talked about at denopodcast.com.

I know we went on, on our own personal tangents, about Brad Pitt, Tom Cruise, Darren Aronofsky, The Patriot, Mel Gibson. If you have any comments about your favorite movies, or where some of the AI applications are being used at the time, shout us out. Do a comment, do a like.

Thanks everyone. Catch you in the next episode.