Denoised

From Fox to Natasha Lyonne: Three Key AI Developments in Media

VP Land Season 4 Episode 28

Fox Entertainment quietly approves Runway AI for public-facing deliverables, marking a significant industry shift. In this episode, hosts Addy Ghani and Joey Daoud dissect this major development and what it means for other studios. They explore the actual capabilities of AI in film and TV production today, examining real-world applications and the necessity of custom training foundational models. Plus, they discuss Natasha Lyonne's upcoming AI film directorial debut.

I'm gonna bet, speculating here, that the other studios are just gonna watch this so closely because — oh, for sure. Of course. It's a canary in the coal mine thing. It's like, oh, Fox is going through with this. Okay, let's see what happens. Let's see what the public output is. Let's see if it's actually good. In this episode of Denoised: Fox clears Runway AI for use in public deliverables, the actual state of AI today for use in film and TV, and Natasha Lyonne is making her directorial debut with an AI film. Let's get into it. Alright, what's up, Addy? Hey man. Welcome back. Thanks. Good to be here. Let's get AI into it — into it. Yes. A lot of AI stuff happening. First up, this is literally hot off the press — it came across my radar from newsletter friend Janko, who runs the Lowpass newsletter, and it's his headline story: Fox Entertainment has approved Runway AI for use in public-facing outputs. That's incredible. It's such a big green light for the AI industry as it pertains to M&E. Huge. Yeah, I mean, this has always been a back-and-forth, questionable thing — is it commercially cleared? Is it legally allowed? Yeah. So it seems like this might have been happening for a while, because it's a screenshot from Fox's internal Q&A, and I believe it goes back to February. So this might have been okay since February; it's just kind of coming out now. But yeah, the question was: could you confirm Runway is legally cleared for use in actual deliverables? And the answer was: yes, all clear. Please use it and share how you're using it, because it would be hugely helpful to justify the spend and fun to show people. Yeah. It sounds like Fox is still rolling it out as a test case and a nice-to-have feature, just to see how their employees are gonna use it. But the fact is, it's not just about having an ethical model, advertising that it's good to go — commercially safe, clean, whatever label you want to put on it. It's about the actual approval from the studio to use it. Yeah. A big studio is giving a green light that it's okay to use this right now. And also, that's not just necessarily Fox movies — that's Fox Entertainment's broadcast, sports... Yeah, there are a bunch of outputs. And we should also say, we don't know, and there's no indication, that anything has actually been used in broadcast. We don't know yet if they have published anything. For sure. And I think those things will come with time. This is the first of many steps toward us seeing public use of pixels generated with AI. Mm-hmm. Other big studios — Disney, Paramount, the whole Viacom family — are they gonna follow suit? I think this is the beginning of other studios kind of opening up the gates, if you will. Yeah. And I'm wondering if everyone's gonna try to wait for some legal clarity or a legal answer — which could take years and maybe still never fully clarify what is and isn't approved — or just sort of leap forward and be like, okay, we're just gonna go with this.
I mean, there was some clarity from the Copyright Office, where if something is completely AI generated, then it can't be copyright protected. There are two sides to this from a large studio's perspective, because there's the main one: are you allowed to use generated outputs if they're trained on copyrighted material, and are you liable for copyright infringement? But then the flip side is: the stuff you create — can you copyright protect it? Exactly. Because that is obviously of huge interest for studios, to make sure that the stuff they generate is legally protectable. Right. And that Runway can't somehow have ownership over it — or even if it's not ownership, you know. And a lot of the AI companies — I know Fox is on an enterprise plan or something, so the issue is, if they generate anything from Runway, whatever plan they're on — yeah, the mega plan — they have the copyright protection to the extent of the law. But we know — yeah, we had the ruling from the Copyright Office that without human input involved, which I think is still probably gonna be debated, what that classifies as, it can't be copyright protected. A hundred percent. I'm gonna bet, speculating here, that the other studios are just gonna watch this so closely, because — oh, for sure. Of course. It's a canary in the coal mine thing. It's like, oh, Fox is going through with this. Okay, let's see what happens. Let's see what the public output is. Let's see if it's actually good. Then the other thing I'm wondering about is how it's actually deployed inside of Fox. Is it a secure cloud portal to Runway's data centers, or is it run locally inside the Fox studio walls, or is it just an API handshake? What is it? Yeah, like, are they still using Runway servers? I'm also just curious what the outputs would be. I'm guessing it would probably start with images — images, or just marketing assets, additional assets. I wouldn't even go that far, I think. You don't think so? It would just be previs stuff, concepting maybe in the development teams. Yeah, but we know about that. I mean, this is saying it's approved for distribution — not just that they're allowed to use it internally, but that it's cleared for use in actual deliverables. That was the quote from the Q&A. Yeah. I think it's still a long ways out, because it's one thing for Fox employees to have access to it and be given the green light; it's another thing to actually master the tool and get it to a level where the output is acceptable. Yeah, exactly. And I'm guessing, like you, that it would not be a case of generating a complete shot and inserting it — like generating a shot of a football player and using that in a promo video for the NFL or something. I don't think that's gonna be the case. I'm guessing it's probably just modifying assets, or background plates, or some element or something. There is another interesting point too, and this ties into the fact that it's not just for previs but for broadcast: the FAQ document did indicate that employees should tell
Fox's Broadcast Standards and Practices group if they were using any AI-generated video in any broadcast or production material. So it's just to tell them — not that it's prohibited; it's allowed, per the original statement. And also, their gen AI use guidelines require them to consult with their manager and business unit legal team prior to commercial or external use. It's like, whoa, whoa, whoa, let's just — I mean, yeah, Fox is massive and has a bunch of different units, so, right. But no, I think the big thing here is that it's getting the green light for public distribution, not just internal use. And it's interesting that they chose Runway and not somebody like Adobe or OpenAI or whoever. Why Runway? Is it because of quality, or was there some back-end deal, or what have you? Yeah, I mean, I agree. Just from a user perspective, Runway has generally been ahead of the curve and had the best and most realistic-looking models out of anyone else. We covered the References feature — that's amazing — and Gen-4, their latest model, has been up there with Veo 2 from Google as the best outputs that I've seen. Our guy Cristóbal was killing it over there. So maybe Fox is just waiting for the audio — I don't remember the name of the speculative audio controls. Yeah, they're on board with that. They're like, what, we don't want to do any work, we just want to talk to the thing. I mean, the funny thing is this has been around for a few months and we're just learning about it now. But I think this is a step forward, and one of the early-to-mid dominoes falling. It's a big domino — let's see if it's big enough to push the other dominoes. And also, this is not something that you advertise. This is not gonna help Fox's publicity in any way. The only reason we know about it is that it probably just got out and they wanted to get ahead of it. Yeah, it seems like there was a screenshot or something from their FAQ that came out. I mean, like we said, AI is something everyone is using, or probably using, for sure, and no one wants to talk about it because there's nothing but negative backlash against it — a hundred percent, and usually not for a good reason. How could you not be testing with AI if you are a studio in this climate, in this downturn, with the industry the way it is, with budgets the way they are? How could you not at least have a few engineers that are just pushing this? Yeah, you gotta understand it, or you'll be left behind. And so whether you use it or not, whether you internally make the decision to use it, or whether there's a legal clarification on whether you can use it — when that happens, you don't wanna be starting from zero and being like, okay, well, now we're allowed to use this AI thing, let's figure out this AI thing. You definitely wanna be at the point where you know the ins and outs of the AI thing, so that once you get the green light — I mean, look, it's not like this is magically gonna disappear, like there's gonna be some ruling that the AI thing is, yep, I guess not gonna work out, sorry everyone, let's pack it up and go home.
Like, that's not gonna happen. No. So, you know, there are ways to train models — and we'll talk about this in the next story — on completely cleared data and stuff. So there are ways to work with AI and completely eliminate the debate or issue of, is the AI that we're using trained on commercially cleared data? Because yeah, we know there is a way to do that. A hundred percent. And from the public sphere — stepping back as a regular American and just looking at where AI is really coming in hot — it's our industry, M&E, and automotive, with all the autonomous vehicles. Yeah. You don't see fintech being impacted too much. Well, I mean, Marc Andreessen just said that the one thing AI can't replace is venture capitalists. Oh, of course, of course. It can replace everything else, but not venture capitalists. Yeah, because he's a venture capitalist: it'll replace everything else that I don't do, but it can't replace me. I would say the complete opposite. I think venture capital is a set of decisions that could be handled much better by an emotionless, highly rational AI analyzing a bunch of data and making a yes-or-no decision — and how much money to invest, and the risk tolerance. Seems like a prime use case for AI. I think he would know that better than anybody else. He's not stupid. Yeah. I mean, the only thing AI maybe doesn't do right now is the human interaction of knowing about the deals, or whatever. I think he's just trying to keep his employees cool and calm: you're not gonna be replaced, I promise. Which brings us to our next topic: the state of AI use. I wanted to talk about this because we hear a lot of hype. We cover a lot of new tools. We cover dominoes falling. What are the actual capabilities of AI today, and how are creative technologists actually using it? And I have a little bit of insight — I was just on a project where we tested with AI. We didn't get the clearance to use it, but we certainly got to a point where it was like, oh, this is legit. It's gonna save us a lot of time and money and get us a better output. Can you talk about what you tested, in vague terms? Yeah. It was a video generation tool that was really good at the thing we were trying to solve. We were trying to generate motion graphics, and motion graphics, for a motion graphics artist, is very time consuming. Of course, they'll have way better control over it, perhaps higher quality than something AI can generate. But we were literally given like eight days to create like 30 minutes of motion graphics, and there were two artists. Yeah. And the first thing they did was turn to some off-the-shelf video generation tool, kind of like a Runway or whatever, and the stuff came out really good. And then they actually went into After Effects and modified the AI-generated stuff and layered it with stock footage. Right. So they used the AI output as a base layer, and they got to a good place. And then we brought it to the studio, and the studio's like, well, don't use that tool — we're kind of talking to these guys (me referring to another AI company) — why don't you try their tool?
And we're like, okay, let's get on a call with them. And the call went really well. We were just about to convert over to this tool that is, quote unquote, studio approved, and then the studio comes back, changes its mind, and says, actually, hold that thought: no AI is gonna be used on this project. Okay. Even though it was gonna be just for motion graphics? Correct. Yeah. Okay. And we were like, sorry guys, you're gonna have to work your butts off. So those two motion graphics artists just went at it for like the next six days. Okay, so the solution ended up being — they didn't cut anything down; they just had to remove all the AI elements. Right, you had to remove the AI stuff, but the initial issue was that you had about 30 minutes of graphics to create with two animators, and so the solution ended up being: make the animators work nonstop. Correct. Did you cut any of the graphics down? I would argue the graphics were compromised in quality because of that incredible crunch. Triage, yeah — just get something done. Not the best work, but something. Okay. Yeah, so that was my story. And then, going back — I have a really good friend who's also in the AI world and consults for a lot of studios, doing AI R&D and sort of test-bed situations for them. And we compared notes: how do you use AI for the consultation work that you do? I do it this way. And I was like, this is what I have experience with. And a lot of that stuff lined up, so I want to share some of it with you. I think the most obvious one — you'll hear this from everybody — is that AI is just a tool. It's not the entire thing. It's part of your arsenal, and the other 90% of your arsenal is stuff you already know, like Nuke, like Maya, like Unreal. What I've always been complaining about too is that AI is such a broad term, and I feel like we just need better, more specific lingo around it. Because it could be so many things: it could be something from Runway — type in text, get a whole shot out — or ElevenLabs or Respeecher, that's also AI, or Move AI is AI, right? Or it's, I want to go to another site and just generate a 3D object that I'm using in my Unreal Engine scene — I just need something specific to spin up. There are so many different use cases, and so much nuance gets lost. Yeah. I think in this case I'm talking specifically about generative AI — so text to image or text to video. And what I think a lot of people are realizing is that there is no way around the fact that you're gonna have to custom train and fine-tune foundational models to get what you want. So it doesn't matter how incredibly good you are at prompting, or if you're using a prompt engineering mechanism, where you put your prompt into a prompt model and that prompt model generates almost machine talk for the video generation. Yeah, I've made a project with a prompt rewriter: I give it a very crappy prompt and it rewrites it into a better prompt for whatever I'm trying to do. Prompt structuring is gonna help, but it's not gonna get you what you want.
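For what it's worth, the prompt-rewriter idea described above is easy to sketch. A minimal example, assuming the OpenAI Python SDK, with an illustrative model name and a placeholder system prompt — not what any particular studio or tool actually runs:

```python
# A minimal sketch of a prompt rewriter, assuming the OpenAI Python SDK.
# The model name and system prompt are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def rewrite_prompt(rough_idea: str) -> str:
    """Expand a rough shot idea into a detailed text-to-video prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "You rewrite rough shot ideas into detailed prompts for a "
                    "text-to-video model. Describe the subject, camera move, "
                    "lens, lighting, and mood in one paragraph."
                ),
            },
            {"role": "user", "content": rough_idea},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(rewrite_prompt("a guy walking down a rainy LA street at night"))
```

As the conversation notes, this kind of prompt structuring helps, but it doesn't replace fine-tuning when you need a specific subject or brand element.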
Mm-hmm. Like, if you are Coca-Cola and you need your cursive Coca-Cola signature logo, there's no way around it other than fine-tuning an image generation or video generation model with a ton of the Coca-Cola logo, and then attaching it as a LoRA. Mm-hmm. And we've talked about LoRAs before — low-rank adaptation models. It's basically a mini model that latches onto the big model. So if you think of the big model as a giant ship, a LoRA is like the tugboat — a tiny but powerful thing that's steering it. A custom train on yourself can be from 20 to 50 images of something like a person, a face, an object, a product, a location. So that LoRA model you still have to build on your own. Right. And you could build it in the cloud, you could build it in ComfyUI, or there are other ways to build it. And more and more tools have been adding features where they don't even call it a LoRA, but it's basically fine-tuning — build your own custom object or something. It's like, give us 20 photos of your thing, and then it trains a model. Yeah, it's basically doing this in a very easy-to-use fashion. Absolutely. And now that you're getting into essentially building your own model, you have the same problems the big guys do: how do you create a dataset correctly? How do you condition the dataset, prep the dataset? And then, how many times is that dataset gonna run through the model — each pass is called an epoch — and how many epochs are you gonna need to get this right? And then there's the problem of overfitting and underfitting. Overfitting is when you've trained it so hard that it's just rigid, and all it can output is that Coke logo — you can ask for a car and it'll just output the Coke logo. It's overdone, like a steak that's too well cooked. The other end is, if it's medium rare, it's not giving you the Coke logo — you haven't trained it enough. So you want it just medium, right in the middle. Yeah. I've had this issue trying to do some early LoRAs on myself. A lot of the photos were maybe from an angle, or smiling, so then all the outputs — even if I'm trying to get a different expression — always have this sort of weird smile, because that was all the photos I gave it. Yeah. And on the specific topic of faces, one of the videos I saw that solves that is: take the photos that you give to the LoRA and remove the background — keep it on a white background, or a green or blue background — and that'll actually train the LoRA way better. Oh, just having a clean background. Yeah, exactly — no additional noise to decipher. That's interesting. Yeah. So there are all these tricks that people are finding out.
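A rough sketch of the two ends of that LoRA workflow — prepping a small, clean-background dataset and then attaching a trained LoRA to a big base model — assuming the rembg and diffusers libraries. Model IDs, folder names, the LoRA file, and the trigger word are illustrative, and the training itself (choosing epochs, avoiding over- and underfitting) would happen in a separate trainer:

```python
# A minimal sketch, not a production pipeline. Assumes:
#   pip install rembg diffusers transformers accelerate torch pillow
# Model IDs, folder names, and the LoRA file are illustrative placeholders.
from pathlib import Path

import torch
from PIL import Image
from rembg import remove  # background removal (the dataset-prep trick mentioned above)
from diffusers import AutoPipelineForText2Image

# 1) Dataset prep: strip backgrounds from the 20-50 reference photos so the
#    LoRA learns the subject, not the rooms it was photographed in.
out_dir = Path("dataset/clean")
out_dir.mkdir(parents=True, exist_ok=True)
for img_path in Path("dataset/raw").glob("*.jpg"):
    clean = remove(Image.open(img_path))          # returns an RGBA image
    clean.save(out_dir / f"{img_path.stem}.png")

# (Training the LoRA itself -- picking epochs, watching for over/underfitting --
#  would happen in a separate trainer such as kohya_ss or the diffusers scripts.)

# 2) Inference: attach the trained LoRA (the "tugboat") to the big base model.
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",   # illustrative base model
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("loras", weight_name="my_subject_lora.safetensors")  # hypothetical file

image = pipe(
    "product photo of sks_subject on a seamless white cyc, soft key light",  # placeholder trigger word
    num_inference_steps=30,
).images[0]
image.save("lora_test.png")
```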
The other thing that's used a lot for controlling the generation is structuring. Structuring is just a fancy way of saying you're giving it lines and edges for the generation to stick to. So it's almost like you give the AI a skeleton, and then it fills in the muscles and the fat and the skin. Okay. Yeah. But you have a high level of control over that skeleton, and generally how that's done is actually through 3D. So let's say, in the case of Paul Trillo making that Cuco video: the way they generated that LA street was they built it, I think in Blender, with all the little shops, and the background was just 2D mattes of city and skyscrapers, and they had a camera moving through that 3D environment with the correct perspective and the correct number of buildings. And then what the AI model did was just neural-render over that — kind of like a video-to-video, essentially. And that's the other thing I wanna talk about. I think a lot of the work for professional use is not images. It's not images at all. It's not even image to video. There is a big focus on image as the reference guide — not prompts, not anything else — but at the end of the day, it's about taking some kind of input video and making some kind of output video. Yeah, we've talked about that. I think that's probably gonna be the way things are, at least near term — the next year or two, that's how we get the job done. And then who knows where the AI capabilities are after that. Right. But yeah, some type of inpainting in a video. So that could be shooting with a real actor on a green screen, or some sort of roughly keyed background — yeah, like with Lightcraft or something — and keeping the actor and the performance but changing the background. Yeah, something in that realm, but working with video as the source.
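The "skeleton" idea — lines and edges the generation has to stick to — is what ControlNet-style conditioning does in open tooling. A minimal image-level sketch, assuming diffusers and OpenCV, with illustrative model IDs and file names; a production video-to-video pass over a previs render would be a heavier, per-frame version of the same idea:

```python
# A minimal sketch of "structuring" at the single-image level. Assumes:
#   pip install diffusers transformers accelerate opencv-python torch pillow
# Model IDs and file names are illustrative placeholders.
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# 1) Build the "skeleton": an edge map from a rendered previs frame.
frame = cv2.imread("previz/frame_0001.png")
edges = cv2.Canny(frame, 100, 200)
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

# 2) Neural-render over it, keeping the perspective and layout of the 3D scene.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",  # illustrative base model
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

result = pipe(
    "a rainy LA street at dusk, neon storefronts, cinematic lighting",
    image=control_image,
    num_inference_steps=30,
).images[0]
result.save("neural_render_0001.png")

# A real video-to-video pass would run a conditioned model across every frame
# (with temporal consistency handling), but the control signal is the same idea.
```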
And yeah, with image to video — despite all of these other tools adding prebuilt camera movements and motions and stuff — no one here in the industry wants to operate a camera with a preset. That's not operating a camera; that's not controlling the movement. That's never worked. Even animators working on animated films — between programming the camera with keyframes and using an actual camera tracker, it's a different feel, a different experience. I mean, even in the animation world, you would think there's a walk cycle — every character walks, right? So wouldn't an animator just start with a walk cycle and then modify it? Why animate every single bone in a walk cycle? Well, it turns out, if you want full creative control and a very unique sort of gait and walk, you have to start from scratch. And that's the same approach to cinematography. You don't want this weird punch-in that's preset and pre-programmed. No — I want to build my own and put my own fingerprint on it. Right. I mean, even for regular films, these are cinematographers, DPs — they know cameras, they know lenses, they know film and digital — and they will still do extensive camera testing with every lens and the camera, and if they're shooting film, the different film stocks, different skin tones, different clothing, to test it all out for the look of that film. Yeah. And creatives a lot of times don't do this for obvious reasons; a lot of times it's a self-fulfillment thing. They just want to do this thing because they feel so strongly about it, and it satisfies a creative urge for them. It makes no sense, but it's kind of like the Ron Howard motel sequence in The Studio. I got you on that one. For those of you who don't know what I'm talking about: Addy has just discovered The Studio. Now he's catching up. And I didn't wanna bring up the AI episode that just aired, because it did just air and I don't wanna spoil it if people haven't watched it yet. But next week we'll talk about the AI episode, because I know you'll have caught up by then, and that'll give people enough time to watch the latest episode if they haven't. We gotta throw up some clips of the episode when we do that. Yeah. Okay. So in the practical sense of actually using AI as a tool, what are some of the things you've been seeing? Yeah. One of the tools that I think is slept on is Invoke AI. I just discovered it. Yeah, I hadn't heard of it either. So tell me how you stumbled on this and what has been standing out to you with Invoke. Yeah, so using something like Runway, it's a fairly easy-to-use web UI. You upload some references, you can prompt the crap out of it, and there are even a few presets — Runway now has, I think, the punch-in-the-face preset? No, that's Pika. Okay, Pika is the one with presets galore — like the cake face or the squishy face. Exactly, that's all Pika. I saw somebody make a beautiful montage of a bunch of politicians getting punched in the face, and I was like, this is what it was really built for — UGC that can get a lot of clicks. The meme factory. The meme factory, yeah. But in the real world, in our professional world, that stuff really doesn't have any high ROI. The way Invoke does it is it brings it into a very familiar ecosystem of tools and workflows. Most of us in the industry know Nuke. We know Houdini. Also DaVinci Fusion. And those three tools are all node-based — Resolve as well. So creating a node-based tree that's easier than ComfyUI is one of the things that Invoke does. The other thing Invoke does really well is that when you generate an image, you can do inpainting in layers. You can mask portions of the image into layers, and each layer ends up being highly modifiable on its own — very similar to Photoshop or After Effects or any layer-based system. It seems so obvious, but I still haven't seen something so elegantly built, and I was really impressed. I think the Invoke CEO — if I can remember his name, Kent Keirsey — goes through a YouTube video where he actually uses his own application to modify something. Yeah, the YouTube channel has a bunch of those — he runs a bunch of tutorials on his YouTube channel using Invoke, and he gets it. Yeah, because one of the examples was him taking a very generic AI-generated image of a guy — a fashion model with a leather jacket — and there was a lot of skin on his chest and his face, and it just looked like a plastic doll. Right, one of the common issues with AI output. Yeah, and that goes into the AI slop category, right? So he gets it. He's like, look, this obviously isn't gonna work — how do we bring in skin detail and correct that skin? And then he goes through layer by layer, adding pores and fine details to the skin, taking off some of the specular highlights, and all the stuff that a professional artist would do.
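The layer-and-mask workflow described above sits on top of a standard inpainting primitive: mask a region and regenerate only that area. A minimal sketch with diffusers — illustrative model ID and file names, and not Invoke's actual internals:

```python
# A minimal sketch of mask-based inpainting. Assumes:
#   pip install diffusers transformers accelerate torch pillow
# The model ID and file names are illustrative; a dedicated inpainting
# checkpoint would usually give better results than the base model used here.
import torch
from PIL import Image
from diffusers import AutoPipelineForInpainting

pipe = AutoPipelineForInpainting.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # illustrative model ID
    torch_dtype=torch.float16,
).to("cuda")

source = Image.open("portrait.png").convert("RGB")   # hypothetical generated portrait
mask = Image.open("skin_mask.png").convert("L")      # white = area to regenerate

fixed = pipe(
    prompt="natural skin with visible pores, soft diffuse light, no plastic sheen",
    image=source,
    mask_image=mask,
    strength=0.6,               # how far the masked region is allowed to drift
    num_inference_steps=30,
).images[0]
fixed.save("portrait_skin_fixed.png")
```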
So I think it's a really neat thing to see when people from our industry, who actually do the work that we do, use AI to generate something that's actually of caliber. Yeah. Any other controls stand out? I mean, I know one of their big things, checking out their website, was data and IP — both ownership and input — a big focus on using your own models, or using LoRAs, or using models that are trained on already commercially cleared data. I think a lot of AI companies are big on security now, more than ever before. I'm not a big internet security expert, but I keep hearing SOC 2 a lot. It's a security standard, and these guys are SOC 2 compliant — as, I'm sure, something like AWS would be as well. Yeah, and that's a sort of security standard, but I think it's the standard you have to meet to work with a lot of enterprise companies. It's considered, I think, the bare minimum, and then there are standards built on top of all of that. But to be in the conversation — to be a viable tool for these companies — you can't just have three servers in your garage connected to the internet. That's not safe, you know? So there's a big cost associated with being security compliant, and these guys have gone through those steps. Yeah. Do you see anything with the outputs, or anything it does differently, either with color space or just what it's able to generate? Yeah. So one of the things — this came up even in the Beeble conversation, right? I bring that up because of the NAB interview we did with Beeble, which we've covered in previous years: a super cool tool where you can upload your video and then relight it. They built a web platform — you upload your video, do your edits, and then they give you the video back: either, I believe, an H.264 video, which is not the most usable for professional work, or, I believe, an image sequence, which is a little more usable. A little more usable, yeah. But there were some comments on that video pointing out that for professional VFX workflows, it needs to work with EXR files and also be in ACES color space, and that a lot of the issue with AI is that the AI is trained on sRGB images. Yeah. So can you break down the issue with the color space of the training data versus the color space that professional VFX pipelines need? Yeah. I think if you come from a consumer world that's iPhone, mobile-based, running H.264 or JPEGs, then technically speaking you're at a very low bar of quality. When you go to a professional VFX workflow, it's what the technology today can do at its best, and that means OpenEXR, ACES color space, perhaps Rec. 2020, exploiting the camera sensor's abilities to the max with the most dynamic range, using a lot of raw inputs. OpenEXR is a file format for images, and I don't know of anything that can hold a higher-quality image than that. So it's obviously much higher quality than a JPEG, much more than a PNG or a Targa — it's the top of the hill. And the reason for that is, I think they internally use something like 16-bit encoding and decoding — something like a hundred megabits per frame. It's insane how heavy OpenEXR is. So when you have an OpenEXR sequence as a video, it can easily be in the hundreds of gigabytes, if not a terabyte. There are no compromises — it does use some kind of lossless compression, but it's as good as it gets.
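Those sizes are easy to sanity-check with back-of-the-envelope math on an uncompressed 16-bit half-float RGBA frame; real EXRs apply lossless compression, so actual files come in somewhat smaller, but the order of magnitude holds:

```python
# Back-of-the-envelope sizes for an uncompressed 16-bit half-float RGBA
# OpenEXR sequence. Real EXRs use lossless compression (ZIP/PIZ), so files
# come in under these numbers, but the order of magnitude holds.
width, height = 4096, 2160       # a 4K-ish frame
channels = 4                     # R, G, B + alpha
bytes_per_sample = 2             # 16-bit half float

frame_bytes = width * height * channels * bytes_per_sample

fps = 24
shot_seconds = 10
feature_minutes = 90

shot_gb = frame_bytes * fps * shot_seconds / 1e9
feature_tb = frame_bytes * fps * feature_minutes * 60 / 1e12

print(f"One frame:             {frame_bytes / 1e6:.1f} MB "
      f"({frame_bytes * 8 / 1e6:.0f} megabits)")
print(f"10-second shot @ 24fps: {shot_gb:.1f} GB")
print(f"90-minute feature:      {feature_tb:.1f} TB")
```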
The reason you need that is because VFX is often done at a pixel level, right? You're painting stuff out at a pixel level, and if you don't have the resolution, you're just gonna paint a big blotch of stuff, or you're gonna render something that doesn't line up, and then things are gonna swim by the time you composite a bunch of those together. On top of that, you need an alpha channel to control transparency, which EXR obviously supports and most things don't. There are so many requirements for professional VFX workflows as far as file formats and fidelity go, and I think it's just somebody coming to Beeble and saying, this is great, but hold your horses when you say it's VFX-ready — actually, this is what we need for it to be VFX-ready. sRGB is a color space that's synonymous with Photoshop. It's similar in color gamut size to Rec. 709, which is, you know, 30, 40 years old — 8-bit, like 256 steps — very small in terms of how saturated the reds, greens, and blues can get. Right now we're in ACES color space, close to Rec. 2020, which is more synonymous with HDR, 10-bit. And then we're probably gonna move past that color gamut into a 12-bit world very soon. Over the last two decades we saw the transition from standard def to high definition, which was also a transition from 8-bit to 10-bit, which was also a transition from standard dynamic range to high dynamic range. And over the next 10 to 20 years we're gonna see a similar jump: we're gonna go to 12-bit, higher dynamic range, with 8K as the standard resolution. And then file formats will have to adjust to this new quality bar. But VFX is already there. Right. So in the working space, obviously, that's to get it to a good spot and then output it in something lower. I mean, you know this better than I do — you want to create your frames at the highest possible quality, so that all you're doing is downsampling to all the different formats. Yeah. But in the VFX world, you need the highest possible sample. Also, you gotta remember, you're seeing this stuff on a movie theater screen at dailies, at something like 30, 40 feet wide — you're gonna notice every single thing. Right. And especially with projection, or if more movie theaters switch to Onyx LED displays — slash Apple Vision Pro, or other beyond-high-definition, high-quality headsets — that's another place where the detail will start showing. Yeah, especially if you use film, because film still has to be scanned, brought into a VFX pipeline, the VFX added, and then printed back onto film. There's still a process. So now you're talking even higher resolution than 8K — you're talking like 16K per frame, something insane like that. Yeah, something like Sinners, on those IMAX 70-millimeter prints.
Now, even if people say, okay, sure, we'll add EXR export support, you're still only as good as your weakest link — all the training data is still sRGB. Yeah, you could definitely throw a JPEG-quality image into an EXR wrapper, but you're not improving anything or unlocking the benefits of anything. So is that gonna be an issue later on, where everything has to be retrained on higher-quality data? Or is there something where AI could be leaned on — there's AI upscaling, is there like an AI...? No, I don't think just operating on the color space will get you that far. The reason OpenEXRs are used a lot in something like Nuke is because they hold HDRs really well. And by HDR — I'm not talking about 10-bit HDR on our TVs — I'm talking about merging different stops of images or video together to create a dynamic range that goes far beyond the camera's capabilities. So a typical camera will do like 17 stops or whatever, right? You go from the lowest stop to the highest, and then what if you do three or four ranges of that across different portions of the light? Now you've pushed the dynamic range to like 25, 26 stops, and to hold that level of dynamic range you need a lot more bits — now you need 16 bits. So what Beeble has to do, if you're using an iPhone, is take enough images at different exposures and merge them into one giant dynamic range, and that has to far exceed the dynamic range of the iPhone as a single exposure. Then that gets brought into VFX. I'm curious where their models come into play versus the elements you add, because I know if you want to relight your scene, there are some preset scenes you can relight with, but you can also just import an HDRI panoramic image. That's it. And so I think the VFX artists are asking for an export of that same thing — you give it the high-quality data and you get the data back. Yes. Yeah, I know they have some of their own models, but I think their models are more for understanding the face and how the light will fall onto the face.
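Merging bracketed exposures into one high-dynamic-range frame is a standard operation; OpenCV's Debevec merge illustrates the idea. A minimal sketch with placeholder file names and exposure times — the generic technique, not Beeble's actual pipeline:

```python
# A minimal sketch of exposure merging with OpenCV's Debevec method. File
# names and exposure times are placeholders.
import cv2
import numpy as np

# Bracketed exposures of the same scene (e.g. -2, 0, +2 stops).
files = ["bracket_under.jpg", "bracket_mid.jpg", "bracket_over.jpg"]
exposure_times = np.array([1 / 250, 1 / 60, 1 / 15], dtype=np.float32)
images = [cv2.imread(f) for f in files]

# Recover the camera response curve, then merge into a 32-bit float radiance
# map whose dynamic range exceeds any single exposure.
response = cv2.createCalibrateDebevec().process(images, exposure_times)
hdr = cv2.createMergeDebevec().process(images, exposure_times, response)

cv2.imwrite("merged.hdr", hdr)   # Radiance .hdr; EXR output needs OpenEXR enabled
print("Merged radiance range:", float(hdr.min()), "to", float(hdr.max()))
```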
But also, aside from people specifically — other AI elements or outputs — are they gonna have to retrain the models in the future on 4K, on higher-resolution, higher-bit-rate data? A hundred percent. I think you're gonna see, like, a Runway Regular and a Runway Pro, and the Runway Pro will train on way less data, maybe leveraging the big foundational model's knowledge, but then output at 10-bit. Right — or it's like the DeepSeek future, where they figured out how to train models on way less data. They distill from the main model, but then train essentially a high-fidelity LoRA, something like that. Yeah. These are exciting things to look out for, and these are the big issues to figure out for wide adoption. The face-smushing presets are cute, but they don't solve issues in the professional world — although those face-smushing presets are where the money is made. Yeah, and our stuff is where the money is not really made, so let your memes support... support the arts. Thank you. Thank you for the face-smushing, Pika. Alright, and then our last story: Natasha Lyonne is making her directorial debut, but with an AI film. Yeah. I mean, look, she's been directing for a minute now. I know her from American Pie, way back — like, I'm that old. But Russian Doll was a big — Russian Doll is great. Yeah, a big hit show on Netflix. It used a lot of virtual production, like all the subway scenes. Oh really? Okay. Yeah. So that was the headline, and obviously everyone gets wrapped up in the AI of it. And then I also saw a photo going around, because she was on the picket line for the WGA strike, and it was like, what were you striking for? Yeah. But the film is called Uncanny Valley, which — perfect. It's a film that seems to lean heavily into augmented reality, AI, immersive video games, and so they're leaning into AI as a creative tool in the storytelling, combined with live action — she's actually starring in it — and it's not gonna be a hundred percent AI generated, the kind of "AI-generated film" everyone always gets so focused on. Yeah, exactly. And it also seems like it's meta, commenting on AI. I see so many AI films online that are just playing back at, like, 12 frames a second — all slow motion with a lifeless face, but the mouth is moving, and the eye contact is never to the camera. I'm like, God, stop. So I'm so glad somebody who actually makes real movies is working on an AI movie. Yeah, and I'm really curious how they're gonna blend it with a traditional pipeline and production, like we've been talking about. And one of the companies behind it is Asteria, which we've talked about before — and I didn't realize she's actually a co-founder of Asteria. Yeah, I did not know that either. Okay. So yeah, I'm excited about this. I'm curious. Yeah. I feel like the focus on the AI is always a distraction, you know? It's crazy — Everything Everywhere All at Once: everyone loved the crazy effects, and that was an early user of Runway. They were just so early to it, before anyone really knew what was happening, but they got it out the door. No one is calling this movie that won Best Picture "AI slop" or any of that crazy stuff. They used it as a tool in their pipeline to lean into the weirdness of it all, for sure. Which, for a film called Uncanny Valley — if you're in the AI space at all, you're well aware of this term — it just seems like it's gonna lean into its weirdness, and with some extremely creative people behind the project, I'm just excited about it. To refresh our viewers on what the uncanny valley is: it's a visual representation — a graph — of how pleasant or unpleasant it is to look at somebody's face. The more appealing and lifelike someone looks, the higher the graph goes; then all of a sudden, if you're looking at something artificial, it drastically drops into this valley, where you're absolutely appalled and repulsed by it.
And the uncanny valley happens a lot with digital humans, especially if the digital human faces don't have enough texture and deformation in them, or if the eyes are just missing something — it's typically the eyes. Something's off; you usually can't quite put your finger on it, but you're like, I know there's something weird about this. Yeah. Evolutionarily speaking, we are so fine-tuned to look at faces. We look at faces every day. Not only do we look at faces, we read expressions from them, and when we see something artificial, our brain instantly rejects it. That's what the uncanny valley is. Yeah. So for them to lean into that, harness AI as a tool, and be extremely creative, amazing storytellers — I'm excited about this. Me too. And again, the AI takes the headline, but for all the wrong reasons. I get that she's able to make this film, and most likely it's gonna be pretty cool, at least as an experiment at the, you know, tier-one level — I'm sure they're gonna be funded well for this. My question is, how does it get distributed? Does it just roll out into theaters like every other film, or is this gonna receive special treatment because it's, I don't know, an oddity or something? It's an oddity, right. Yeah, I don't know. I'm guessing it's probably gonna go the festival route and then try to get acquired like any traditional independent or lower-budget film. What about, like, Spy Kids 4D — how that rolled out into theme parks or something, because you need the shaking seats and all that? I don't even remember that. Yeah, but that was an oddity of a film, right? Because it just used a different type of technology. So I'm wondering what's gonna happen with this one. I don't know. Yeah, we'll see. We'll keep following and tracking it as it progresses. Maybe we can go on set. On set is just a bunch of dudes on computers. Yeah — they are filming real people; it's not completely AI generated. Fair enough. Alright, good place to wrap it up. Yeah. Alright. Links for everything we talked about, as usual, at denopodcast.com. I got a message from a friend who loves the show and said he left a review on Apple Podcasts — oh, thank you. And we saw the Spotify review number go up this weekend, so another one would be great. These are literally being built one by one, and all of them help us tremendously. So thanks for leaving the review — we appreciate the love. We'll catch you in the next episode.
