Denoised
When it comes to AI and the film industry, noise is everywhere. We cut through it.
Denoised is your twice-weekly deep dive into the most interesting and relevant topics in media, entertainment, and creative technology.
Hosted by Addy Ghani (Media Industry Analyst) and Joey Daoud (media producer and founder of VP Land), this podcast unpacks the latest trends shaping the industry—from generative AI and virtual production to hardware and software innovations, cloud workflows, filmmaking, TV, and Hollywood industry news.
Each episode delivers a fast-paced, no-BS breakdown of the biggest developments, featuring insightful analysis, under-the-radar insights, and practical takeaways for filmmakers, content creators, and M&E professionals. Whether you’re pushing pixels in post, managing a production pipeline, or just trying to keep up with the future of storytelling, Denoised keeps you ahead of the curve.
New episodes every Tuesday and Friday.
Listen in, stay informed, and cut through the noise.
Produced by VP Land. Get the free VP Land newsletter in your inbox to stay on top of the latest news and tools in creative technology: https://ntm.link/l45xWQ
Adobe Acquires Invoke.ai + Launches Custom Model Service
Adobe makes major AI moves with their new Foundry service and Invoke.ai acquisition. Addy and Joey break down how Amazon's 'House of David' series used AI for 253 shots, saving months of production time and potentially changing how TV is made. Plus, Runway's strategic pivot to industrial applications, and key insights from LA Tech Week where top executives revealed their AI production strategies.
--
The views and opinions expressed in this podcast are the personal views of the hosts and do not necessarily reflect the views or positions of their respective employers or organizations. This show is independently produced by VP Land without the use of any outside company resources, confidential information, or affiliations.
Like, dude, I'm not sleeping on a single couch ever again. I'll go get a hotel. Yeah. I need my eight hours, but I'll see you later. Yeah, exactly. I can't, I can't podcast like this.
Alright, welcome back to Denoised. Addy's gonna drive this one 'cause, uh, I'm busy. I'm doing a livestream with Adobe this Thursday, so if you're around Thursday between 9:00 AM and 1:00 PM Pacific Time, I'm gonna be livestreaming about Firefly Boards. But I've been busy, and so Addy's been staying up to date with the latest news.
So he's gonna fill me in on what's happening. Addy, what have you been seeing? What should we talk about first?
So usually, you know, just a little peek behind the scenes. How this usually works is Joey and I text each other all day long with, Hey, did you see this? Did you see that?
And then over the last few days, I just haven't been getting any texts from Joey. Joey, we got an episode, shoot, let's go. And, you know, I've done the same—I get into projects and then Joey's texting me. So there have been some big announcements. Adobe had a two-pronged announcement: one is they have this Foundry where you can build custom models. Mm-hmm. We're gonna get into that. Second, Adobe acquired an AI startup called Invoke.ai; we'll get into what the implications of that are. Runway now has full-weight training for their models—we'll get into what that really means and what the strategy behind it is. So we're gonna speculate on that.
And then finally, last week was Tech Week in LA, and Joey and I attended one of the events together. Joey went to another really interesting event on Monday that we're gonna get into. So, yeah. Alright, here we go.
All right. You wanna start, uh, you wanna talk about Adobe's Foundry service?
Yeah. So let's take a step back. This is 2023. We're all kind of scratching our heads as we're looking at Midjourney and Firefly, thinking, could this be the future of a lot of the things that we do in computer graphics, or manually, today? Three years ago, generative AI was certainly a promise, but it certainly hadn't unfolded into the ecosystem that we have today.
And I think two years from now, we'll be saying the same thing. It'll just further proliferate. So Firefly was one of the earliest models to hit the scene. And the reason it was a big deal is because it was commercially safe. Adobe trained it on licensed data only. And this was around the time when, frankly, you had Stable Diffusion and Midjourney, which trained on publicly available data, which loosely means anything you can find on the web.
Scrape it, scrape it, the vacuum cleaner.
Yeah, exactly. Now, there are pros and cons to both. We're just gonna put the ethical stuff aside—I'm gonna pin that right here, 'cause that could be a whole conversation and a whole episode. So of course, ethically it's not cool to grab other people's data. A lot of the AI companies will argue that if it's publicly available, then it's free for us to take. And so there's a murky line there. The pro is, when you have a ton of data—and I'm talking about, think of every image off of Google image search, millions and millions and billions and billions of images, every little article that Google ever indexed, right? And I keep going back to Google 'cause a lot of AI companies in the past may have used Google search to extract a lot of their data. What that gets you is a model that has a really good understanding of world context, right?
It can differentiate between a ginger ale and a Coca-Cola. It can be very granular, very good at real-world problems, versus one that is just trained on licensed data. Now, if you think about just licensed data, think of Shutterstock, think of any stock photo website. You're talking maybe a million images, perhaps 10 million images tops. So I think when you don't train on enough images, you don't have a good understanding of the world. And so even though you're ethically safe, commercially safe, your model is just not as good. It's a trade-off, you know?
And then if you have a model that trained on publicly available data and it's not commercially safe, then how do you monetize it? How do you make money from it? Because no serious company will want to use that to make movies or whatnot. So this was the Adobe Firefly backdrop that I wanted to present.
Now, in today's world—and I'm purely speculating here, I don't work for Adobe and I don't have any friends on the Firefly team, so I don't have any insider knowledge—I'm speculating that Firefly models are just not good enough when it comes to quality and fidelity.
A lot of times when big companies, like a Coca-Cola or Pepsi, use AI for a commercial or a printed ad or what have you, you won't even know that it's AI-generated, because the quality bar is set that high. And on top of that, manual human intervention, like Photoshopping, goes into it to make that AI generation as believable and lifelike as possible.
The problem is that Firefly custom model training, like the LoRA training and all the stuff Firefly still offers, wasn't getting you to that quality bar. So in the past couple of months—and I think we covered this on one of our previous episodes—Adobe opened up their ecosystem to other models, right?
So now I think Google image models, ChatGPT image models can be found on it.
I'm now way more familiar with this 'cause I've been knee-deep in Firefly and Firefly Boards. They added a lot of models, but not every model. So the ByteDance models are not there, Kling is not there, Wan is not there. But the big ones—Luma, Runway, ChatGPT, Google—they're all there, their image models and their video models. But yeah, we did talk about this. It sounds like a connector.
Like the open weight models, the open source models are not there.
Right. Not Wan 2.2, and not Seedream and Seedance, which are not open weight—as we clarified in a previous episode, they're ByteDance's models. Which are rather good, but they're not there. I will say, having messed around a bit more with Firefly—I used it when it came out probably a year ago and was like, what is this?—and then trying it now with Firefly Boards...
It is way better. It's not on par, but it is way better—especially if you're using it in the applications they built it into, like Photoshop with the Harmonize feature, where it blends the objects together and matches the lighting, or inpainting and stuff like that.
It's a lot better than when I messed with it, I don't know, a year ago. But still, I would just keep going back to Nano Banana, like, all the time.
I don't blame you for that. Nano Banana is just in a league of its own, right? And then you have the open-source models in this small other league, and then you have the commercially safe Firefly thing.
Mm-hmm. I think in a couple of years they're all gonna play on an even field, but kudos to Adobe for course correcting and bringing that product into the fold of what modern performance, modern quality should be.
Yeah. They should turn Firefly into a platform and not just a model. So Firefly is their AI platform to do whatever you want with whatever models they connect to, not just their own models, right?
Right. So now that they've brought third-party models into their Firefly ecosystem, and they have their own model—what do you do with it? How do you actually monetize it? How do big brands use it?
And all of this stuff that I'm talking about, honestly, doesn't have anything to do with film and TV, because the requirements for our industry—even down to image format, right? You're talking 16-bit EXRs and TIFFs and Rec. 2020 color space—our requirements are so high up here that AI models in general just can't compete in this space.
So all of the use cases I'm talking about are primarily e-commerce, product photography, some type of social media campaign, vertical ads, and things like that. And there's a huge market for that—AI is already being used for a lot of it now. So Adobe wants to recapture that market, because they had it initially, and then Firefly wasn't good enough.
So a lot of people migrated away from Firefly, and now they're trying to recapture them with third-party models and a solid ecosystem.
Yeah. Okay, so the big news with this is they launched a service where they'll train a custom model for your company, for enterprise clients. I guess this was an announcement, but I remember at TIFF, Hannah Elsakr, who's a vice president of Gen AI at Adobe, was on a panel that was run by AI on the Lot.
And Moonvalley was there as well. And I remember she mentioned, we can train custom models for you. It's not cheap, but we train custom models for customers. So hearing this, I was like, oh, I guess they've been doing this for a while if you had a lot of money, but now it's announced as an actual service—if you have a lot of money.
Yeah, exactly. So I think in the past, the way they dialed in the custom models was it went to a bunch of AI engineers' desks, and they manually fine-tuned and weight-trained the model—Firefly, in that instance. So they were finding that maybe that cost was too high on a use-case-by-use-case basis.
So what they're doing now is turning this entire thing into an automated platform. I haven't used it. My guess is you upload a set of your images, it builds a LoRA or does a weight training, and then you do inference based on your quote-unquote custom model.
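To make that speculation concrete, here's a minimal sketch of what "building a LoRA" means, using the open-source peft library on a toy PyTorch module. The module, sizes, and target name are placeholders for illustration—this is not Adobe's actual Foundry pipeline:

```python
# Toy LoRA setup: freeze a base layer, attach low-rank adapters, and
# show how few parameters actually get trained. Names are hypothetical.
import torch
from torch import nn
from peft import LoraConfig, get_peft_model

class TinyBlock(nn.Module):
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(1024, 1024)  # stand-in for one big foundation-model layer

    def forward(self, x):
        return self.proj(x)

base = TinyBlock()
config = LoraConfig(r=8, lora_alpha=16, target_modules=["proj"])
lora_model = get_peft_model(base, config)  # freezes base weights, injects adapters

trainable = sum(p.numel() for p in lora_model.parameters() if p.requires_grad)
total = sum(p.numel() for p in lora_model.parameters())
print(f"trainable: {trainable:,} of {total:,} params")  # a tiny fraction of the base
```

The point of the exercise: a LoRA only touches those small adapter matrices, which is why an automated "upload your images, get a custom model" platform is plausible without retraining the whole foundation model.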
Did you see something that made this sound self-serve? 'Cause it just sounds like it's a product called Foundry, but it's for enterprise only, which means mega money.
I'm purely speculating. I haven't seen the inner workings of it.
It still sounds like a very hands-on, hands-involved service where they're gonna be very involved in training a model for you.
And also, did you see anything about whether they're bringing other models into this training? Because it also seems like it's still being built off Firefly.
I'm also speculating on that. I'm just connecting the dots from their previous announcement and the next article that I'm gonna get to—the acquisition.
So I'm just extrapolating a clear line.
Okay. Cool. We'll go with that.
So Invoke.ai is an AI startup; we covered it on the podcast in the past. It's a node-based workflow that is very model agnostic, in the same way we talk about Freepik, right? You go on Freepik, you can use Kling, you can use Veo, you can use Nano Banana, and it's a seamless experience. You just pick your models and go do your thing. Invoke takes it a step further and is more technical: you can start to build node-based arrangements, sequential workflows. So if you have, for example, image references that go into an inference node, those references can themselves be trained on and generated, and then everything flows sequentially through a nice little node diagram.
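For anyone who hasn't used a node-based tool, here's a tiny sketch of the underlying idea—nodes declare their inputs, and a scheduler runs them in dependency order. This is a generic illustration, not Invoke's actual engine:

```python
# Minimal node-graph executor: each node is a function plus a mapping of
# input names to upstream nodes; graphlib resolves the execution order.
from graphlib import TopologicalSorter

def load_ref(inputs):
    return "reference image"

def generate(inputs):
    return f"generation conditioned on [{inputs['ref']}]"

def upscale(inputs):
    return f"upscaled({inputs['gen']})"

# node name -> (function, {input name: upstream node name})
GRAPH = {
    "ref": (load_ref, {}),
    "gen": (generate, {"ref": "ref"}),
    "up":  (upscale,  {"gen": "gen"}),
}

deps = {name: set(ins.values()) for name, (_, ins) in GRAPH.items()}
results = {}
for name in TopologicalSorter(deps).static_order():  # upstream nodes run first
    fn, ins = GRAPH[name]
    results[name] = fn({k: results[v] for k, v in ins.items()})

print(results["up"])  # upscaled(generation conditioned on [reference image])
```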
So Invoke's strength is not just their spectacular node workflow—the demos that I've seen are really clean—it's also the fact that they support all the models, the same way Freepik does. And so if Adobe acquires Invoke, it automatically gives them a leg up on providing support for all of the third-party models that Invoke already services.
Right. Well, it's not an if—the news is Adobe did acquire Invoke, and that's their front-page splash now: Invoke is joining Adobe.
Alright. Yeah. So I googled it this morning, "Adobe acquires Invoke," and Google Gemini came back with, that is inaccurate, this is a false rumor. And I was like, really? Lemme just go directly to the source. So I had to go to their website.
Yeah, so it's official: Invoke is acquired by Adobe. This is a big shift in Adobe's direction, and I think the course correction here is something that is definitely gonna benefit them if they pull it off correctly.
There's a lot to be said about integrating a completely new company and their technology into the big, giant ecosystem that Firefly already is.
I mean, yes. And we've had this debate before of, do we ever see Adobe going down the more technical product rabbit hole of a node-based system, like a Nuke or a Fusion or ComfyUI?
So do you feel like they're just gonna merge the Invoke product into Firefly? Is it more of an acquihire to bring the AI talent on board? What do you think about this?
I mean, the strength of Invoke is that it's all web browser based. So imagine running Comfy in a browser—I guess a Comfy Cloud type thing. And it's way less technically jarring than Comfy. Comfy tends to be, even for the two of us who use it from time to time—we're learning new nodes and we're like, wait, that node doesn't connect to this; wait, there's a dependency. Anyway, Invoke is way cleaner. It's for the average creative who is just technical enough to get away with using Lightroom, Photoshop, After Effects, and all the Adobe products.
I think that level of proficiency will be enough for them to jump into Invoke. I see a really nice integration path into all of the web-based stuff they already have. Adobe Express is a good example, right? That's a native web browser-based platform they already have. And Firefly—the Boards thing?
Yeah, the one I'm gonna do a livestream on. That's another one. There have been times on the Boards where I'm just like, man, connecting a couple of dots right here would be a lot easier. Yeah.
That's it. Yeah. So this is that missing link, and I think it's a really smart acquisition. We have just yet to see how it'll come to fruition.
Yeah, for sure. Excited to see where this goes. And even Invoke's website says they'll be shutting down their existing online service at the end of this month—so, like, next week. So if you're a current user, that's kind of rough, 'cause you're gonna have to find something else until the product relaunches, and then maybe bounce back to it.
Yeah. If you are using Invoke or Firefly, hit us in the comments. Let us know if we're accurate here, or how far off we are. We would love to hear from you.
Yeah. I would say I've not used Invoke. I mean, the other thing is you could host it locally too, right? Was that a thing? Was that a unique feature, that you could download the code? I'm not sure. It says, "a creative engine for locally hosted generative media models." So I also wonder if that type of stuff will stick around, or if that's adios.
If you're an Adobe competitor listening to this—let's say you're a Google or whoever—the good news is there are a couple of Invoke competitors still out there, and I'm sure they'd be happy to be acquired by large companies. The two that come to mind are Weavy, W-E-A-V-Y, and LTX Studio. I used to put Invoke, Weavy, and LTX into the same category: a really friendly, node-based user experience that's powerful enough to get a lot of extra creative juice.
LTX I haven't used. Oh, I'm sorry—FloraFauna.
Flo... Flora. Yeah, Flora. Oh, it's FloraFauna. I thought you were making a joke. The product's called Flora, but their website's called FloraFauna.
I think flora.ai is taken, just like vpland.com is taken.
Gotta win the .com race again. It's a race. So there's ComfyUI, LTX, and
Freepik.
I think LTX and Freepik compete, right? And LTX has their own models. I remember when they first launched, I think they were trying to be more media and entertainment—take scripts, turn them into a storyboard, beat-sheet kind of thing. And I think they've evolved into more of just, we'll be a platform, with a couple of their own models.
Exactly. The AI ecosystem is so rich and so full of these little companies. I tell you this all the time: I totally missed the dot-com era of small startups in San Jose, all the little 20-, 30-person companies. I was too young then, and I didn't live in the area, but I feel like I'm living through it now, right in the picture.
But too old to be like, I'm gonna sleep on a couch and eat pizza and run a startup. Yeah.
Like, dude, I'm not sleeping on a single couch ever again. I'll go get a hotel. Yeah, I need my eight hours. But I'll see you later. Yeah, exactly. I can't, I can't podcast like this.
Yeah. So there are some good updates from Adobe, and I'm excited. I'm curious to see where Invoke and a node-based editor go, 'cause using Firefly Boards a lot this past week, I was like, I wanna connect some stuff. It'd be nice.
Yeah. And I'm curious how much of this new stuff will be included with a Creative Cloud subscription, 'cause that's when I'll jump into it—some credits.
Yeah. But it is a separate subscription, which I've also discovered. There's Adobe Creative Cloud, and I think they give you some amount of credits, but then there's a separate—whatever they call it, Firefly AI or something—subscription. Mm-hmm. That's another add-on that gives you additional credits, but it's another thing you gotta pay for.
Yeah. It's like an add-on to Creative Cloud.
And you don't have to have Creative Cloud to do that. If you're just like, I like Firefly and Firefly Boards and I just wanna do the AI stuff, I think you can get a subscription to just the AI stuff, separate from Creative Cloud.
Nice.
But yeah, it's a different system. At one point, my goal is to do a mapped diagram, 'cause the other thing with Firefly Boards, with Freepik, with all of these AI platforms, is they have a credit system, and the credit system
converts into how that company figures out what running a Veo 3 generation costs versus a Ray3 generation. One of my side projects is to build out a board to price compare—converting all these credits into dollars to figure out, okay, whose platform is actually the most cost effective? It's a side project I'm working on.
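The credit math itself is simple once you normalize everything to dollars. A sketch of the side project, with made-up plan prices and credit costs—none of these numbers are real platform rates:

```python
# Hypothetical credits-to-dollars comparison across AI platforms.
PLANS = {
    "PlatformA": {"usd_per_month": 28.0, "credits_per_month": 2250},
    "PlatformB": {"usd_per_month": 35.0, "credits_per_month": 2625},
}
# Hypothetical credits charged for one 5-second video generation.
CREDITS_PER_GEN = {"PlatformA": 150, "PlatformB": 125}

for name, plan in PLANS.items():
    usd_per_credit = plan["usd_per_month"] / plan["credits_per_month"]
    usd_per_gen = usd_per_credit * CREDITS_PER_GEN[name]
    print(f"{name}: ${usd_per_credit:.4f}/credit -> ${usd_per_gen:.2f} per generation")
```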
Can't you get an AI agent to do that now?
I have tried that, and they're not always accurate, so it needs a human to actually verify that the information's not BS.
Well, if you ever figure it out, it'll be a giant poster behind me.
Yeah, we'll put it up. We'll have a ranking of who has the most cost-effective credit system. The best bang for your buck. Yeah, we'll figure it out.
Yeah. So, another episode, another Runway announcement. On the last episode, we covered Runway's presets, which I thought was just a fancy name for LoRA training that's sort of automated.
I think you're overthinking it. I think it's literally just some nice prompts under the hood that are giving you what you're looking for.
Yeah, sure. I think you're right. It's maybe a frictionless experience to get to the thing you need to get to. It's presets; it is what it is. So this is something slightly different. There is a big, massive, labor-intensive way to take a big foundational model, like an image model. So if you take Midjourney's image model, and if it has enough hooks into it, you can actually make it output something completely different that it wasn't trained on. For example, if Midjourney just trained on cartoon data—it's really good at outputting 2D animation, cartoons, and stuff—you could take a million images, or maybe not a million, maybe a hundred thousand images, of live-action footage.
It's called full-weight training. You can full-weight train the model, and now it'll output live-action imagery. But the good news is you don't have to build a model from scratch. So full-weight training is maybe a day's worth of work, whereas building a foundational model could take months, it could take millions of GPU hours, and so on.
Mm-hmm. So it's still a really clever way to squeeze more lemon juice out of the lemon, if you will—just get more out of this foundation model and have it pivot and do something it completely wasn't intended to do. A lot of proprietary closed models don't have this feature, versus open-source models like Wan 2.1, Kling, Stable Diffusion. These guys are quote-unquote open weight, so you can actually take their full model and retrain it on your own.
Runway is introducing something called model fine-tuning, although we've historically thought of Runway as a closed model, because it's only available through their API and it's sort of behind closed doors. They're introducing hooks into it where you can actually do full-weight training. And the reason for that is that now you can apply Runway's video generation—which, Joey and I go back and forth on this; we can argue that it's film and TV oriented, right? All of the demos Runway has shown are adjacent to our world. But what if it's really just good at doing robotics training and replicating the real world? What if it's really good at doing arch-vis, architectural visualization? You can take the Runway foundational model, give it enough imagery of architecture and buildings and interior spaces, and now it'll just output that on its own. Mm-hmm.
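A toy PyTorch sketch of the distinction being described: full-weight training updates every parameter of the base model, while adapter-style fine-tuning freezes the base and trains a small bolt-on module. Runway hasn't published implementation details, so this is just the general idea:

```python
import torch
from torch import nn

def make_base():  # stand-in for a video foundation model
    return nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 64))

x, y = torch.randn(8, 64), torch.randn(8, 64)  # toy "new domain" data

# Full-weight training: every parameter is trainable, so the optimizer
# can pull the whole model toward the new domain (arch-vis, robotics...).
full = make_base()
opt_full = torch.optim.AdamW(full.parameters(), lr=1e-5)
nn.functional.mse_loss(full(x), y).backward()
opt_full.step()  # all weights move

# Adapter-style fine-tuning: freeze the base, train only a small add-on.
# Cheaper, but it can't reshape the model's behavior as deeply.
frozen = make_base()
for p in frozen.parameters():
    p.requires_grad = False
adapter = nn.Linear(64, 64)
opt_adapter = torch.optim.AdamW(adapter.parameters(), lr=1e-4)
nn.functional.mse_loss(adapter(frozen(x)), y).backward()
opt_adapter.step()  # only the adapter moves
```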
Yeah. I mean, again, this doesn't sound like a self-service type of product. This is another enterprise-money, expensive service. It sounds like a service version of what the Runway-Lionsgate deal was, or what they were hoping to do—train the model on, or fine-tune a model on, the Lionsgate media—offered as a service to other enterprise customers that don't have to be M&E. It's interesting 'cause their headline is robotics, education, life sciences, and beyond. So anything where you're like, I need a world model as a basis to modify, to train, for some application I'm trying to do.
I absolutely agree with you.
Yeah. When I say squeeze more lemon juice out of the lemon, this is exactly that. So you've spent millions of dollars training this model, you have this amazing model, and the industry that you're catering to is just not making the money for you—in this case, most likely film and TV, media and entertainment.
Mm-hmm. I don't think we're ever gonna see another Runway-Lionsgate partnership. There's no money in there. It was cool while it lasted, they high-fived each other, but nothing really came out of it. But you have this amazing model, so what do you do with it? You can totally pivot into other industries, anywhere generative AI is utilized.
You go the Nvidia route, you go the Google route—you go into industrial, you go into robotics, you go into autonomous driving, you go into architecture. Those are major lanes where there's a lot of money on the table. So this is a smart move on Runway's part, and I think they're diversifying to increase revenue.
Yeah, I think it's a smart move. Like you said, they already have a world model—why restrict it to one industry? And as we have joked, all roads from world models lead to robots. They're all just trying to build a world model so the robots understand what the world is like.
So the Unitree robots—the most popular robots that you see on the internet—they're now $16,000, and I believe you can buy them online.
So are you buying one? Look, if you guys want to donate some money to
us humble podcasters, we'll have a third host, right? Is this in your Amazon shopping cart right now? Are you just waiting to click buy? Is it on Prime? Can you get two-day, one-day delivery?
I'm gonna wait till one of my neighbors buys it, and then just kind of get some feedback.
And then you just need to buy one, so you have it out with your lawnmower, and it sees your neighbors and, like, waves or something.
You know what, I don't know what I'll make the robot do. I mean, cleaning the bathroom—that's top of the list. Yeah.
What did I just see? Something about how they were paying people to send in videos of them folding laundry, to train robots to fold laundry.
Yeah. So to do that, you need a video model that the robot can train on, and that video model just has to output pictures of clothes and folding laundry all day long, for hundreds and thousands of hours. You could just strap on a GoPro. So for that, Runway would be perfect if it were full-weight trained on folding laundry.
Yeah, yeah. Good use of that.
Yeah. And then you could take that fine-tune and sell it on a marketplace. So if anybody has a Unitree robot, they buy that fine-tune for, you know, a hundred bucks, and now their robot can instantly do laundry folding.
Mm-hmm.
That reminds me of—I don't remember the company name, but this is also probably a good segue into LA Tech Week—I met a company whose service was basically sourcing those types of video clips. Someone's trying to train a model on something, and they need very specific training data that they have a gap in—like someone folding laundry, or someone setting up a microphone—and they'll source those videos for the company that's trying to train the model on whatever specific need they have. It's sort of like a stock footage library, but for AI training.
Oh my god, yes, exactly. Yeah.
Well, you know, we're so familiar with the Unreal Engine marketplace—you can grab anything, you can build any world, you can add any animation. We're gonna see the same thing with physical AI. You're gonna see a robotics secondhand market, like a used-car kind of thing. You're gonna see modified arms and modified legs and extras. It's just like the camera gear ecosystem, right? I could go to Tilta and get something that RED doesn't make, and so on.
Yeah. Alright, so sticking on that tangent and merging into LA Tech Week. Last week was LA Tech Week, and there were a lot of events going on. I only went to two; Addy and I went to one together, but they were really good. The first one I went to was last Monday. Alright, this one's kind of weird, because it was an Amazon event—AWS at Stage 15, their awesome virtual production stage. It was called the Culver Cup AWS event. It was weird because last year was the first Culver Cup, and it was actually a short film competition. I was one of the finalists, so my little film played there.
I remember that.
Yeah, so the name made sense. This year they basically scrapped that entire thing but kept the name of the event. So it was called the Culver Cup again, but there was absolutely no Culver Cup trophy.
There was no competition?
No competition. It was just two panels.
That aside, the panels were interesting. The main one was a panel with the House of David creator, Jon Erwin. House of David is a show on Amazon Prime, produced with Wonder Project. It's a retelling of the biblical story as a series—sort of a Game of Thrones-ish Bible version. I don't know what the budget is, but it's a limited budget. In season one, they did admit—well, not admit, but they did explain—that they used Gen AI for, like, a full sequence. This season they upped the game and used Gen AI for 253 shots in some fashion.
That's a lot of shots.
Yeah. Basically, it was the opening battle scene.
One of the big examples was an opening battle scene—I forgot which battle; it was one of the opening battles in one of the episodes of season two. And it was just a really good example of a lot of the stuff we've talked about put into practice. It was a combo: some shots were completely, fully Gen AI; for some shots, they had a small LED wall set up.
Actually, I believe Vū set the stage up where they're filming out in Europe—I don't remember where—but they set the wall up, generated backgrounds, and then shot the Gen AI backgrounds with live actors. And then a couple of other hybrid approaches, with real-life shots augmented or composited with other AI elements.
I think they were able to do some AI horses and some other effects. So it was all of those elements thrown in together—all of these shots created in different fashions, but cut together it looked pretty seamless. They played the clip, and it's one of those things where, knowing beforehand that some stuff's AI generated, you're like, oh yeah, I can see it. But if you didn't have that preface and you just watched it, it would just be a regular battle scene. It wasn't disruptive—not like, oh, that looks so weird. It's just a regular scene.
I have so many questions, Joey.
I'll try to answer them as best as I can from the panel.
Well, which technologies were used, which models, who were the artists?
Yeah, they didn't get that nitty-gritty about which AI models were used. I'm gonna guess, because it was an AWS Amazon-hosted event, that it's any of the models that are on Amazon Bedrock, their AI hub platform.
That's pretty much all of them, right?
Yeah, more or less. Every model exists on AWS. The reason I was asking is not for image quality per se—clearly the quality was good enough, and they probably massaged it a lot after the generation.
Oh, I do have a mention: Erwin mentioned Midjourney, Runway, Kling, and Topaz for uprezzing.
Oh, okay—that's getting to my second question, which you've already answered here. My second question was gonna be, what about the ethical and legal hurdles? And it looks like, "I called every lawyer at Amazon and just bludgeoned them until they said yes," said Jon Erwin, on getting approval for AI-generated shots.
Yeah, that was what he said. I guess that worked.
I don't know how they internally decided what was cleared and not cleared, or if it was just a matter of what we've discussed here before—focusing on the outputs rather than the training data, and the fact that all of the outputs were just generic battle scenes or Middle Eastern background plates.
Yeah. And in virtual production land, we've talked about generating generic-enough backgrounds on an LED wall without going through the whole Unreal Engine, computer graphics setup—you can just generate it. And we talked about Cuebric, right? You can get parallax out of it and everything.
Mm-hmm. And this is nothing new—this is a couple of years old by now. But the fact that it's being done in today's hot market, with the ethical and legal hurdles—that to me is the biggest breakthrough here.
Yeah, the fact that it was used in final shots that are on a major streaming platform.
Yeah. And if you look at Netflix's El Eternauta, which we covered, I don't know, a few months ago—that was one shot. One shot of a building falling down.
Or maybe six or seven shots, but yeah, a very small amount.
Yeah, a very small amount. And now we're looking at 253 shots across, I'm guessing, an entire season.
I think so, yeah.
A nice little exponential curve, if it ever builds up to that.
Yeah. 'Cause also, the comparison was that season one had just 73 AI-generated or AI-assisted shots. So now it's about a 3x jump in the number of shots AI was used in.
Yeah. The other interesting thing—I'm just curious about this workflow—was he was talking about how he would show up to set and just be generating stuff on set, generating backgrounds in almost real time, right?
Yeah, generating environments in real time.
And then, like, in the morning he'd generate the environments, they'd put them up on the wall, and they'd be shooting scenes they didn't quite have everything generated for the same day, or the day after. Which—you could speak to this more—without the huge timeframe of VAD and building out your 3D environments beforehand, does that make virtual production more viable for cranking through the shots you're trying to do in a fast fashion?
Absolutely. I mean, you're talking about saving weeks, if not months, of time for those 253 shots. And I'm guessing you still need the same number of people, because those shots don't come out right the first time—you're going in there massaging, fixing, compositing. But the overall timeframe's compressed, because you're not having to build worlds in computer graphics.
Yeah. For a numbers comparison—this was from CDC. What is his official title?
I think the head of VFX for Amazon, if I'm not mistaken.
Yeah. CDC is just a VFX GOAT, legend guy—runs Amazon's innovation and their virtual production stage. He was on the panel as well, and he said traditional Unreal Engine environment builds typically require 10 to 12 weeks and cost between $15,000 and $200,000. The team discovered they could build the structural bones in Unreal within a week, then use AI to add photorealism through style transfers.
Oh wow. Okay. Compressing the timeline and budget. So I guess that's for when you need something that's more than just one single insert shot—you need some kind of consistent environment-ish thing.
That's brilliant.
They're still building the world in Unreal Engine, but they're not putting a lot of eggs into that basket by trying to make the render as photoreal as they can. Instead, they're just taking the blocking—putting a camera in the right place, taking that screen grab—and then putting it through AI to get the full photorealism out of it.
Yeah, similar to workflows we've talked about on a slightly smaller scale with Lightcraft Jetset and stuff like that, where you build a rough world, load it into your phone, and then process and composite it later. But in this case, you can build your world, style transfer it, load it onto your wall, and then shoot and get pretty close to final pixel.
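A hedged sketch of what that blocking-render-to-photorealism step could look like with open-source tools—the panel didn't specify what House of David actually used, and the model ID, file names, and prompt here are placeholders:

```python
# Image-to-image style transfer over an Unreal blocking frame using the
# open-source diffusers library. Low strength preserves composition.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

blocking = Image.open("unreal_blocking_framegrab.png").convert("RGB")
styled = pipe(
    prompt="photoreal ancient desert battlefield at golden hour",
    image=blocking,
    strength=0.45,  # keep the Unreal camera/blocking, replace the surface look
).images[0]
styled.save("styled_backplate.png")
```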
Yeah. I think within the next, I'm gonna say, 12 to 18 months—maybe it's being worked on as we speak—we're gonna see the 1899 of generative AI.
Oh, interesting.
I remember the early days of virtual production and LED wall usage—it was exactly like this. Maybe you and I weren't friends back then, we weren't talking about it, but I certainly was saying, hey, did you know that show did that one shot in that one episode on an LED volume? And another guy was like, what? No way, it looked like it was real. Then fast-forward to 2022, 2023, when Netflix did an entire season of the show 1899 on a custom-built volume in Germany.
Mm-hmm.
And every facet of that show was done in the volume—ship exteriors to ship interiors to, spoiler alert, that whole inside thing.
Yeah. So we're seeing the glimpses of, hey, did you know that show used AI here, and House of David did 200-plus shots there? And then it'll just be, no, that entire show was custom built for an AI workflow, and here's the season.
Yeah. Or when it's something that has a very contemporary look, not a very stylized genre look, and it's just like, oh yeah, that was all AI. Because this thing is insert shots, quick cuts, a desert world—easy to dirty up and hide some imperfections. But yeah, it's what you're saying—the next level.
The 1899 team, I believe, came from the show Dark, which was critically acclaimed for Netflix. Amazing show, if you haven't seen it. So those are proper filmmakers who just transferred their skill set to this new technology at the time. I think we're gonna see a similar thing, where bona fide, actual filmmakers who have worked on numerous shows are just gonna pivot, look at this entirely new workflow, and actually figure it out. Yeah.
I'm curious too, because Jon Erwin's other project that's coming out next year is this George Washington—like, young George Washington—biopic or series, I don't...
Oh yeah, I saw the trailer for it. That looks great.
I'm not sure—I don't remember if it's a movie or a series. Anyways, it's coming out next year, which makes sense. And it's, like, ultra violent and super realistic.
Yeah, like a little Bourne Identity. Right.
That'd be cool. I am curious what they used, or if they used Gen AI on that. It's not contemporary, obviously, but unlike House of David—which is in the desert, where it feels easier to hide some of the imperfections of the AI we have now—the young George Washington trailer has trees, brighter lighting, that kind of stuff. I'm curious if AI is being used in it, 'cause it seems a little more challenging to pull some of those shots off convincingly in that type of setting.
Yeah, I think House of David was in the right usage category because it's biblical—it's in a world that no longer exists. And, I mean, 18th-century America no longer exists either, but with biblical stuff there's more fudging, right? You can get away with more, especially when you're talking about a giant and mythical things like that.
Yeah. I mean, the scene they used AI for in season one was, like, an origin story of angels—or Gabriel, the angel; that's my lack of Bible knowledge—but it was one of those origin stories with angels, so it was already a very fantastical story. The AI worked well for that use case.
Yeah, this is all good movement in the right direction. Again: Adobe's pivot into custom models and the Invoke acquisition, Runway's pivot into servicing industries other than film and TV. And then finally, LA Tech Week—let's cover the Promise event a little bit. Yeah.
Well, tell me about the Promise event that we were at.
Yeah. So Promise is another promising startup. You may know Dave Clark, who is quite well known in the AI community. I think there are two co-founders—Jamie is one of them—and then they recruited Dave.
So the three of them are the heads of Promise, and they also acquired Curious Refuge, which is Caleb and his wife Shelby. They have a team of about 30 people over on the west side here in LA, where they're building next-generation workflows specifically for film and TV usage. On Thursday last week, they held an event catered around the Tech Week buzz we had going on in this town.
Joey and I attended—take a look at these videos here and the screenshots. The keynote speaker was Albert Cheng of Amazon Studios. Albert Cheng comes from Disney, where he was an EVP in the ABC group—a very high-level guy. And then for the last few years he was the head of Prime Video—VP of Prime Video—which, for everybody that we know at Amazon, would be their boss's boss's boss. He was in charge of the entire vertical that Stage 15 and a lot of the VFX stuff sit under. Recently, over the last couple of months, he switched over to a new side of Amazon called AI Studios at Amazon MGM Studios, and he is the head of that. He's building a completely new organization and pivoting away from the traditional role of physical production work.
So Promise's co-founder George Strompolos interviewed Albert Cheng on the panel and asked really fascinating questions. One of the questions that got some laughs was, is there gonna be a generative AI category on Prime when you start to make content? And Albert's answer was, no—ideally, if we do it right, you won't even know it's generated with AI.
Yeah, that was the question that stuck with me too. If it's done right, you don't know. It's just good storytelling; it's not a separate category.
Exactly. Yeah. And the blending of traditional filmmaking techniques with new technology—that is the experiment I think we're gonna see unfold at Amazon AI Studios. Now, this type of new studio formation within a studio is, I think, pretty commonplace. I'm sure there's a version of it at every major studio right now, just trying to explore and see what this can do.
Yeah. The other thing they launched that week—they didn't really announce it at the event—is another, I dunno if it's a separate company or just a division of Promise, called the Generation Company. That's led by Nem Perez, who has a big VFX background and a Gen AI background. It's a separate branch focusing on VFX services for films and productions using AI in the pipeline. So I feel like the combo with a lot of the AI studios is: making stuff for hire, developing their own IP, and a services division for other companies that are trying to do VFX or other things and wanna use their expertise in that pipeline.
Yeah. I'm curious to know what Nem is up to and what his background is. Nem, if you're watching this, come on the podcast. We'd love to chat with you.
Yeah, I'll ask him. He was in the original Cinema Synthetica last year—it feels like ages ago, but that was only last year. He was on one of the winning teams; they did a zombie film. It was fun.
Amazing. Yeah. Speaking of AI studios, a teaser: we've got a cool episode coming out later this week with an AI studio, so stay tuned for that.
Oh, shout out to Joey for dropping another one. Next week or this week?
This week. It'll come out this Friday.
Okay, sounds good. Yeah, shout out to the Denoisers, our viewers out there—our usual commenters, SoraNotSora, Reed4109—thank you for your continued support.
And love the support from cometlighttheway and ChrisCapel. We thank you.
Yeah, thanks everyone for the comments. If you have any questions or anything you want us to talk about, just leave it in the comments over on YouTube. That's the best spot to reach out and keep the conversation going. And links, as usual, are over at tnopodcast.com. Thanks for watching. We'll catch you in the next episode.