
Denoised
When it comes to AI and the film industry, noise is everywhere. We cut through it.
Denoised is your twice-weekly deep dive into the most interesting and relevant topics in media, entertainment, and creative technology.
Hosted by Addy Ghani (Media Industry Analyst) and Joey Daoud (media producer and founder of VP Land), this podcast unpacks the latest trends shaping the industry—from Generative AI and Virtual Production to Hardware & Software innovations, Cloud workflows, Filmmaking, TV, and Hollywood industry news.
Each episode delivers a fast-paced, no-BS breakdown of the biggest developments, featuring sharp analysis, under-the-radar insights, and practical takeaways for filmmakers, content creators, and M&E professionals. Whether you’re pushing pixels in post, managing a production pipeline, or just trying to keep up with the future of storytelling, Denoised keeps you ahead of the curve.
New episodes every Tuesday and Friday.
Listen in, stay informed, and cut through the noise.
Produced by VP Land. Get the free VP Land newsletter in your inbox to stay on top of the latest news and tools in creative technology: https://ntm.link/l45xWQ
Denoised
What could DeepSeek mean for the film industry? Plus, Blackmagic's $7K URSA Cine 12K Camera
Joey and Addy explore the latest developments on DeepSeek AI and discuss what it could mean for the media and entertainment industry. They also dive into Blackmagic’s new URSA Cine 12K, its lower-cost body-only package, its new RGBW sensor, and how it competes in Hollywood’s evolving production landscape. Plus, AI and copyright clarifications.
#############
📧 GET THE VP LAND NEWSLETTER
Get our free newsletter covering the latest news and tools in media creation, from virtual production to AI and more:
https://ntm.link/vp_land
#############
📺 MORE VP LAND EPISODES
How the URSA Cine 12K uses AI to get metadata from ANY lens [NAB 2024]
https://youtu.be/wLmD4ZmJHdE
The Brutalist AI Controversy Explained
https://youtu.be/otRBw0o7QlI
AI and Virtual Production: The Future of Filmmaking with Disguise's Addy Ghani
https://youtu.be/mMLMhUFJH2U
#############
📝 SHOW NOTES & SOURCES
Connect with Addy @ LinkedIn
https://www.linkedin.com/in/addyghani
DeepSeek & Janus-Pro Image Model
https://www.vp-land.com/p/deepseek-also-has-an-image-model
DeepSeek-R1
https://api-docs.deepseek.com/news/news250120
Runway Partners with Lionsgate
https://runwayml.com/news/runway-partners-with-lionsgate
US Probing If DeepSeek Got Nvidia Chips From Firms in Singapore
https://archive.md/VH5Ps
Janus-Pro
https://www.youtube.com/watch?v=rNg-MVUN_FQ
Hunyuan Video
https://hunyuanvideoai.com
KLING AI
https://www.klingai.com
Flux AI
https://flux-ai.io
The BIGGEST thing since BAYER - Blackmagic's EXTRAORDINARY new RGBW Sensor - URSA CINE 12k LF Review by @team2films
https://www.youtube.com/watch?v=_-o-YYZE3D0
Blackmagic URSA Cine 12K
https://www.vp-land.com/p/blackmagic-design-at-nab-2024
Blackmagic Unveils Customizable URSA Cine 12K Body for $6,995
https://www.vp-land.com/p/blackmagic-unveils-customizable-ursa-cine-12k-body-for-6-995
US Copyright on AI: If it's a Tool, it's Cool
https://www.vp-land.com/p/us-copyright-on-ai
Ross Video
https://www.rossvideo.com
Grass Valley
https://www.grassvalley.com
AJA Video Systems
https://www.aja.com
Sony VENICE
https://pro.sony/ue_US/products/digital-cinema-cameras/venice
RED KOMODO
https://www.red.com/komodo
Sony BURANO
https://sonycine.com/burano/
Sony CineAlta
https://pro.sony/en_AO/products/digital-cinema-cameras/broadcast-emotion-every-frame-digital-cinematography-cameras
ROE Visual
https://www.roevisual.com
Absen
https://www.absen.com
AOTO
https://www.aoto.com
Sony FX3
https://electronics.sony.com/imaging/cinema-line-cameras/all-cinema-line-cameras/p/ilmefx3
DJI Ronin 4D
https://www.dji.com/global/ronin-4d
ElevenLabs
https://ntm.link/elevenlabs
#############
⏱ CHAPTERS
00:00 Intro
00:32 DeepSeek Overview
02:46 ChatGPT 4 vs. DeepSeek-R1
04:55 DeepSeek Impact on Entertainment Companies
07:45 Janus Pro
09:45 China AGI vs. U.S. AGI
11:00 Chinese Video Generation Models
14:05 Blackmagic URSA 12K $7K Kit
17:10 Blackmagic vs. Other Broadcast Video Systems
23:15 Blackmagic's RGBW Sensor vs. Debayering
28:45 Team 2 Films on Blackmagic's RGBW sensor
32:01 Blackmagic Ecosystem and Workflow
34:00 Hollywood's Use of Different Cameras
37:59 Media Cards and 5G
41:20 Virtual Production and High-End Cameras
44:48 AI and Copyright Clarifications
53:30 Respeecher Correction
54:50 Netflix Eyeline Research's Go-with-the-Flow Model
In this video, we're going to talk about DeepSeek and what it means for the film industry, Blackmagic's new RGBW sensor, and what the U.S. Copyright Office says about protecting AI work. And finally, Eyeline Research's Go-with-the-Flow AI model. Welcome back to the podcast. Welcome to VP Land, I'm Joey Daoud. I'm Addy. Welcome back. And I'm really excited to have Joey back here. And we have a lot of things to cover. We do have a lot of things to cover. Yeah. So we're back in our topical format of just, let's talk about what's happening. Unless you've been under a rock, DeepSeek is probably the biggest story in tech this week. Okay, groundwork. People might have some understanding of what DeepSeek is, or at least have heard of it. What I want to get into here is how this could affect the media and entertainment industry. Yep, there are far better podcasts and articles that are going to do deep dives into the tech behind it and the business behind it. But as an overview, what's your understanding of how it came to be and what this model is? Yeah, so to me this feels like a completely classic scenario of ingenuity with little resources. It sounds like, under the pressure that a lot of the Chinese researchers are in, with limited access to GPUs, limited access to maybe the budget, and even by copying what is already out there, they were able to create a far more efficient model using way less resources that rivals and beats the U.S. standards: stuff from OpenAI, stuff from Google, and so on. Stuff that cost billions of dollars. So DeepSeek, the company, trained a model that they came out with that they said cost about $5 million to train. There's a lot of doubt about whether that's the actual number, but I think the gist is it cost a lot less to train, and it's open source. So researchers have downloaded DeepSeek, popped open the hood, and it's legit. Yeah, it works. I think some of the caveats, too, because some of the headlines were like, well, why did we spend billions, trillions of dollars to train large language models, and they did it, you know, Tony Stark did it with a box of scraps in a cave. I love that. Yeah. It's a relevant caveat, though. I mean, they could not have done this without the existing AI ingenuity, I mean, I'm sorry, the U.S. ingenuity and investment that made this possible. And this probably seemed inevitable, like, yes, models would get better, faster, cheaper in the future. But that fast? That cheap? Welcome to the AI game. Look at that exponential leap. So I was just watching another video where there's different versions of R1, the product that DeepSeek released. Right, so R1 is the big model that caused all this. There are some other models I want to talk about that I think went under the radar, but yeah, so R1. Yeah. So right now, in order to run, let's say, ChatGPT 4, you need to run it on a supercomputer. So you're talking about tons of GPUs at the same time. This R1 model is so efficient that you can technically run it at home with a few machines with multiple GPUs. And in the video that I watched, there was an AI scientist. He had, I think, three servers in his garage, and each one of them had two NVIDIA GPUs, not the latest and greatest. And he was like, look, it's running live here. And he took a lighter, almost like a quote-unquote mobile version of R1, and he was able to run it on a Raspberry Pi. Really? It's insane how good it is.
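As a rough illustration of the "run it at home" point, here's a minimal sketch using Hugging Face Transformers to load one of the small distilled R1 checkpoints DeepSeek published. The model choice, prompt, and generation settings are illustrative assumptions, not anything from the episode.

```python
# Minimal sketch: load one of the small distilled DeepSeek-R1 checkpoints locally.
# Assumes `transformers`, `accelerate`, and `torch` are installed and that your
# machine can hold a ~1.5B-parameter model; swap the model ID for a larger
# distill if your hardware allows. Prompt and settings are just for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Explain what an RGBW camera sensor is in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# R1-style models write out their reasoning before the final answer,
# so leave room for a fairly long generation.
outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```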
And that's why I think the cost savings come from the fact that you don't need that many GPUs to train it and that many GPUs to run it. Yeah. And I think this opens up an opportunity with NVIDIA, which is coming out with Digits, their little Mac-mini-style computer that they announced. That's like a consumer desktop AI computer. And I remember talking to Tim Porter, who is CTO at MOD Tech Labs and also contributes to VP Land. We were talking, I was asking him about Digits, and he's like, I want stacks of them. Like, I don't think one's enough. But this was before R1 was announced. I don't know what he'd say now, but I'm thinking, my guess is more can run on one Digits box than we thought. Yeah. Yeah, I think R1 can run on one Digits. If it can do light-level stuff on a Raspberry Pi, it can certainly run on a car, like fully real time. That's my guess. Yeah. So what's interesting, and I think you've been following too, is the repercussion for the investors and the stock market, and everything that just kind of blew up. And it needed to be blown up, almost. The industry needed a little bit of calibration, because everything was just hyperinflated. You know, yes, we need a 2-gigawatt data center. Yes, we need a $100 billion model. And now it's like, do we? Right. And I mean, I guess that's my other question: because the thought was, to keep progressing, you need to keep training bigger and bigger models. Yeah. And I've heard counterarguments to that, where it's like, do we? And especially, and this is where I want to get into with media and entertainment, bigger and better with data that you don't really know where it came from: is that necessarily better? And does this provide, I don't know, a clearer route for what a lot of the media and entertainment companies are trying to do anyway, which is train their own models on their own cleared data? We had the Lionsgate-Runway deal, where they're training a Runway model just on Lionsgate IP. I've seen other reports of Disney, or pretty much all the other companies, investing in just making their own models. Yeah, 100 percent, so that they know it is all theirs. Eric and I were covering this on the pod yesterday. The studios are really sensitive and secure with their data, and they probably won't allow it to go offshore, get trained into a model, and then bring the model back. They want to have everything happen within the four walls. And so if DeepSeek is open source and all of the neural network body is there, and all you have to do is give it new training data, or even a custom LoRA on the front end or something just to fine-tune it, then it absolutely could be done within studio walls. The repercussion of that is a lot of the big software companies, like Adobe, Autodesk, even OpenAI, are now probably doing enterprise-grade products specifically for studios, and this kind of upsets that business model, right? So now that you have this open-source thing that somebody capable within a studio can leverage and make their own, why would they go out to Autodesk and get this multi-million-dollar deal signed? Would they still want to? Even if you copied the open-source model of R1, would they still want to use it? Or is that still too much of a black box?
I mean, even some reports were saying that DeepSeek probably used OpenAI's, ChatGPT's data, and OpenAI was saying they stole our stuff, but no one really feels bad for them because they also took stuff from everyone else. Yeah. The thieves are complaining; it's like a modern version of the heist story, everyone's stealing from each other. But I guess, would they actually use that model? Or is it more just a proof of concept, where it's like, look, you can do a lot more than we thought was possible with less hardware and less powerful processors. Yeah, let's be clear here. I don't think R1 is a generative model; it's not a Stable Diffusion-type model. So I'm just using it as an example here. They do have Janus Pro. Okay. Which is their own sort of DALL-E competitor. Okay. So they did also come out with an image model, which went under the radar a bit more. Yeah. Output-wise, it wasn't really anything impressive. Okay, I thought so, not as impressive as R1. Well, so I was trying to dig into what is different about it. Okay. In the Janus model, they decoupled visual encoding. Unlike many other models that use the same visual encoder both for understanding, if you give it an image, figuring out what the image is, and for generating an image, if you ask it to create one, Janus and Janus Pro decouple the visual encoding process. That means there are separate encoders for each task. Yes. So when it goes into the latent space, the encoder is separate from the decoder. That's interesting. So I think that would mean the performance is probably going to increase, and I'm just thinking out loud from a video compression and decompression standpoint. Compression is always the heavier lift, and decompression tends to be easier, because you're going to decompress on a device or even in real time, where you don't have the GPU. That's a very interesting approach. I do like it. The only part that worries me is the patriotic side of me. And, you know, I know I'm being really optimistic about this, but the other side of me is saying, look, America is losing its advantage on AI, and surely China is not going to stop innovating. And have they already beaten us? And what does that mean? We're still so early in that. Yeah. So what are your thoughts on it from a geopolitical angle? Yeah, I mean, obviously, patriotic, team America. So, you know, people ask, is this a Sputnik moment? And maybe, in the sense that, sure, it spurs even more innovation. Yeah, more ideas. Yeah. It's still so early. It's too early to really know. It's not over. Like, you know, China has those Tencent models, the foundational models. They're pretty good. They have been competing one to one with the U.S. for a while now, since this whole thing started. So yeah, they one-upped us for now. Maybe we'll one-up them in six months. Yeah, or six days. Six days! It's not going to be six months. Yeah. And yeah, I know the race is sort of on for AGI, and that's the big buzz, of when we'll get there and who will get there first now, potentially. Yeah. But I don't know, I mean, can we be in a world where there are two AGI models?
I don't know what that looks like, or also, what the cultural impact of the training is. In the U.S. we hold different values than they do in China, right? And DeepSeek was known, or people flagged, that if you ask it about sensitive topics about China, Tiananmen Square, it won't tell you. I mean, there are similar things with our models too: you ask for obscene things or racist things and it won't do that. So yeah, these all have safeguards depending, obviously, on who trained them and what's culturally acceptable wherever they were trained. Yeah. But I mean, this is a big topic with large language models, but for video generation there have been a handful of Chinese models. There's Hunyuan, am I saying that right? I don't know. Yeah, Hunyuan is the one I like. Yeah. And then that's MiniMax, right? That's the same one. And then there's KLING. KLING is great. Yeah, the KLING ingredients feature, where you give it a couple of source images and it generates a video, has blown me away consistently. I believe Flux might be Chinese, I don't know. But I mean, that's the thing: you don't know what product you're using. You just go with what works and what's good. And they're all cloud-based. They are all cloud, yeah. As long as they have an English interface, I can use it. I think one of them, the Tencent one, wasn't in English, and I was like, I'm not going to translate this website. I'm good. One last thing about that, implication-wise, or I mean user-interface-wise, that I thought was interesting with DeepSeek: when you use it, it shows you its thought process, how it's thinking about things. Saw that. And people like that a lot. So I'm wondering if we're going to see a whole UI change, or just an update, in all these other models showing you what they're thinking, which I think could also be much more useful. Because right now you type out what you want, you get a response, and sometimes you're like, that's not really what I wanted. But if you see how it got there, then you can be like, oh, you're thinking of this wrong, or you misunderstood what I'm trying to say here. Yeah, let me reprompt you, and you could get a better result. Like an advanced user mode. Yeah. And it's like, oh, okay, I see where you got derailed, let me clarify that. Which I think would be useful both for language models and, if there's a way to implement it, for video generation models too, where it's like, oh, this was not the video I wanted, but if I see how you got to this conclusion, I can try to fix my prompt or interact better. I absolutely think DeepSeek is going to open the door to that for other models. Now we're all going to want it. Yeah. I think we're past the first phase of, oh, this is cool, and now we're like, no, I need the right result, I need the right answer. Yeah. So control is the buzzword, the main theme, I think, for this year. It's like, we know we can do these things, but now we need the control. Definitely applies to video generation. Oh yeah. Yeah, I mean pixel-by-pixel control.
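On the visible thought process mentioned above: R1-style open checkpoints typically emit their reasoning wrapped in think tags before the final answer, which is what lets a UI show "how it got there." Here's a minimal sketch of splitting the two; the tag convention and the sample text are assumptions for illustration, not an official API.

```python
# Sketch: separate a reasoning model's visible "thought process" from its final
# answer so a UI can display both. Assumes the reasoning arrives wrapped in
# <think>...</think> tags; the sample response below is made up.
import re

def split_reasoning(response: str) -> tuple[str, str]:
    """Return (reasoning, answer) parsed out of a raw model response."""
    match = re.search(r"<think>(.*?)</think>", response, flags=re.DOTALL)
    reasoning = match.group(1).strip() if match else ""
    answer = re.sub(r"<think>.*?</think>", "", response, flags=re.DOTALL).strip()
    return reasoning, answer

raw = "<think>The user wants a 4K deliverable, so downsampling matters...</think>Shoot 12K, deliver 4K."
thoughts, answer = split_reasoning(raw)
print("Model was thinking:", thoughts)
print("Final answer:", answer)
```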
I'm tired of text prompts. Give me something where I can steer a camera, especially in our industry. We have, you know, pixel-perfectionist people, and rightfully so; it's their job to go pixel by pixel on a frame, every frame in a movie, and make sure it's all good. Yeah, right, exactly. Scan every line. Rotoscoping, color grading, compositing, I mean, you're talking about trades where people literally pore over pixel-by-pixel detail. So absolutely, if we can't have that level of control, it's just not going to be usable for the mainstream. Speaking of control and sensors, the other big story this week, well, it's a combo: there's the pricing news, and a new video from Team 2 Films that I thought was really interesting, about the Blackmagic URSA Cine 12K camera. That camera is not new; it was announced at NAB. We did a bunch of videos about it when they announced it, and it's been out shipping in the wild for a few months now. What I didn't realize was that they completely built a new sensor and sensor technology. Yeah. And the recent news this week is about how they're selling it. They didn't do a price drop; the original kit was about 15 grand, which is still pretty remarkably cheap for a high-end cinema camera that they're trying to release out into the world. But that came with a Pelican case, a shoulder rig, a bunch of accessories. That was how they sold it. I think the viewfinder was extra, and I don't think it came with an onboard monitor, but it came with one of their 8-terabyte high-speed storage modules, their new proprietary SSD. Maybe it's proprietary; it's their own thing. It looks like something out of Ghostbusters; it has a little handle, and you pop it into their media storage dock. That's eight terabytes. And they sell the reader for that as well, I'm sure, like a dock; there's a rack-mountable three-dock version. It's like what Blackmagic does with the HyperDeck. Maybe it's similar. Yeah, the HyperDeck can just take regular SSDs, but this is a new format thing. Anyways, the kit came with that, and I think it has a user-swappable lens mount, and it came with all of the mount options, I believe. Oh, nice. So if you want to swap it out, you don't have to send it to the manufacturer. No, you just do it yourself, and you don't have to pick which version you want. So it was a lot of really cool options. So anyways, now they announced you can just buy the body. If you're like, I already have all of the accessories, I don't need the other stuff, you can just buy the body for $6,995, about seven grand. Joey, this feels like they're giving it away for free. Like, what is their cost at the end of the day? I mean, it's very aggressive. They want this in the hands of professionals. Proper filmmakers, yeah. They want it used on projects and making films. Why is it that every time, you know, when the FX9 came out, or the ALEXA came out, or the KOMODO-X came out, it makes such a big splash in our community and people go crazy, and then when something like this comes out, it's like... I don't understand. I mean, I know there is some reputation of Blackmagic stuff being cheap.
Yeah, well, it's not for real filmmakers, cheap quality, right? Or it's for beginners or something. I don't know why that stigma persists. The only thing I can think of is that the stigma comes from some of their earlier broadcast gear and hardware. I guess if you compare their stuff to a Ross or a Grass Valley. I'm not as much in the broadcast world, but I know that's a very demanding world where you cannot have hardware failure and you have to have stuff running 24/7. One company that I always compared to Blackmagic, especially on the broadcast side, is AJA. Whereas a Blackmagic converter, like a little pico converter, would be $300, the same thing from AJA would be $3,000, and the same thing from Grass Valley or a proper broadcast company would be like $7,000, $8,000. And they all work. I mean, maybe the Blackmagic might overheat if you keep it on for a year, but it gets the job done for 99 percent of people. I've always loved their business model from the very beginning. It's the 90-10 model: give consumers 90 percent of a premium product for 10 percent of the price. And that's exactly what they're doing with the URSA, right? Ten percent of the price of what would be, what, a $150,000 camera with all that kit. And that's what a VENICE 2 is, right? A VENICE 2 body is probably $75,000 or so, plus the memory, plus all the rigs and everything; you're looking at 10x the price of an URSA now. Are they the same? No, of course not, and it's been noted. I mean, yes, the ARRIs and the Sonys have higher dynamic range. Yeah, and they're also two to five times the cost. So I think there are three things holding Blackmagic back, in my opinion, from getting better market penetration into the pro filmmaker world. One, the body on the URSA is still just clunky and bulky, and it doesn't look good. And it's also bigger than a KOMODO. It's a beast. Yeah. I mean, they did come out with the PYXIS, which was their sort of answer to a more boxy form factor camera. Exactly, but that's more in line with the 6K Pro. Yeah. So by the time you rig that up, you put on your battery, your focus, all of that FIZ and everything, you're looking at a giant thing. Versus, you know, it is a cinema camera; they're like, it is for cinema stuff. Yeah, so it's not a run-and-go setup. And I think the engineering hurdle to miniaturize that thing would be significant, and it would drive up the cost of manufacturing and things like that. So that's probably why it's $15,000 in that format. Okay. Number two, I think it's a marketing and branding perception, which is important in every segment, not just for automobiles, not just for furniture, but also for cameras. I compare it to the Hyundai-Kia issue, right? Growing up in the 90s and early 2000s, Hyundais and Kias were not great cars. They were just entering the U.S. market, but they were really cheap, cheaper even than Toyotas and Hondas, right? And then over the last five to six years, they've really grown up, and now you have EVs that are rivaling Tesla, you have luxury brands that are rivaling Mercedes.
But if it still has a Hyundai or Kia badge on it, you're not going to pay $90,000 for a car when you can just get a Mercedes. So Hyundai mitigated that issue by creating Genesis, which is its own separate brand. And now Genesis has its own dealership, which is next to the Hyundai dealership; it's probably the same people running it. But now it feels like, okay, I'm buying into this thing that is not a Hyundai, and I am going to pay $90,000 for it. I think there may need to be a Blackmagic premium label. I mean, I think maybe they've been trying to do that with the URSA label versus the Pocket label. The Pocket was always a bit more toward entry level, accessible, and URSA was always the higher end. You can't find a Hyundai or Kia logo on a Genesis car, and the websites are different. But that's interesting you say that, because from my perception, I never really perceived Blackmagic as low end; through the marketing, the brand always felt high end to me. And I don't know if that's a more recent thing because of, whatever, my experience of 10, 12, 13 years of going more in depth with this and being at NAB. Let me ask you this, then: what's lower end than Blackmagic as a camera? I don't know if I would say a specific manufacturer, more like specific models. Like, as much as I love the Lumix line, it feels lower end. Lower end, right? Yeah, the body builds are plasticky. Everything else feels higher end, doesn't it? Even a RED, which is similarly priced. Yeah, I guess because RED is just cooler. I mean, I think of the RED ONE, and that was... yeah. But I'm saying from a strictly branding perspective. I think it's because the other ones have so many options. I mean, I would kind of perceive the Blackmagic Pocket Cinema Cameras as higher than an A7 Mark whatever. I always mix up the Sony lines, but is it a BURANO or is it a VENICE? It's all Sony, you know, but they have a variety of lower-end cameras and higher-end cameras. I mean, they have CineAlta, which is their sort of separate sub-brand. Yes, the BURANO, I think, is part of CineAlta. Yeah. So the BURANO is their mid-tier. But I'm sure you can relate: you go on set and the DP has a VENICE 2 that he's rigging up, and everybody just pauses and looks at it. It's like when car people go to a car show and there's a very limited edition Rolls-Royce; everybody just stops. That's the sort of aura around the VENICE 2. And a lot of it has to do with price and availability and difficulty in using it, and also superior image quality and just preference by DPs, right? So I think Blackmagic has to maybe get behind a few DPs and have them be ambassadors for the brand. Yeah. And I think that's probably a big push of why, not a price drop, but why make it available at a lower price as a body only. Having said all that, I am in love with this camera, especially because I took a little bit of semiconductor physics in college and I just know that it's super difficult to design. Yeah. So like I said, I set this up at the beginning and we have not talked about the sensor at all. You know more about color science than I do.
So explain the debayering process and how most existing sensors work, and then what you know about how Blackmagic has changed this with a new type of sensor. Sure. So all of our imaging is essentially going to go from photons of light down to three channels: red, green, and blue. In the past, during the early days of digital sensors, there used to be a prism in front of three sensors, and it would split the light into red, green, and blue. Three-CCD. Three-CCD, yeah, or three-CMOS. And those were tiny too, quarter inch, two-thirds inch, because that's all you could do back then. And then it was just a matter of mathematically adding them up, more or less, because each one is operating in its own frequency, its own wavelength, and so on. So now we have one giant sensor, and within that giant sensor there are three kinds of photosites: a red photosite, a green photosite, and a blue photosite, times however many pixels you have across the sensor. The photons hit the photosites, each one generates a voltage, different for each color, and you can more or less debayer it. So you have to figure out what the interference from the green channel is versus the red versus the blue. The Bayer pattern has twice as many green. Yes. It's a little bit like how our eyes and our retinas work: we have more rods than cones because we want to detect differences in bright and dark more than differences in color. And so the sensors also put more emphasis on green than red, and blue is the least, something like that. And then in the end, what they output is going to look uniform. Blackmagic introduced a fourth photosite: white. So it is an RGBW sensor, times however many pixels, and it's insane. The challenge now is that white picks up red, green, and blue, right? All of the wavelengths. So how do you take the value of white and distill it down to each of the other three and contribute to them rather than interfere with them? The white value alone would probably overload and blow out the red, green, and blue values. So you have to take what's important in the white value, however you filter or process it, and contribute it to the red, green, and blue, because ultimately the signal that goes out is not going to have any white in it as it goes down your media pipe and into the processing unit of the camera. The white is just going to be at the capture site and then contribute to the red, green, and blue. And whatever you're recording to disk, a ProRes or raw format, is not going to have any white in it. These only record Blackmagic RAW. Yeah. So maybe Blackmagic has made room for a W channel, but as soon as it hits Resolve or Premiere, there is no white data, right? So the white data has to be used to enhance the image, then go away. Mm-hmm. Okay, so they have these white photosites, but how are they using that data? Yeah, I'll give you the flip side of it. I think you and I cover a lot of LED processors and panels, you know, ROE, Absen, I think AOTO as well, and now they have RGBW panels. The issue is that the incoming signal, like an HDMI signal or SDI signal, is still RGB. So how are they creating the white? They're doing the inverse of Blackmagic: they're creating white from RGB values. They're like, okay, based on these three values, I think the white value should be this.
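To make the mosaic idea concrete, here's a tiny NumPy sketch contrasting a classic Bayer (RGGB) tile with a hypothetical RGBW tile, plus a deliberately naive way an unfiltered white sample could feed the luminance estimate and then "go away." Blackmagic hasn't published the URSA Cine 12K's actual pattern or processing, so the layout and the math here are purely illustrative assumptions.

```python
# Illustrative only: a classic Bayer (RGGB) tile vs. a hypothetical RGBW tile.
# Blackmagic has not published the URSA Cine 12K's real mosaic or pipeline;
# this just shows the general idea of adding a fourth, unfiltered "white" site.
import numpy as np

# Classic Bayer: twice as many green sites (green carries most luminance detail).
bayer_tile = np.array([["R", "G"],
                       ["G", "B"]])
# Hypothetical RGBW layout: one green swapped for an unfiltered white site.
rgbw_tile = np.array([["R", "G"],
                      ["W", "B"]])
print(np.tile(bayer_tile, (2, 2)), np.tile(rgbw_tile, (2, 2)), sep="\n\n")

# Naive illustration of "W contributes, then goes away": pretend each filtered
# photosite reads a true value plus noise, while the unfiltered W site gathers
# more light and so reads luminance with less relative noise.
rng = np.random.default_rng(0)
true_r, true_g, true_b = 0.4, 0.5, 0.2
true_luma = 0.2126 * true_r + 0.7152 * true_g + 0.0722 * true_b
r = true_r + rng.normal(0, 0.02)
g = true_g + rng.normal(0, 0.02)
b = true_b + rng.normal(0, 0.02)
w = true_luma + rng.normal(0, 0.005)            # cleaner luminance reading
luma_rgb_only = 0.2126 * r + 0.7152 * g + 0.0722 * b
luma_with_w = 0.5 * luma_rgb_only + 0.5 * w     # arbitrary blend weight
print(round(luma_rgb_only, 4), round(luma_with_w, 4))  # W never leaves the camera
```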
And then they send that computed white value down into the LED panel. So the camera is doing the reverse, where it's like, well, based on the values of the photosites around this white photosite, what do we think this white value should be? Yes. And does it actually make the image better on the LED panel, or does it just blow out, and you're seeing more white than red, green, blue and all the colors? So it's a really complicated color science algorithm problem. I'm sure you can figure out different versions of it on paper; however, to actually run it in real time on that many pixels at the same time, you're talking about more compute. And so the camera has to be slightly bigger, the central processing unit on the camera has to be beefier, it's going to consume more battery. I think that's also probably why they have this proprietary 8-terabyte media card dock. I bet the RAW data probably still has the white channel in it; it probably remains there, and then the minute you convert it into a conformed standard, like ProRes, it goes away. We can ask Blackmagic. Hey Blackmagic, please comment and let us know. We're very curious, and we applaud you for going this way. Yeah, we'll build up a list of questions and I'll ask them at NAB this year, as far as how the sensor and everything works. So yeah, Team 2 Films had a really phenomenal video breaking down the science and animating how these sensors work, and the practical application of all of this stuff. Like, what does this mean for actual filmmakers, and why is this good? So it is a 12K camera. There are arguments like, what do you need to shoot 12K for, unless you're shooting plates for an LED wall or VFX shots or something. But you can adjust the settings and shoot 8K or 4K, which is common in other cameras with a Bayer sensor. Yeah. So how do those cameras handle downsampling if you're shooting at a lower resolution? Because you are giving up some stuff. Yeah, so I did see that video, and I think the way the new Blackmagic sensor handles it is that from a group of pixels it creates a cell, so let's say six, and then each of those cells represents a pixel on the lower-res version, right? It's basically still able to use the entire sensor. Yeah. Which contrasts with the other way: if you're shooting on a traditional camera, you either pixel skip, so you go from 12K to 6K by skipping every other pixel, or you pixel crop, so you just use the middle 50 percent of the sensor. And that is annoying, because then all of your lenses, everything, your image is now tighter, so you lose your wider field of view, 100%. And the problem with pixel skipping is you just turned up the aliasing and moire by 2X or 3X, because fine details like that... That's true. I'd probably go crazy on a dumb downsample. And then also, some cameras will take the whole image and then downsample 12K to 6K using math, but that also takes up GPU and CPU power. I don't think cameras have GPUs, but it takes CPU power, and that's not efficient either. So with Blackmagic's cell arrangement, you're still using every single bit of those photosites. You're capturing all those photons. You're just not losing any quality. Yeah. And that's the biggest benefit.
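Here's a small NumPy sketch of the three downsampling strategies being compared: averaging a cell of neighboring photosites (binning, roughly the idea described for the new sensor), skipping every other pixel, and cropping the center of the frame. The cell size and test data are arbitrary assumptions, just to show what each approach keeps.

```python
# Compare three ways to get a lower-resolution frame from a high-res sensor:
# cell/bin averaging, pixel skipping, and center cropping. Purely illustrative;
# cell size and image content are arbitrary.
import numpy as np

rng = np.random.default_rng(1)
full = rng.random((8, 8))          # stand-in for a monochrome 8x8 "sensor readout"

# 1) Binning: average each 2x2 cell -> every photosite contributes, full field of view.
binned = full.reshape(4, 2, 4, 2).mean(axis=(1, 3))

# 2) Pixel skipping: keep every other row/column -> full field of view,
#    but detail between kept pixels is thrown away (aliasing/moire risk).
skipped = full[::2, ::2]

# 3) Cropping: keep only the center half of the sensor -> no aliasing penalty,
#    but the effective field of view tightens.
cropped = full[2:6, 2:6]

for name, img in [("binned", binned), ("skipped", skipped), ("cropped", cropped)]:
    print(name, img.shape, "mean:", round(float(img.mean()), 3))
```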
Like, I think the tangible benefit for filmmakers is, okay, you can have this camera but just shoot 4K, and it can resolve the fine details. You're not losing detail, you're not losing your field of view, you're not really giving up anything. Yeah, which makes it a very versatile camera. The other thing I would imagine, and I'm not a hundred percent sure because I haven't played with the camera yet, please send me one, is that having that extra W photosite in each cell would give you amazing low-light performance, because you're not just capturing a thin slice of the wavelength spectrum, you're capturing the entire thing. A W is going to be the whole thing. So you're letting all this light in versus, you know, thin strips of the wavelength. So my guess is at night it probably looks great. Yeah, there were some night-shot tests in the Team 2 Films video at 3200 ISO. I thought it was a bit noisy, still usable. But I'd be curious. And this is something I've been wondering about when we think of camera workflows now and what happens in the camera versus what happens on the computer. Because if you're shooting Blackmagic with Blackmagic RAW and then you go into Resolve, part of the argument would be, well, Resolve has really good machine learning noise reduction filters, and it's a bit like splitting up the image-capture process: some stuff is handled on the camera, and some stuff maybe you just handle on a beefier processor if you want that really good final image. A hundred percent. If you want users in the Blackmagic ecosystem, and you should because you've made them buy this camera, you'd better make sure they use Resolve and then upload to cloud. I would do it that way. Yeah, I've gotten sucked into the ecosystem. They have a hundred percent pulled the Apple model and sucked me in, between Blackmagic RAW cameras and going into Resolve. I bet, and I'm just guessing here, the denoise in Resolve is probably fine-tuned to the Blackmagic RAW format. I don't know, but I would not be surprised if it reads the metadata and knows there is a W channel from this camera. Oh, it recognizes the camera. Then it auto-adjusts itself to denoise exactly that. Hopefully it could be like that. One of the biggest things I like about BRAW is that their cameras have gyro sensors in them, and the camera records its position in the BRAW file, and then you can use that to stabilize. And it is phenomenal stabilization, because it's using real data; you don't get the weird warpy effect. Is that just with BRAW, or can it do other RAW formats? I don't know if it could do it if you're shooting RED RAW. Do RED cameras have gyro sensors in them that record that? RED has a media player, but I don't think it has image stabilization. Does it even record any data for that? Probably, yeah. I think that's pretty standard for a lot of high-end cameras. Yeah, even $1,500 cameras have the sensors in them. Look, I have an URSA here, you can see it.
It makes me want to upgrade, and of course makes me want to go out and shoot stuff. I think where the rubber meets the road will be when you hear about something like The Creator, where they used an FX3, or Civil War, where they used the DJI Ronin 4D. If there is a major Hollywood feature that used the URSA Cine 12K, and it'll happen, I'm sure, I think that's when we're going to be like, oh sure, that's a serious camera. It's interesting the Ronin 4D got so much attention, because if I remember correctly, going all the way back to the first Captain America, I believe they used one of the first Blackmagic Pocket cameras as a crash cam for a few shots, like putting it on the Jeep that smashed into another Jeep. Yeah. The Ronin 4D got a lot of attention, but that was for a handful of shots; it wasn't like they shot the whole film on it. I think a lot of the car shots, or the kind of war photojournalist, POV stuff, following people. Makes sense. Because I'm an F1 fanatic, I did notice they've been shooting the Brad Pitt movie that's coming out this year at real races last year. So I'd see the behind-the-scenes clips people took at real races, like, look, Brad Pitt is out there acting out the scenes. And I did notice they had the Ronin 4D on some of those shots where they were in the real melee of the race and the chaos, but Brad Pitt's acting like a real racer, the crew is following him, and there was a driver in front of Brad Pitt actually driving the car. The stuff they did at the real races was taking advantage of having all the racers and the crew there. So I saw a behind-the-scenes clip, or it was a real-life clip but for the film, where he was on the podium accepting an award with real racers next to him, but it's Brad Pitt accepting the fake award. Or Brad Pitt pulls up in his car in Mexico and gets out like he won the race, waving to the crowd. So those are clips where they shot him as a real racer at a real racing event, but it was for the movie. Yeah, and you can't bring a giant camera rig there with a crew; you've got to be stealth, right? And in that case, you can also have someone else further away remotely controlling the actual framing of the camera while someone else is just the operator. Yeah, we can camera-geek out forever. One more camera geek moment. I'll give you the Sony Rialto. Are you familiar with it? Yeah, where you take the cable and the sensor is separate from the body. So they used that for Top Gun, right? Yeah, all the shots where Tom Cruise is flying, it's just the sensor in front of him and the body is elsewhere. Mm-hmm. I thought for F1 that would have been perfect for Brad Pitt; F1 cars are so tight. They used it. I mean, I know they built a lot of custom rigs for the race cars and scenes, whatever they did. Yeah, and I'm sure there'll be a lot more behind-the-scenes stuff once the film's out. And also, same director anyways, what's his name? The Top Gun director.
Whose name I can't remember; I'm blanking on his name. All right, sorry. Oh, there was one thing they did announce with the new, less expensive body-only option: you don't just need the proprietary 8-terabyte media module or media card system, which retails for about $1,700 a card. They have a new adapter that fits in the same slot, and you can put two CFexpress cards in it. Okay. So that's also nice, because it makes it more accessible; a lot of people already own CFexpress cards, which are not cheap either, but it is nice that you can now use cards you already have. You don't have to purchase their expensive... You know, working on the 5G telecom side of things, I think Sony might have a card that goes into one of their high-end broadcast cameras that takes the raw format and then sends it over 5G or over Wi-Fi to be recorded somewhere else. Oh, and without losing quality? Yeah, in raw format. This has 10G Ethernet; maybe it's able to do that? I don't know. I think that was more just a faster way to offload the large data sets if you don't buy their dock rack to plug the cards into. Yeah, interesting. But ultimately, we are going to move away from physical media in high-end cameras. It's just a matter of when. Yeah, I know that's sort of been the prediction for a while, especially with Frame.io Camera to Cloud, and Blackmagic has their own version of something like this. The camera-to-cloud workflow is probably a bit more for proxies to cloud, not raw to cloud. But you think eventually? Yeah, that was the whole promise of 5G, right? Right now your phone is, what, I think 10 megabits a second to the network; 5G at its best is supposed to be gigabits, right? Like two, three gigabits a second. So it is good enough to do raw if the rest of your cloud infrastructure, at least the RF part, can handle it. Data costs are expensive right now from what I know, but if that comes down... The challenge with 5G is cell coverage. Whereas with 4G and the legacy towers, one tower can cover like a five-square-mile area, something like that, with 5G, because it's multi-band and multi-channel, you have to have a farm of towers surrounding your area, and then the phone will connect to multiple of them. Okay, so they're more distributed, like bonding. Yeah, so you're bonding. So especially at something like a Super Bowl game, where every camera will have a 5G interface card, you're going to need a lot of cell coverage. They bring in their own extra towers and stuff for something like that. Dude, I can get into it, the 5G stuff is so fascinating. So you have to buy the bandwidth from the government, from the FCC, and there are bids on it. So Verizon bought a slice, C-Band or whatever, for like $50 billion. Oh, so they just own that band? They just own that band in America; nobody else can be on that band. Yeah. Like, you know why they say don't use 5G phones when a plane is landing? I think the FAA has a band for plane traffic that is close to the C-Band or something like that, and they just don't want to have any risk, any interruptions. Interesting. Do you think something like Starlink is going to blow this all up? I don't think so.
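As a back-of-the-envelope check on those link speeds, here's a tiny calculation of how long moving an 8 TB card's worth of footage would take at a few nominal rates. The rates assume ideal throughput with no protocol overhead; they're illustrative, not measured figures from the episode.

```python
# Back-of-the-envelope transfer times for 8 TB of footage at nominal link speeds.
# Assumes ideal throughput with no protocol overhead; purely illustrative.
CARD_TB = 8
card_bits = CARD_TB * 8 * 10**12          # terabytes -> bits (decimal units)

links_gbps = {
    "4G-ish (0.05 Gbps)": 0.05,
    "5G best case (~3 Gbps)": 3,
    "10G Ethernet": 10,
}

for name, gbps in links_gbps.items():
    seconds = card_bits / (gbps * 10**9)
    print(f"{name}: {seconds / 3600:.1f} hours")
```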
I think you're dealing with limitations of physics and the speed of light. That satellite is still, I don't know, however many miles up; divide that by the speed of light and you're talking a few microseconds more than if a cell tower is right there, a thousand feet away. You can't beat the speed of light. So I think all satellite-based links will be inherently capped by how far away they are. Last thing I wanted to ask about, for virtual production, going back to the URSA: do you see it being a good option, not for shooting the plates, that seems like a good option, but for acquisition, for filming? Because I remember, I don't know if they've come out with another version, but with the previous URSA, the G2 or whatever that version was, which I think did have a 12K version but with their old sensor, they had an optical low-pass filter version that they were trying to make better for virtual production. Really? I missed that. Yeah, I guess that was not on your radar. Does that help when shooting with LED walls, and does it make a difference? Here's the thing with LED volumes: by the time it's all said and done and you've built the stage, you're a few million dollars in. At that point, you're not going to put a cheap camera on that tripod. You're going to want something that is tried and true and proven, because that's the last thing you want to have to worry about. You're already thinking about LED tiles and the angle of view and the color space, and then the media server and whether it can play back media or run Unreal, and whether the scene is optimized. There are so many variables. You don't want an extra variable of, oh, is the Blackmagic camera going to be okay? No, I'm just going to get a VENICE 2, I'm just going to get an ARRI ALEXA, because I've already spent $2 million. I guess this goes back to your tier-one, high-end kind of thing, because I feel like I've seen the URSA as a big go-to camera on a lot of volumes. Global Objects, when I was filming there yesterday, that was their stage camera. I'm sure they bring in other cameras if they have other needs, but that was their camera on the stage. Like 90 percent of the time, the DP is going to bring their own camera, right? So the camera that you have in the stage is going to be just a test camera at best, and the RED KOMODOs have done a really good job of being ubiquitous with VP, whether RED wanted it or not, because they're global shutter and they support all types of genlock. So even if the low-light performance is not that great, they're just synonymous with VP. And you may see an URSA 12K here and there, but the cameras have been bought and placed in the stages already. And then, it isn't out yet, but they have a 17K camera coming, which is like their 65-millimeter equivalent. Just from what was on their website, it's the same RGBW sensor, just a giant photosite, yeah. I assume you'd have the same benefits at 17K, 12K, 8K, 4K. I'd imagine, and I don't know how the exact results would look, but if you have RGBW panels and you pair that with an RGBW sensor, there may be a better transmission-to-capture path: a W cell on an LED tile transmitting W, and then a W photosite picking up the W, and maybe there's some extra fidelity to be had. But that's a very cool test to do.
Just a big W. Big win. Yeah, just like, here's my white. Got your white. Interesting. Maybe we can see if we can get some gear to test that. Okay, we definitely spent a lot of time on Blackmagic and cameras, but cameras are always fun to talk about. One last interesting thing that came out this week: the U.S. Copyright Office issued a report on AI and some clarification of what can be copyright protected with AI or using AI tools, which is a huge benefit, because this has kind of been a wait-and-see with everyone, like, can we protect this work if we use AI tools? So the gist of it is: works created solely by AI without human input cannot be copyrighted, which I don't feel is much of a surprise if the computer made something by itself. Does the computer even know what it made? Yeah. Human-AI hybrid: copyright protection applies to the human-authored elements. So if a human was involved in your editing of a movie or film, or you used Respeecher or something, a human was involved, it was derivative, it can be copyright protected. AI-assisted works must show, quote, sufficient creative control by humans. So a human needs to have input. Which I feel makes sense. Totally makes sense. And just to go back to the first thing that you said, if it's a hundred percent machine output with no human intervention, it can't be copyrighted. God, I hope so. Because there's going to be a lot of that, right? I think there was an issue with the monkey that took the selfie, and there was a debate on whether the photo could be copyright protected by the monkey, and I think it wasn't, because it was not a human. Yes. I don't think the monkey knows what a court of law is. Monkeys and computers in one bucket, and humans in the other bucket. Absolutely. Yeah. Look, there is going to be a future where AI models will train on AI data; generative AI images and videos will get so good that you could pass those off as training set data. I mean, yeah, that's another thing now, synthetic data. When there isn't enough data, or they've used all the data, or they don't have access to data, or they want cleared data where they know there are no copyright issues, you do synthetic data. So the interesting thing there would be, okay, let's say this data center in Norway spits out two petabytes of images that are now just on the internet. I think anybody can grab them, because they're not copyrighted. Oh, like they made synthetic data; it's a hundred percent machine-made. Yeah. That synthetic data itself would not be copyrighted, until a human then does something with it. Yeah, or a human went through every one of those photos and just conformed them to 8-bit Rec. 709 color space, something really basic like that. Now there's human intervention. So I'm sure this is going to be a debatable thing in the future: well, how much human intervention? Yes. Because I had a question. They were saying you can't copyright protect prompts, which I guess I can understand. But if you spit out solely just an image, you make a prompt and you spit out an image from Midjourney, my interpretation was, I don't think that image could be copyright protected if you just spit it out. My guess is Midjourney, first of all, owns it.
Like, you probably have to... Well, some of these tools, depending on what plan you buy, say it's for commercial use. There's a commercial use license on some of these tools depending on what plan you're purchasing. So I would imagine if you paid for a Midjourney subscription and you generate an image of a llama, then you own the rights to that llama. I think so. I mean, you made the prompt, it spit it out. Well, that's the thing. I always go back to Unreal Engine and how their license is structured, because I understand that. How does that work? So basically there are two types of use. One is entrepreneurial, freelance use, like, hey, I'm just doing this as a hobby: go for it, it's free. But if you make a movie or a game out of Unreal and it hits some kind of app store, then I think it's like 5%. But the game IP itself you own. So you would own the Unreal Engine scene, the project, but the minute it is running on Unreal Engine on a mobile device or whatever, then they have rights to a royalty. Does that make sense? They have rights to the royalty; they don't own the IP. They don't own the IP. So you've built this crazy game where a bunch of monkeys are taking cell phones, right? And you're like, no, I own the rights to that Monkey Land game. But the minute Monkey Land runs on a hundred thousand iPhones and generates whatever revenue through in-app purchasing, Epic has a cut. Yeah. I mean, maybe it's a model for AI tools in the future. Yeah, so I'm sure there will still be cases of sorting out: if you make the prompt yourself and you spit out the image and you don't modify the image, is the image copyright protectable? If you take the image and then run it into something like Runway and turn the image from Midjourney into a video from Runway, is that enough human intervention to copyright protect that? Yeah, I think so, for the video output for sure. You're just using tools to get to what your desirable output is. Yeah. I mean, I applaud the Copyright Office for staying relevant and staying on top of all the stuff that's happening. And the other note was that copyright applicants must disclose AI use and specify human contributions. That's tricky. You need to track your process, which I think is also going to be, like, oh, you really need to have documentation of what you did, which is not something most people keep. Yeah, I mean, you would have to open up the image metadata and see if the metadata has any record of what it's been through. Right, I doubt it. And this is the stuff that a lot of the bigger companies like Adobe are working on: metadata-driven protection, or at least recognition that it's been touched by an AI system. Yeah. And also vice versa, which we mentioned last time: if you take a photo with a real camera or shoot video with a real camera, there's a sort of invisible watermark to verify that it was shot, and also just to have the control of, hey, I don't consent to this image that I took being training data for any AI models. Can that be built into the metadata? Yeah, maybe.
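For a rough sense of how that engine-royalty structure works out, here's a small sketch. It assumes roughly Epic's published terms as described above (a 5% royalty on gross revenue above a $1 million per-product threshold); the exact numbers and exemptions should be confirmed against the current Unreal Engine EULA.

```python
# Rough sketch of an Unreal-style engine royalty: a percentage of gross revenue
# above a waiver threshold. Rate and threshold are assumptions based on Epic's
# published terms; always confirm against the current EULA.
ROYALTY_RATE = 0.05           # 5% assumed
WAIVER_THRESHOLD = 1_000_000  # first $1M of lifetime gross assumed royalty-free

def engine_royalty(lifetime_gross: float) -> float:
    """Royalty owed on lifetime gross revenue for one product."""
    return max(0.0, lifetime_gross - WAIVER_THRESHOLD) * ROYALTY_RATE

for gross in (250_000, 1_000_000, 5_000_000):
    print(f"gross ${gross:,}: royalty ${engine_royalty(gross):,.0f}")
```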
I think we're going to want it, because artists are really precious about their creative work, as they should be, and if they don't want to be part of a training set, they should have the right to opt out. So how do you scale that? How do you do it for billions and trillions of images? I think the image formats themselves would have to adapt and change for that; the JPEG format, the PNG format, all of that would have to have it built in. Yeah. Well, good to have the report. Still a bit of the Wild West. Yeah. A lot of this goes back to the root question of what movie studios want to do with AI: even if the output is copyright protectable, they still might not touch something like Midjourney, where the datasets are unknown, where you don't know where they got the data from or whether it was trained on copyrighted material. What about a Chinese movie studio? I don't think they care as much. I mean, IP is just looked at differently there than how we treat it in the U.S. How long do you think before we see a major feature film that acknowledges a use of AI, other than The Brutalist? I mean an AI-generated character or background, actual pixels on the screen. It's coming up. Everything Everywhere All at Once was open about it; they used Runway for some of their effects. Oh, really? For in-movie use, or previs? I'm pretty sure some of the effects are in the movie. Final pixel. Okay. Yeah, I just think it was more of a novelty back then, and it wasn't as contentious as it is today. They got away with it. And it was a weird movie. I'm talking about a major studio with a major film, like a Captain America. That's probably not going to happen anytime soon, and if it does, they probably won't talk about it, especially after The Brutalist. Yeah. Which, speaking of The Brutalist, we have our first correction to issue. A user on YouTube pointed out that we incorrectly said Respeecher was used to translate the voices into English on the Lex Fridman podcast. It was ElevenLabs. Sorry about that. And I went on Respeecher's X account, and there was a tweet with Lex Fridman's face and Zelensky's face, which was misleading. So they did participate in that conversation, but it was not their software that was used for the translation; it was ElevenLabs. Which does similar stuff. Yeah, and ElevenLabs is great too. I feel like Respeecher has focused more on the movie production side, while ElevenLabs is more of a consumer-facing interface. You know, side topic: there is also an AI company called Twelve Labs. Yep, I know. We've talked to them a few times. Totally separate, different product: video identification, tagging, categorization. They're actually cool, because they train on actual videos. A lot of models, when you're trying to categorize videos or do sentiment analysis, are just trained on the transcripts of the videos and not on the images themselves.
And they have their own models that are trained on the videos. Have you seen the Eyeline research paper on what they call Go-with-the-Flow? Oh, is this the Netflix one? Yeah, Eyeline is now part of Netflix. A bunch of geniuses under one roof. Like a research lab, or...? They're a VFX studio, but they also have a very strong research side. If you know who Paul Debevec is, he is a legendary VFX artist and researcher, a really nice guy, I see him at events all the time, and he is their chief research officer, I believe. So, just like Disney Research in Zurich, Eyeline Research will drop some knowledge on us once in a while, and this one was a really useful innovation for our space. The problem we now have going from image to video is that the motion tends to be inconsistent. They solve for it by using both optical flow and something called warped noise. Optical flow is the more old-school part; you have optical flow built into Premiere and Resolve. So when you want to do slow motion, how does it figure out the interpolation, the new frames you're creating? I don't think it's using any AI there; it's old-school computer vision. It detects features across two frames and then fills in between, because it knows where the major features are. So optical flow, nothing new, I don't think. Warped noise is the really cool part. Let me take a step back. Diffusion models like Stable Diffusion are built around noising and denoising; I think you know that. When a model is trained, you give it an image and you add noise to it, and the neural network learns how to remove the noise without removing the image. You do this millions and billions of times, and that neural network becomes a denoiser on steroids. So feeding noise along with your data is basically food for the neural network. Now they've fed it a new type of noise called warped noise. If you look at a single frame of that noise, it just looks like noise. You look at the next frame, it just looks like noise. But if you play it back as a video, you can see 3D features in that noise moving around. So if there's a coffee mug in the warped noise and you play the video, you'll see the coffee mug in there. Kind of like those Magic Eye posters, where the pattern hides a 3D image. I got so many headaches from those as a kid, but yes, exactly that. To the average eye it just looks like noise, but when it plays back as a video, you're like, oh my God, I can see features. How they generate that warped noise, I don't really understand, but they did it. Then they feed that warped noise along with a very rough video guide, like a cat moving down a tree, and the result comes out so much better than it would without the warped noise. It's brilliant. So brilliant. I know they had a paper. Is it actually available, though? Because that's the catch for me, always: I see these things come out, I see clips on X and a paper, and I'm like, okay, how do I actually use that?
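As a rough illustration of the warped-noise idea described above (and explicitly not Eyeline's actual Go-with-the-Flow method, whose noise-warping algorithm is its own contribution), here is a sketch that generates one noise field and then drags it along the optical flow of a guide video using OpenCV's classic Farneback estimator, so each frame's noise is a warped copy of the previous one instead of a fresh sample.

```python
# Sketch of temporally consistent "warped" noise: advect a single noise field
# along the optical flow of a guide video. Illustration only -- this uses
# classic Farneback flow, not Eyeline's published noise-warping algorithm.
import cv2
import numpy as np


def warp_noise_sequence(frames: list[np.ndarray], seed: int = 0) -> list[np.ndarray]:
    """frames: list of HxWx3 uint8 BGR guide-video frames. Returns per-frame noise."""
    rng = np.random.default_rng(seed)
    h, w = frames[0].shape[:2]
    out = [rng.standard_normal((h, w)).astype(np.float32)]

    # Pixel grid used to build the remap() lookup tables.
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))

    prev_gray = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Backward flow (current -> previous): where each new pixel came from.
        flow = cv2.calcOpticalFlowFarneback(gray, prev_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        map_x = xs + flow[..., 0]
        map_y = ys + flow[..., 1]
        # Pull the previous noise field along the guide video's motion.
        # Nearest-neighbor sampling avoids smoothing the noise statistics.
        warped = cv2.remap(out[-1], map_x, map_y, cv2.INTER_NEAREST,
                           borderMode=cv2.BORDER_REFLECT)
        out.append(warped)
        prev_gray = gray
    return out
```

In a full pipeline, noise like this would presumably be generated at the diffusion model's latent resolution and passed in as the initial latents for each frame, which is what nudges consecutive generated frames toward consistent motion.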
I would imagine that's a Netflix competitive advantage in the war to have the best AI tools. Although Disney Research does drop papers that lay it all out, like that big age-alteration paper last year, the de-aging and aging of faces. I remember that one. Yes. And Eyeline Research did publish the whole thing here, and I think if you have enough understanding of building AI systems, you could probably put it together. But I bet where you'd get stuck is generating that warped noise, which sounds like a model they trained. Yeah. Because you could probably use a ControlNet on top of an existing model, like you can put a ControlNet in front of Flux, feed the warped noise and the video to it, and have it generate the nice result. But then how do you generate the warped noise? Cool. All right, we'll end on a mystery, a bonus story with a mystery ending. Yeah. Let's just take a step back: in our world of M&E, I think it's really cool to see that studios have big brains working on tough problems. They're not just hanging back and letting OpenAI do all the problem solving; they're actively doing a lot. Yeah, and I'm sure there's so much happening inside these studios that we just don't know about. It's their competitive advantage; they don't need to share it. And it's also so sensitive right now, with everything going on politically. Yeah. And they're not trying to build consumer-facing tools; they're trying to build competitive tools internally, trained on their own, better data. How many thousands of hours of footage does Netflix have that they own or have licensed? Right. They can build something kick-ass, all internal. All right, I think that's a wrap. Cool, Addy? Yeah, man. All right, guys, thanks for watching. I'll catch you in the next episode.