Denoised
When it comes to AI and the film industry, noise is everywhere. We cut through it.
Denoised is your twice-weekly deep dive into the most interesting and relevant topics in media, entertainment, and creative technology.
Hosted by Addy Ghani (Media Industry Analyst) and Joey Daoud (media producer and founder of VP Land), this podcast unpacks the latest trends shaping the industry: from Generative AI and Virtual Production to hardware and software innovations, cloud workflows, filmmaking, TV, and Hollywood industry news.
Each episode delivers a fast-paced, no-BS breakdown of the biggest developments, featuring sharp analysis, under-the-radar insights, and practical takeaways for filmmakers, content creators, and M&E professionals. Whether you're pushing pixels in post, managing a production pipeline, or just trying to keep up with the future of storytelling, Denoised keeps you ahead of the curve.
New episodes every Tuesday and Friday.
Listen in, stay informed, and cut through the noise.
Produced by VP Land. Get the free VP Land newsletter in your inbox to stay on top of the latest news and tools in creative technology: https://ntm.link/l45xWQ
Runway AI Tips, Creator Economy Studios, and Parallel AI Processing
Tips for more control over AI outputs, a look at the big studios being built by YouTube creators, and a new type of neural network.
I guess the idea is not anything new, but it's all in the quality and execution. The main thing is control. Yeah. Control over your AI output, what you want, and making sure the AI actually follows it. In this episode of Denoised: tips and tricks for using Runway for better AI image creation, the newest studios being built by the creator economy, and Continuous Thought Machines, a new type of neural network. Let's get into it.

All right, what's up Addy? How you doing? Hey, good, good. Welcome back. Yeah, you survived the LA heat wave. I quite enjoyed it, other than the fact that the air conditioner was not able to keep up. But hey, we pushed it to the limit. Maybe it's a warning sign for summer coming, a little test you've gotta check out. I'll just say: prepare for summer during the summer. And don't train AI models on your GPU; it'll add to the overheating of the home.

All right, so speaking of AI and training (well, not even training): I've been collecting interesting things I've been seeing on X, more specifically about Runway and their new References feature, which gives you a lot more control over AI image outputs, which we've talked about before. I've been seeing some interesting tricks, tips, and hacks, and they did improve the model for this. The first one is a series that's been coming out from Cristobal, the CEO of Runway. Oh, we know Cristobal here. Cristobal is recurring. We need a Cristobal theme song, or a counter: every time we say Cristobal, we put a penny in the jar.

So the stuff he's been posting is basically very crude, basic sketches. Not even sketches; you could just go to Microsoft Paint and draw some boxes of how you want your image laid out, add some labels, and then give that as a reference image to Runway, along with some characters or a scene, and have it create the image and mostly honor the layout you wanted in the reference image. I guess the idea is not anything new, but it's all in the quality and execution. The main thing is control: control over your AI output, what you want, and making sure the AI actually follows it.

The only two that I've really seen do this well are Runway, obviously, and ChatGPT. The image generation in the chat interface has been excellent. You can give it reference images like this as well, and it'll give you the output, and the chat interface works well with that, because if it does something incorrect you can say, no, tilt the camera down, and it does a new image and readjusts the angle. That's been the most conversational back-and-forth control I've seen out of any of the models. The issue with ChatGPT is that the image quality is still a little synthetic; it has that AI vibe. Runway probably has the best photographic image quality output. I've just personally had some hits and misses trying this technique.
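If you want to poke at References outside the web UI, Runway does have a developer API. Here is a rough sketch of what a layout-sketch-plus-character call might look like; the endpoint, version header, model ID, and field names are recalled from Runway's docs and are not verified here, so treat this as pseudocode and check the current API reference before using it:

```python
import requests

# NOTE: endpoint, header, model id, and field names below are assumptions
# from memory of Runway's developer docs; verify before use.
resp = requests.post(
    "https://api.dev.runwayml.com/v1/text_to_image",   # assumed endpoint
    headers={
        "Authorization": "Bearer YOUR_RUNWAY_API_KEY",
        "X-Runway-Version": "2024-11-06",              # assumed version string
    },
    json={
        "model": "gen4_image",                          # assumed model id
        "promptText": "two characters at a diner table, following the layout sketch",
        "referenceImages": [                            # assumed field name
            {"uri": "https://example.com/layout_sketch.png"},  # MS Paint-style boxes
            {"uri": "https://example.com/character.png"},
        ],
        "ratio": "1920:1080",
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json())  # returns a task id; results are fetched asynchronously
```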
Okay, walk me through your workflow and how you went about it. So I wanted to push this to the limit and also bring in something we've talked about a lot: camera control. Being able to actually use a camera and frame my angle, and then asking, can I use that as the input? So I used some building blocks. Literal building blocks. Are we gonna show the audience? Yeah, we'll show these images in the video.

Okay, my first question is: why not use Legos, or something else, or humans? The practical answer is we did a lot of product photography over the years, and a lot of it for children's products, so I just have a lot of random children's props, like wooden building blocks and wooden cubes. I just used what I had, but yeah, Legos would probably make the most sense. The one challenge with Legos, though, is scale. Scale, yeah. And focus. I was using my phone, and even with the building blocks it's kind of hard to get in there and get the focus, because we were basically doing macro photography.

So I took a shot of building blocks and I kind of blocked it. I blocked the blocks. It was an over-the-shoulder shot, and I used wooden blocks with letters as the character heads. I was thinking, okay, maybe I could actually tell it block D is this person and block J is this person. So I gave it this photograph I took as the reference image for how I want the shot composed, and then I gave it two images of two different people that I wanted to drop in as the characters. With Runway, I gave it all these images (you can do three images with Runway), and it really didn't adhere to what I was doing. Yeah, I can see that here. Kind of not even close. No. And the shot you're looking at is one output, but I ran some others where I think it tried to put the building blocks on the table; it didn't really stick with it. So I'll have to do some more experimenting there.

Also, every time you see someone post something like, oh look, AI did this, on X or wherever, keep in mind that a lot of this AI stuff is cherry-picked. Cristobal posts a lot of stuff; is that the first attempt or the tenth attempt? Are you saying Cristobal exaggerates a bit? I'm not saying he exaggerates, just asking: was that a one-shot, one-and-done output he was posting, or was that attempt ten? This is also nothing new for tech companies. They're gonna present the best possible output in front of you to entice you to use it. Absolutely, and that's fine too, just know it. But also, every time you generate, that's credits, and Runway is not cheap. So budgeting-wise, if you need to get this good shot, you might have to budget where it's, well, we might have to spin this thing ten times to get the shot we're looking for, and they're charging you for every one of those. Yep.

Now, I did the same thing with ChatGPT. And this is the first image you generated? Oh my God. So in the first image, it put the two people facing each other, but then it put the building blocks in between them. I gotta say, the building blocks came in pretty solid. It's not a bad image for kind of adhering to everything; it just didn't understand what I was asking.
But the advantage with ChatGPT is that it's a chat interface, so I can have a conversation with it. So I said, no, you misunderstood: the image with the blocks is a blocking reference to compose the final frame, so there should not be any actual blocks in the final frame. And then it created the image. Then I said, put the people in a park, just to give it an environment, and it created the image, but it created the reverse angle of what I was looking for. In my instructions I was saying we should be looking at a woman character over the shoulder of a male character, and it gave me an over-the-shoulder on the male character. Literally sticking to the 180-degree rule; it's literally the reverse angle of the scene. But then I said, no, flip it, and it flipped it and showed me the woman, pretty much the shot as I had blocked it out with the building blocks.

The superpower here is combining a really solid image generation model with a really solid LLM that understands your natural language, and having the iterative power to go through it. And being able to use the tactile part: let me get my camera and frame it up, and having that as the foundation. So does this mean there's a world where, if Runway integrates a really solid LLM and a chat interface, there could be room for improvement on the iterations? I feel like that's sort of what we got teased with in Cristobal's video, the one whose name I keep blanking on, where he's talking to the thing in real time and it's making changes as he speaks. So yeah, that could be the LLM under the hood making the changes in real time. But I would say yes; it seems like that's the direction Runway's going, for sure.

Just a quick nod to the usefulness of what you generated: if you look at your reference blocks versus this reverse-angle shot, the character distance is too close, scale is not really adhered to, and obviously we talked about the AI look versus a more photorealistic look. Is it almost there for final use? Not even close. But for conceptualizing and storyboarding? It looks great. Yeah, this is more than usable. Would I use this in an actual project? No. But would I use this to convey to the DP the type of shot I'm trying to get? Yeah, a hundred percent. And a DP will take this over, like, a crappy hand sketch. 'Cause I'd give them my stick-figure thing, and now it's, yes, it's this. This is another tool in the tool shed.

I do wanna call out a couple other examples that Cristobal posted, 'cause it's not just sketching the frame you want. Some other interesting cases have been sketching an overhead layout, like a top-down view: this should be the geography of everything, and then telling it, I want a wide shot of this. And it understands the overhead geography you gave it. That's so crazy, because it's almost like spatial awareness and 3D world building. Yeah, and that's what they've been trying to train: just being able to understand the 3D world. Right.
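The blocking-reference workflow described above can also be scripted rather than run in the chat UI. A minimal sketch against OpenAI's image API, assuming the gpt-image-1 edit endpoint accepts multiple input images; the filenames and prompt are placeholders standing in for the actual experiment:

```python
import base64
from openai import OpenAI

# Sketch of the blocking-reference workflow. Assumes gpt-image-1's edit
# endpoint accepts a list of input images; filenames are placeholders.
client = OpenAI()
result = client.images.edit(
    model="gpt-image-1",
    image=[
        open("blocking_reference.jpg", "rb"),   # the wooden-block layout photo
        open("character_d.jpg", "rb"),          # person to map onto block D
        open("character_j.jpg", "rb"),          # person to map onto block J
    ],
    prompt=(
        "The first image is a blocking reference only; no blocks should "
        "appear in the final frame. Compose an over-the-shoulder shot of "
        "the second person, seen past the shoulder of the third, in a park."
    ),
)
with open("frame.png", "wb") as f:
    f.write(base64.b64decode(result.data[0].b64_json))
```

The practical difference from the chat UI is just that corrections ("no, flip it") become follow-up API calls instead of chat messages.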
I feel like there's another one I saw too. Oh yeah, someone reposted this one. It was an interesting, I guess, sort of hack too, because you're limited to uploading only three images in Runway. But someone uploaded an image with the overhead layout of how people should be placed, and then they uploaded an image that had four people in the single image. So they kind of cheated, but it understood all four people. They were celebrities, so there's obviously a lot more training data on them, but it understood all four people in the image and created a pretty accurate layout of the image based on an overhead view and a character sheet of four different people. That's amazing. Yeah, very cool, right? This stuff is just coming at us so fast, we're having a hard time just pausing and appreciating how good it's getting.

What I'm gonna be curious about: this upcoming weekend is the Cinema Synthetica AI competition, which we documented last year. It's like a pre-event; the films will get played at the AI on the Lot event, which is happening in a few weeks. Last year it was really interesting to see, and that was sort of my first deep dive into a lot of AI creators' work, and in just a year I'm really curious to see how this has changed and the quality of the output. For sure. You want to go back to last year, look at a film, then look at a film from this year, and you're like, oh my God, there's a big delta. I don't think they're gonna do it, but I'd love to see them revisit even a couple shots from last year and just rerun or redo them with some of the tools today. How much better would it be, or how much different would it look? Yeah, I'd be curious about that.

Okay, and then a couple other things not tied into the References feature, but other interesting tips and hacks I've been seeing people post about, specifically for Runway. One person figured out that in your prompt you can just say to bisect and subdivide your video into four quarters. So you're making one video generation, but the video output is four quadrants. Quadrants, yeah. You're kind of getting four outputs in one. And the advantage is that because it's the same generation, it's much more consistent in characters and physical space. It's sort of what we talked about a while ago with that hackathon Dylan did, where he would generate a bunch of stuff in the same generation so it has the same awareness of the 3D space, the AI space, that's happening. But you can kind of do that in Runway.
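On the quadrant trick: once you have a four-up generation back, cutting it into four separate clips is straightforward with ffmpeg's crop filter. A quick sketch in Python; the filename and the 1280x720 source resolution are placeholders for whatever your generation actually is:

```python
import subprocess

# Split one 2x2 "quadrant" generation into four clips with ffmpeg's
# crop filter: crop=width:height:x:y. Filenames/resolution are placeholders.
SRC = "quadrant_generation.mp4"
W, H = 1280, 720   # adjust to your actual output resolution
for i, (x, y) in enumerate([(0, 0), (W // 2, 0), (0, H // 2), (W // 2, H // 2)]):
    subprocess.run([
        "ffmpeg", "-y", "-i", SRC,
        "-filter:v", f"crop={W // 2}:{H // 2}:{x}:{y}",
        f"shot_{i + 1}.mp4",
    ], check=True)
```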
I think a lot of it has to do with the seed of the noise. The noise itself is unique every time, so having that same seed proliferate through each of the four generations will at least get you in the same ballpark. Yeah, I haven't messed with that either, because seed is a big thing too. If you find some stuff that works, lock off your seed and give it the same reference images with Runway References. Yeah, that would work.

There's another dial called CFG, or Classifier-Free Guidance, which is basically a dial telling the AI system how much it gets to free-will hallucinate versus how much it needs to adhere to your prompt. Oh, so like creativity versus adherence. Yeah, that's the CFG dial. I don't think Runway gives you that option; maybe it's wrapped or packaged into another number. Maybe if you're using it through their API you can set it, but on their web interface I think it's just the seed. Sure. Oh, so it does have awareness of a seed ID? Yeah, in the web interface you can override the seed and lock it off. Right. Theoretically speaking, if you give it the exact same seed, the exact same CFG, and I think three or four other variables, you should be able to get the exact same result. I don't know, I'm not an AI expert, so I'm not sure if that's always true. But there's an advantage sometimes to locking the seed off to try to get more consistency in the outputs, if you're trying to do consistent characters or spaces.

This could certainly be useful. If you're trying to generate a ton of stuff and you need it all to conform to this world you're trying to build, this will certainly cut down your time by 4x. Right. I mean, your video output will be very small, because it's the video, which is already like 720-ish HD, split into four, so you're gonna be having a standard-def output. But I think it's more for visualization or ideation. Well, push it through an upscaler, push it through whatever you need in the post process.
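To make the seed and CFG dials concrete: Runway's web UI only exposes the seed, but with an open model both are explicit parameters. A minimal sketch using Hugging Face diffusers (the model ID and prompt are just examples); classifier-free guidance blends the unconditioned and prompt-conditioned predictions, roughly noise = uncond + scale * (cond - uncond), so a higher guidance_scale means tighter prompt adherence:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

gen = torch.Generator("cuda").manual_seed(1234)   # lock the seed
image = pipe(
    "over-the-shoulder shot of a woman in a park at golden hour",
    guidance_scale=7.5,   # the CFG dial: adherence vs. "free will"
    generator=gen,
).images[0]
image.save("locked_seed.png")
```

Rerunning with the same seed, prompt, and settings should reproduce the same image; changing only guidance_scale shows the adherence-versus-creativity trade-off directly.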
Another interesting use case I was ideating on this weekend with a friend. He's a pretty legit brand photographer, so he does corporate videos and things like that, and he was asking me, hey, one of the things I'm trying to figure out is: with AI, can I replace a traditional time-lapse workflow? Because, as you know, you do a lot of photography, and a time-lapse setup is just a completely different beast from your video setup. Yeah, I went down the time-lapse rabbit hole a number of years ago. Especially if you're doing sunrise or sunset, you have to deal with exposure, you have to leave that thing there and make sure it's not touched, battery life, all this stuff. It's complicated to get a good time-lapse result. He's like, what if I feed it a few images throughout time and have the AI generate the time lapse? Oh, so like a bright scene, a dark scene, same location, locked-off shot, golden hour, and then just have the thing make the time lapse for you. Yeah. Okay, so that'd be an interesting experiment, and maybe one of our viewers can figure it out for us. This got me thinking too; maybe I'll try it. My building has a good balcony, so maybe I'll try to do some shots. All right, now I'm thinking I'm gonna test this out. So had he done this yet, or is he thinking about it? No, he's just very new to the world of AI, so he's naturally asking questions.

Yeah, because this would also be a good case for the first/middle/last feature. Currently first/middle/last is still only in Gen-3 with Runway; they haven't rolled it out to Gen-4, the newest model. So yeah, I'm gonna give this a shot. Okay, let me know how it goes. I'll try this out. All right, cool.

So yeah, these are a good grab bag of tips and stuff. If anyone has found anything that's worked well for you, let us know in the comments. And I'm gonna keep experimenting with my building-block blocking attempts, add more things. I'd like to try three or four people. And I took some shots with some toy cars, which I haven't tested yet, but I was gonna see if I can make some Fast and Furious kind of shots. Yeah. I did this experiment just a couple of months ago where I built an entire city in Unreal with just cubes and planes, no shaders, and then I fed that into a ComfyUI workflow, and I was getting really solid results. Just images, or video-to-video? Images. I was using something called ControlNet. Within ControlNet you can have different types of control, and I believe the one giving me the best result was a depth map. It was turning the Unreal render into a depth map, which was guiding the inference. Okay. Yeah, I remember a while ago I saw a video from Bilawal, who did a kitbash in Blender, or in Unreal, of a bunch of city-scene stuff, built the city, then did some camera moves and ran that through video-to-video with Runway to make it a cyberpunk scene. But the advantage is you get all your shapes, you get camera movement, and you're getting the rendering, the photoreal rendering, which is the hardest part out of any 3D engine. Yeah. Good luck building all of the individual shaders, lighting it like a lighting artist, and then having the CPU and GPU to render all of that.
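For anyone who wants to try the depth-map-guided approach outside ComfyUI, the same ControlNet idea is a few lines in diffusers. A sketch, assuming you've already rendered a depth pass (from Unreal or anywhere else) to depth.png; the checkpoints named here are common public ones, so swap in whatever your own workflow uses:

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Depth-map ControlNet: the depth image pins down composition and camera
# angle while the prompt drives the look.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

depth = load_image("depth.png")   # e.g. a depth pass rendered from Unreal
image = pipe(
    "cyberpunk city street at night, neon signs, rain, photoreal",
    image=depth,                   # the ControlNet conditioning input
    num_inference_steps=30,
).images[0]
image.save("styled_city.png")
```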
Next story is a Hollywood Reporter piece that was going around about the rise of new studios being built by creators, YouTube creators, the creator economy, which we've talked about many times. We've covered the creator economy here quite a few times, and we're gonna continue to track it, because Hollywood is evolving to incorporate the creator economy into the fold. It gets treated as a separate thing, but more and more this is just gonna overlap and blend in. The big creators, their agents are from William Morris or UTA or CAA; these are legit. Also, the audiences for some of these channels, the brand awareness, is bigger than network TV in some cases. Yeah, like network TV stations.

So this story: one of the examples was Dhar Mann, who I have heard of but don't watch; we're not in his demographic. Dhar Mann's got 25 million subscribers. No small feat, right? Which is also crazy, because I don't think you'd even heard of him before this started rolling. I have heard of him. You heard of him? Okay, I've just never seen his content before. Yeah, it's very optimistic, feel-good kind of stuff.

I would say it's a cross between UGC and Nickelodeon. And Nickelodeon's a good example, because that's what the article is about: these creators are in the realm of where new studios are being built. These YouTube creators are building full-on sets and sound stages and backlots and stuff for their videos. Yep. It's the idea, or the curiosity, of: are they gonna be the new Hollywood creators, or is Hollywood gonna adapt to YouTube? It was mentioned a few times in the article; the analogy is, this is the new Nickelodeon, or this is the new Disney Channel from the 90s. It's that type of content, replacing that realm. I don't think this is gonna be the next Mission: Impossible out of YouTube creators. Maybe, I dunno, maybe in five, ten years. Nah, I don't see it. I don't see it being that type of tentpole film release, or that they would even want that. But I do see it being the replacement for mid-level linear network content. Daytime TV. Yes, I totally see that. In some cases kids' TV, in some cases maybe more like Discovery Channel or TLC back when it was educational. That quality, that budget. They even said their budgets here are more than Nickelodeon shows. Or a lot of the truTV stuff. Like, what are those? Impractical Jokers. Right, that kind of content, which kind of felt UGC at the time, before UGC. I could totally imagine somebody like Dhar Mann, or one of these big creators, having an umbrella to do that with.

The other thing is, one of the big famous examples is MrBeast, who has this big studio operation in Greenville. He built out pretty much a massive entire backlot in Greenville, North Carolina. Yep. And there's several big studios, studio infrastructure, in Louisiana; my guess is because of the tax incentives. For creators, or just studios? Both. One of the other creators the article focuses on is Alan Chow, who is doing a show, Alan's Universe. I'm probably butchering the name; I have not heard of him at all. But he's building a studio, and the analogy he used for himself was similar: Nickelodeon-type shows. It just goes to show how massive the creator economy actually is: although you and I consume so much YouTube, we still don't know all the big players. It's so massive. And I remember an interesting point here: do you remember there used to be YouTube Rewind? Yeah, a video YouTube would do every year that would include all the big creators, a big, just kind of cool video. But then they stopped doing it, and I think it was MKBHD or someone who pointed out that part of the reason was YouTube got so big, and is so personalized to so many different interests, that when it was smaller there were central, main big characters, but now it's so massive that someone can have millions of followers and you've never heard of them. Never heard of them. Yeah, it's just not your lane, not your interest.
Exactly. So it's dialed into different niches; it's basically a huge universe. Yeah. I think everybody sort of knows the big creators from OG YouTube: MKBHD, iJustine, PewDiePie, Casey Neistat, all those big names that were huge when YouTube first launched, and they're still around. But now all it takes is a good few years of grind and hustle and you've got an audience. Right.

I think that ties into another question, too, because a lot of channels blew up when YouTube added Shorts, and they were pushing Shorts hard in like 2020, 2021. So for a lot of channels, a Short goes up and they get like a million followers, and that would have taken years before; now a couple of Shorts can blow it up. But there's no character recognition, there's no brand; it goes into the bigger branding question. So is it purely the numbers? The numbers are important, but it seems easier (not that it's easy, but easier) if you play the game right or tap into the right type of interest. There's a whole separate world of faceless YouTube creators making these documentary-style videos with millions and millions of views. But there's no brand, there's no character. Longevity is optional, right? You could still make ginormous revenue and have a giant fanbase; they make videos, they get the ad revenue, and it can be a profitable business.

But if you want to conform to traditional Hollywood branding and things, I think Dude Perfect is a perfect example, no pun intended. And it ties back into this article: they built a huge hundred-million-dollar studio in Dallas, and I think they have some sort of deal with a major streamer, or some show development. Exactly. A lot of these creators do have deals and shows, or are talking about it. So if you want to go the quote-unquote traditional route of channeling your brand and your audience into Hollywood, there's certainly a way to do that; MrBeast has shown us that. Or conform Hollywood to you. Because I don't really think turning your brand into Hollywood is the goal anyway. MrBeast got a deal to fund his show on Amazon and reach a new audience, but he probably would've been fine without it. And he kind of conformed them to fit his style: okay, give me a bunch of money, I do the show, and we level up what we were doing before. But it's still, this is where I'm at, at a hundred million dollars; if you're not there, let's not even talk. It's not, oh, let me adapt this to fit, to make it a Double Dare kind of Nickelodeon show. No, it's gonna fit my style, with this money. That's the only reason it would make sense, because I could do this without you. For sure. We've been doing it without you.
Going back to Dude Perfect: they built the studio, and it also said there were talks to eventually add a virtual production set. Oh really? Into their studio, yeah. You know, I think there were talks to have MrBeast add a VP stage in the North Carolina facility; this was a couple years back. I don't think it happened, or maybe it did and he took it apart. Who knows. The thing I find so interesting with YouTube content creators, people like Dhar Mann or MrBeast's Jimmy, is that they've figured out how to make content as efficiently and with as little of a tech lift as possible. So for them to add the ginormous tech lift that is virtual production is probably unlikely. They'll just figure out how to make the thing they wanna make, or wait until it gets easier. I mean, it depends what you're trying to use it for. Like Mrwhosetheboss: he has a Vū system, and he's not doing it for elaborate sets; he's doing it because he wants cool changing backgrounds for his talking-head videos and stuff. But somebody who's so tech-forward, an expert in this domain in M&E, like MKBHD: he went to the ZeroSpace facility in New York, and I remember that was a big deal, but he still doesn't have a VP stage in his own studio, right? Yeah. He totally gets it; it just maybe doesn't fit with the style of what they're doing. With him and with MrBeast, it's different visuals, different setups every time.

Speaking of which, on the creator economy, I'm still not seeing a big convergence with generative AI yet. A lot of these big YouTubers are still not exploiting all of the goods we have today. You don't see insert shots that are generated with AI. Really? No. I think they probably have the same quality bar as traditional M&E and Hollywood studios, where either the quality's not there yet, or they're using it, like we've talked about, to help with processes, some under-the-hood thing, but not actual final pixel. Right, it's not at the point yet where it can do final pixel. The one thing they have been having issues with is other people ripping their face and likeness and creating UGC-style ads of them hyping up products they have never heard of or endorsed. I think that's been the bigger issue for them: fake testimonials and stuff. Because obviously there is a lot of training data available of them specifically, so it's pretty easy for people to rip their stuff and turn them into an AI chat bot, a voice bot. Yeah. And what's the creator you talk about here and there, who goes and does something for 30 days? Oh, Michelle Khare. Yeah. With her as an example: these creators have figured out the thing that they make, and they make it so well that they really don't need to deviate and try traditional content, per se. They've perfected this formula. Right.
And she's the example I've mentioned before, where she's had the talks with traditional TV studios, the let's-adapt-your-thing-into-a-network-show talks. And then it was a lesser product than what she's already doing on YouTube, for more headache, and it was like, we don't need to do this. Yeah. And this also ties into the creator upfronts, which I think did happen. Same idea: creators big enough that they have a content schedule for the year, they know what they're gonna produce, and they can sell it to advertisers. And advertisers are getting more and more interested in this type of content.

Also, another side of this article was that people at Dhar Mann's company came from Disney and Lionsgate; his CEO was previously president of MTV. Oh wow. And Alan Chow, who I mentioned before, his casting director was from Nickelodeon. So there's a lot of overlap, or movement, of executives and people who work behind the scenes in traditional M&E moving over to these creator companies. Yeah, I find that fascinating, because MrBeast said the complete opposite. He said people from traditional studios just don't conform to the type of organization he wants to run, something to that effect. I'd be curious what the timeline was when he made that comment, because I feel like once these companies reach a certain maturity level, you need mature execs. The adults in the room. Yeah. And I'd be curious too with MrBeast, because I don't know who runs Feastables, but I'm sure it's someone from the food and beverage world who has the knowledge and the expertise. Also, in his doctrine that was leaked, the onboarding doc he wrote a few years ago (it's really good), one of his big things was: hire consultants. He was like, don't waste time trying to figure stuff out; just hire the person who already knows it and take advantage of them. That's so efficient. And unusual to hear, because the instinct is usually, oh, I'll just figure it out myself, why waste the money? But his take is: why reinvent the wheel? So he was a big hire-consultants fan.

All right. We'll obviously continue to keep an eye on this space, but it's always interesting to see the merging of the new creator economy with traditional M&E. It's an exciting space for sure, and in a lot of ways the creator economy is picking up a lot of the slack that we have in Hollywood at the moment. There's a lot of action on that side of the fence: people getting hired, productions being made, for a completely different use case. And then you have vertical videos right in the middle. Vertical soap operas. Exactly. It's gonna tie it all together.

All right, third story. We have a small AI company in Japan called Sakana. Sakana is not well known; they're not associated with the bigger Japanese brands like Canon or Sony or whoever.
And obviously, with a lot of Japanese companies, that stuff doesn't really make it across the pond here. I came across this and thought it was fascinating. They do a lot of thought leadership in the AI space, and some of the stuff is really theoretical rather than an actual product you can use, like Runway. One of the things they've concepted is the idea of Continuous Thought Machines. This is, I guess, a really nice fancy name to attach to a neural network that has incorporated explicit timing.

So, in the world of computers and GPUs we have a clock, right? Like Intel used to advertise clock speeds, so many gigahertz or whatever. Within one second, this many computations happen, and there's a clock internally that makes it so. The world of neural networks, by contrast, feels very much asynchronous. You have a bunch of layers: your prompt and your input go in on one end, it progresses through the layers, depending on how complex it is, and on the other end you get your output. You put your references in, you prompt it, boom, boom, boom, layer, layer, layer, and then an output.

But that's not how our brain works. Our brain works by generating different, interdependent thoughts at the same time, and because they happen at the same time, they work together to form a bigger thought, combining the elements. Like, if you're driving, a portion of your brain is processing the road and what's around it, another portion is probably thinking about where to go, the direction, the map, the GPS, and a third portion is probably thinking about safety and keeping you at the speed limit. All three of those combine into the driving decision you're making at however many fractions of a second.

So CTM, Continuous Thought Machines, is really a neural network architecture that makes sure that notion of timing is incorporated into the neural network, so it's not asynchronous; there is a synchronicity after all. I'm not a hundred percent sure how this will improve AI thought processes and AI inference. So is it sort of like, hypothetically, if you're running a large language model, it can think more in parallel? My guess is it can have several different thought processes happening at the same time and then combine them into a super thought process that is far more powerful. And is this different from how the reasoning models work? Because part of the reasoning models is that they do the output but then circle back and question it. Is that still a linear process? I think that is just going backwards in the neural network: typically you have a sequential forward process, and reasoning models also have a backward pass, like a feedback loop. I think of this more as things happening in parallel, like the neurons in our brain triggering different things all over the place. So we're gonna link to this article; go ahead and read through it.
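For the technically curious, here is a toy PyTorch sketch of the core idea: an internal tick loop decoupled from the input, a private model per neuron applied to that neuron's own recent activation history, and a neuron-pair synchronization matrix used as the representation. To be clear, this is a loose illustration of the concept, not Sakana's actual architecture; every size and name here is made up.

```python
import torch
import torch.nn as nn

class ToyCTM(nn.Module):
    """Toy sketch of a Continuous Thought Machine-style loop (illustrative only)."""

    def __init__(self, in_dim, n_neurons=32, history=5, ticks=10, n_classes=10):
        super().__init__()
        self.n, self.h, self.ticks = n_neurons, history, ticks
        # Shared "synapse" mixing the input with the previous post-activations.
        self.synapse = nn.Linear(in_dim + n_neurons, n_neurons)
        # Private per-neuron weights over that neuron's pre-activation history.
        self.neuron_w = nn.Parameter(torch.randn(n_neurons, history) * 0.1)
        # Readout from the flattened neuron-pair synchronization matrix.
        self.readout = nn.Linear(n_neurons * n_neurons, n_classes)

    def forward(self, x):                       # x: (batch, in_dim)
        b = x.shape[0]
        hist = torch.zeros(b, self.n, self.h)   # rolling pre-activation history
        post = torch.zeros(b, self.n)           # current post-activations
        trace = []                              # activations across ticks
        for _ in range(self.ticks):             # internal clock, not data steps
            pre = self.synapse(torch.cat([x, post], dim=-1))
            hist = torch.cat([hist[:, :, 1:], pre.unsqueeze(-1)], dim=-1)
            # Each neuron privately weights its own history ("parallel thoughts").
            post = torch.tanh((hist * self.neuron_w).sum(dim=-1))
            trace.append(post)
        t = torch.stack(trace, dim=-1)           # (batch, n, ticks)
        t = t - t.mean(dim=-1, keepdim=True)
        sync = t @ t.transpose(1, 2) / self.ticks  # who fires together over time
        return self.readout(sync.flatten(1))

model = ToyCTM(in_dim=16)
logits = model(torch.randn(4, 16))   # shape (4, 10)
```

The point of the sketch is the timing: the output is read from how neuron activities correlate across internal ticks, rather than from a single feed-forward pass.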
This is a little theoretical and a little too technical for me, but the reason I bring it up is that we're at the very early days of neural network architecture, and something like this can potentially change the way future models are built. Completely. Yeah. We had DeepSeek, not come out of nowhere, but kind of just pop onto the grid and say, hey look, we figured out another way to do this stuff, in a totally different way.

Yeah, if you go back to the early days of compute, like the 80s and 90s, Intel was making CPUs and we thought that was the way computers were gonna be made. CPUs are really good at doing very large computations, but sequentially; it's gotta go in order. There's still some notion of parallel pipelining, and the way they do it is to just add one CPU on top of another, a bunch of CPUs running together. Then Nvidia comes out in the 90s, Jensen, much younger than he is today, and completely changes the game with GPUs. GPU computation, the math and the underlying silicon, is completely different. Fast-forward to today: what's actually benefiting us the most is GPU architecture, right? All of modern CG runs on it. Computer graphics, AI, everything is running on GPUs. GPUs are really good at taking smaller computations and having millions and billions of them happen at the same time. So I come back to something like this, which introduces a completely new way to make the thing that powers AI. Maybe this is something we'll look back on and say, yeah, that's when the path diverged. And you heard it here on Denoised.

Your car comment reminded me: this morning I saw someone getting into a road-rage honking battle with a Waymo. Ouch. That is a battle you cannot win. I kept hearing honk, honk, honk, and then I see the car pull up next to the Waymo. I don't know if they didn't know what a Waymo was, but I think I saw them look inside and realize there was no driver, and then they stopped. This person must have had no idea that was a Waymo to begin with, because you can spot a Waymo from a mile away with the lidar and the sensors. Yeah, they're very conspicuous; you can tell what it is. I'm just like, oh, this is gonna be the future: arguing with an AI robot that is completely unaware of you. Dude, you just got a physical glimpse into the future. That's so awesome.

All right, good place to wrap it up. Links for everything we talked about, as usual, at denoisedpodcast.com, and give us a comment on our YouTube videos; we'd love to see you engage over there. Thanks a lot everyone. We'll see you in the next episode.