Denoised
When it comes to AI and the film industry, noise is everywhere. We cut through it.
Denoised is your twice-weekly deep dive into the most interesting and relevant topics in media, entertainment, and creative technology.
Hosted by Addy Ghani (Media Industry Analyst) and Joey Daoud (media producer and founder of VP Land), this podcast unpacks the latest trends shaping the industry—from Generative AI and Virtual Production to hardware and software innovations, cloud workflows, filmmaking, TV, and Hollywood industry news.
Each episode delivers a fast-paced, no-BS breakdown of the biggest developments, featuring insightful analysis, under-the-radar insights, and practical takeaways for filmmakers, content creators, and M&E professionals. Whether you’re pushing pixels in post, managing a production pipeline, or just trying to keep up with the future of storytelling, Denoised keeps you ahead of the curve.
New episodes every Tuesday and Friday.
Listen in, stay informed, and cut through the noise.
Produced by VP Land. Get the free VP Land newsletter in your inbox to stay on top of the latest news and tools in creative technology: https://ntm.link/l45xWQ
2026 Predictions: What's Coming for AI Filmmaking
Addy and Joey dive into their tech predictions for 2026, categorizing forecasts from "super confident" to "long shots." They explore the future of AI tools like ComfyUI, real-time video generation, and VFX-focused models. Will we see feature-length AI-generated cinema or neural renderers integrated into major production tools?
--
The views and opinions expressed in this podcast are the personal views of the hosts and do not necessarily reflect the views or positions of their respective employers or organizations. This show is independently produced by VP Land without the use of any outside company resources, confidential information, or affiliations.
All right. Welcome back to Denoised. This episode we're gonna talk about our 2026 AI predictions. Yes, sir. Yeah, there's a lot going on that we will completely regret saying when we check this out in a year. Let's get into it. All right. Hello, Addy. Hey, I don't know when this will be out, so maybe I'll say a fake Happy New Year's, or an upcoming Happy New Year. Yeah. We're in a very rainy season here in LA, and I feel festive, and I thank you for doing 80-plus episodes with me. Oh yeah, it's been great. Yeah, man, it's been good. Okay. And here's to 80 more, or a hundred more, or 800, man. Yeah. We're not turning our podcast into an AI-generated podcast. No. Not yet. All right. So I've got a board you can follow along with on the visuals, but we'll talk through everything here. So we're gonna do our predictions. We each have our own batch of predictions, and we don't know what each other predicted. And we're gonna go through three levels: super confident, like yeah, pretty much a sure bet that this will happen in 2026; very likely; and then long shot. Yep. So yeah, we'll start with the super confident stuff. All right. My first prediction for 2026, which I'll be wrong about, just kidding, I'm super confident about this: ComfyUI will get acquired by someone major. You know, I have something similar. I have that same one, but I have it under very likely. Yeah. So we talked about Invoke and Weavy getting acquired by Adobe and Figma, and, I mean, we've also talked about how we are not entirely sure how Comfy makes money. Yeah. But we also covered in our end-of-year episode how much it's grown in popularity amongst hobbyists and people that are between amateur and professional, or even pros. I remember at a meetup months ago, someone was like, I've been in VFX, and this feels like the next iteration of Nuke.
That's powerful. Yeah, that's a powerful thing to say, 'cause Nuke didn't get there overnight. Nuke is like 20, 30 years into what it is today. Mm-hmm. So the fact that Comfy is being accepted as a tool adds enormous weight and legitimacy to the tool that it is. So I think somebody will acquire it. Maybe it's AWS or Azure, you know, Microsoft, or Google. Oh, you think it'd be one of those? Yeah, because it is a big cloud play. I like their cloud thing; their implementation of it is okay, but I think it could use the touch of a bigger company. Yeah. I mean, my guesses of who might acquire it were Foundry, which makes Nuke. Yeah. And maybe it's like, this is the next iteration. I mean, they spent a lot of time building out Stage, the virtual production tool. Yeah. Which might have been a little too late; they were too late to the party for virtual production. But it's like walking into a New Year's party at like 1:00 AM. Right. Like, oh, what happened? But I mean, I know it solves some issues in the VP world. But, you know, maybe it's like, okay, hey, let's get in on this. Yeah. And it already has a lot of similarities to what Nuke is, and could have really good integration. Sure. Or Autodesk, which makes Maya; they acquired Wonder Dynamics, the AI character animation tool. God, I hope not. It'll just go there to die. Yeah. You know, we need Comfy to proliferate; somebody that truly makes it grow beyond what it is today, not just acquires it and holds it, and that keeps the open-source foundations. Absolutely. Because the thing that makes it so powerful is there's just such a huge community. Yeah. Of people building nodes and plugins and additional features that make it so customizable and powerful. Comfy Anonymous, if you're out there, if you're listening, we would love to have you on the show. Comfy Anonymous, he's like the Satoshi of Comfy. I'm trying to make a Bitcoin joke, but I can't think of it. Satoshi, but for Comfy. He's not so anonymous; he's been on other podcasts. I just don't know his name. All right. Let's see. So I had that as a very likely, so I'm more middle-confident about that one. Let me go with one of my super confident ones. Mm-hmm. Real-time video with AI. Nice. So I think we'll get some, I mean, I know we sort of have Krea-ish stuff, but I think we'll get something better that is very much real time, like a Veo 3, but stuff's happening in real time. Mm-hmm. Where you don't have to sit around for a minute and wait for stuff to generate. But you think that's in 2026, we're gonna get some type of model that's legitimately real time? Yeah. Where you're generating video, you'd see stuff happening, you'd modify it
In real time. Would you, do you think it's gonna be local or cloud-based? Cloud. Yeah. I don't think there's the juice to do that locally. Okay. But yeah, some sort of real-time video. Nice. Okay. For super confident, I think in 2026 we're finally gonna see what I call a VFX model. Think of it as like a Kling O1 on steroids, a Runway Aleph on steroids, one that can actually do real VFX. What does that mean? So, like in this image that I have here: give it a green-screen input from a camera, from your footage. Mm-hmm. And just like a VFX department, you're doing a pull and a push. You're pulling the original imagery, you're modifying it, adding background, relighting, adding costume, removing wires, whatever, what have you, all the VFX stuff, and then you're pushing it back out into your edit, but with the original quality. Yes, with all of that stuff. Because that's the thing, right, we could sort of do that now, but then you're getting this 8-bit HD crushed file. Yeah. And Luma does a little bit of a save on that side with Ray3. Yeah. Yeah, dude, the incoming frames are like 4K, 8K, right? They're huge. Yeah. So you gotta make adjustments at a pixel level and then unify with the original image. Yeah. I mean, I think the closest we've seen now of something that kind of works like this is Beeble. Yes. Beeble will take your footage; it will create your PBR maps and all of the layers for VFX, but you still have to composite it traditionally. It gives you the info and the tools you need, but it's non-destructive. You're not regenerating it. Yeah. You're able to keep your original camera files and work with that. Um, and they also just launched a new plugin. But yeah, Runway Aleph was the first one where we're like, oh shit, you can do real VFX. Then Kling O1.
You're like, oh, this is better. In 2026, we're gonna see one or two models where it's like, okay, it's here. Yeah. It's like, I want a Kling O1 that gives me back the same source material that I gave it. Yeah. Because that's still the weak link in the chain, where it's like, oh cool, I can give it all this stuff and it can make it look cool, but it's taking my 8K footage and compressing it down. I've seen demo videos and demo ComfyUI workflows of some VFX that's done like that. Mm-hmm. But to actually have a productized thing anybody can use, that's the part that is yet to come. But super confident that it'll get here. You think super confident? Yeah. We'll have that here. Oh, one year is such a long time. I know. Yeah, yeah. All right. What else? I got super confident: more control options, just how we work with AI, getting rid of or going beyond text prompts. So whether that is some virtual camera control or a joystick, I mean, this ties into my real-time video thing as well, but just more options, more tools. We kind of saw that a little bit with what Moonvalley was trying to do, but just more ways to control video or outputs. Mm-hmm. That is not just text-based. I think that's a valid assumption for next year. We are already seeing glimpses of that. Right. Beeble, we just talked about. Beeble is non-text-based, right. And Beeble is, yeah, a tool to analyze some footage you shot. But even Adobe Firefly. Yeah. In their new model, they have a feature where you can upload a camera movement: you give it a first frame, you give it your camera footage, and it figures out the camera movement. I had one where it was a very moderate camera move, pushing in and then tilting up. Yeah. And it did that. Great. So stuff like that; if we do stuff like that with real-time video.
Even better. Oh my God, I can't imagine. That's powerful. Yeah. I just want the two of these to be happening at the same time. You can blow up this image if you like; I just generated it on Nano Banana Pro. Okay. I wanted to get the most photoreal person that I could generate. The reason I did that: my prediction, and I'm super confident about this, is we're gonna see an image model that's gonna be able to generate stuff that is visually indistinguishable from reality. Like, no uncanny valley. I feel like we're kind of there, like we're just about there. Exactly. Yeah. You're saying we're 96% there; I'm saying a hundred percent there. Hundred. Yeah, yeah. So if you look at this photo, it just happened to be the right angle, the right person, with the asymmetry in the face and all that stuff that makes somebody real. We're gonna see a model output reality consistently. Okay. Every time. Yeah. It'd probably be Nano Banana Pro too. I don't know. I have a question, or we could bet. So obviously, we know there's gonna be new Veo models. Yeah. How high of a number do you think we'll get by the end of next year? I think we'll have Veo 5. Yeah, you sound like that one guy that's like, oh, Google already has Veo 7, 8, and 9. Okay. Well, yeah, I've seen it. I'm not gonna talk about crap I don't know about, but I'm just gonna guess: obviously we know Veo 4 is gonna come probably the beginning of the year, and then I'll guess maybe a few months later we'll get a Veo 4.5. I think by the end of the year we'll have a Veo 5. I don't know if we'll have a Veo 6; I think every six months a new Veo sounds good enough to me. Yeah. Like, if you look at Unreal Engine versions, it's like one a quarter, something like that. Yeah, they've been shipping a lot. Yeah. So three a year; that's a new decimal each time.
It depends on how aggressively Google invests into Veo. Mm-hmm. You know, if the team gets bigger, they add more researchers, whatever, we're gonna see it faster and faster. Or if they merge it with some other stuff, which is some of my other predictions, right? Oh, okay. One more. In the social ad space, kind of video space, UGC-ish content: AI is just gonna take over, or be a big part of it, and no one's gonna care. You're talking about like TikTok, Instagram? Yeah, like short, vertical ads that you get in vertical content. I'll agree with you on that one. Yeah. I think nobody's gonna care. No one's gonna, yeah. I think it's good enough where you can't, like right now, you get these clearly AI-generated ads and the voice is weird and the performance is weird. I think it's gonna get so good that you can't even tell if it's fake or not: just an AI-generated person, and it flashes by you so fast. Within 10 seconds you've consumed it and you're scrolling up, going to the next one. Yeah. Right. And I don't think people are gonna care. Yeah. It goes hand in hand with my predictions for the photoreal image model as well as the VFX model on steroids. Mm-hmm. I think all those things will play together in giving you this capability. Yeah. You got any more super confident ones? Do you know what this photo is that I'm putting in here? A black hole? No, it's from the end of the last Matrix; it's the AI overlord, it comes as the face of a baby. Which last Matrix, the third one or the fourth one? The third. Oh, sorry, the third one. I only saw that one. Is it Revolutions? Revolutions. The Matrix Revolutions. The reason I put that up there is,
and this is probably obvious to most of our viewers: I don't think we're gonna have AGI next year, but we're gonna have an AI baby. The AI baby is ASI, artificial super intelligence, which we're probably not gonna have for decades. Okay, wait, what? So what is the prediction? My prediction, and I'm super confident about this, is we're not gonna have artificial general intelligence in 2026. So we need like an X on this baby or something? Yes, please. Okay. So the reason I say that, and I'm sure a lot of you are following, there you go, Ilya Sutskever and, mm-hmm, Geoffrey Hinton and some of the big names in the AGI world. Thank you for that. No baby. No baby next year. The architecture we're basing LLMs on is fundamentally not gonna get us to AGI, and we have to invent a new way of building that neural network, which is yet to come. I'm sure in some research labs somewhere in one of those big tech companies, they're experimenting with new architectures. But for that to come to fruition, get trained on billions of parameters, trillions of parameters, then come out as a product everybody can use, that's gonna be way beyond one year. Yeah. Okay. Yeah, I can see that. Okay. And my last one of, uh, very likely: I think there's gonna be sort of this bounce back from just so much AI-generated stuff, and this question of what's real, what's fake, what was human-crafted or not, and there'll be a resurgence in the make-things craftsmanship thing that we sort of had in, what, 2012, 2014, the whole hipster movement. Yeah. Which was, you know, like, oh, letterpress printing presses re-emerging, and handmade soap, all that stuff. Yeah. That's how Etsy was born and all. Yeah. So I think there's gonna be another comeback of that.
And this bounce back, this sort of counter-reaction to everything you see online. You were talking specifically about media and entertainment, like visuals and imagery? Not even. I mean, I just think people's consumption, or how they spend their time, or what they wanna do, and their interest in stuff in general. Yeah. Just a resurgence of interest in maybe getting off some of the social media feeds. Yeah. And doing more handcrafted stuff. I love that. I'm all for it. I think especially anybody younger than 25. Mm-hmm. Like, go for it. You know, there is a big resurgence in finding used DVDs, Blu-rays, CDs now. Yeah. It's a big market. I remember when I was younger getting into vinyl records and stuff, and my parents were like, what the hell do you care about this stuff for, this is old. And it's like, oh, that's cool. Yeah. I didn't have it. There are kids that are now going into really obscure formats like MiniDisc, and all the stuff that just never took off, like LaserDisc. LaserDisc. Yeah. I love that. Like, yes, there is absolutely value in tangible goods, physical goods, and associating that with art. Yeah. I think we lost that as humanity. Maybe, specifically to media and entertainment, there will be more of a fun re-emergence of stop motion and other types of handcrafted filmmaking techniques, hand-sketched 2D animation. Yeah. So I would expect a lot of this stuff to also have a comeback as a rebound to just being inundated with AI stuff. Maybe we're the trendsetters here, stating something way ahead. You heard it first. Yeah. We're gonna be like Pantone and we'll set the color for the year. Yeah. Okay. Very likely. All right. Very likely. Frame.io. So.
This goes hand in hand with my AGI prediction. I think next year we're gonna see a market correction of the Nvidia stock, and just the Nvidia ecosystem in general. They're on a roll; they're a $5 trillion company. They grew like $2 trillion in like three months. Insane growth. Nobody's seen this level of growth in commerce, ever. So I think we're gonna see a correction in that. Not investment advice. Yeah. We're not an investment podcast. If you listen to us for that, you're screwed. So do not. I think the minute the public realizes AGI really is five to ten years away, you're gonna see a significant correction. Like, I just read the Walmart article today that they are freezing hiring for the next three years. Oh really? I didn't see that. Yes. Because they're just like, yeah, we don't need to, because AI will be ready by then, and the people we have are the people. Which is so bleak, right? Yeah. So when AGI doesn't come through in 12 months, Walmart will probably be like, ah, shit, it turns out we need to hire more people if we want to grow as a company. I feel like we already had that cycle a year ago or whenever, when companies were like, wait, we're AI-first, and, you know, we're gonna automate all these processes. Yeah. And then they realized, hmm, not as robust as we thought. We are seeing inklings of that crumble, but I think we're gonna see, not a major crumble, but a significant one next year, where things will be kind of adjusted, calibrated back to reality. And I'm just using Nvidia as an example because they are one of the biggest players, if not the biggest player. Yeah. I mean, they're the big one, yeah. And everything needs the chips, and there are other companies trying to do chips. Yeah. And it's not just chips. No knock on Nvidia. Everything that we are doing in the AI world today,
I think a lot of it is attributed to their innovation, Jensen's vision from 20, 30 years ago. I know. Yeah. And their relentless focus on this. Right. Yeah. They were not an overnight success. They have basically zero competition in the market; that's how far ahead they are. Yeah. They've just been doing this for a long time, so they deserve the success. I just think it's a little bit overinflated around AGI, which we're gonna see a correction of. Mm-hmm. My very likely, I mean, I had the Comfy one. My other one is I think the AI training lawsuits, like what can get trained on, either get sorted out, they strike deals, or it just becomes a non-issue. Like with the Anthropic case with Claude, which basically said if they bought the books and trained on the books, it falls under fair use. Right. I think next year that'll get clarified, sorted out, whatever; something will happen so that how models are trained is not as much of an issue. What you do with the models, still an issue. Yeah. And how you use them. Yeah. The outputs; more focus on that, and IP infringement and stuff like that. But the training stuff, settled. I couldn't agree with you more. Like, I think the fact that today, you know, Firefly has a leg up on other models because it's quote-unquote ethically trained data. Yeah. Them and Moonvalley. That'll matter less and less as the lines shift towards publicly available data. Yeah. And I think people and creatives are just gonna be like, the ethical training is cool, maybe, as a stopgap right now when everything's uncertain, but ultimately it's gonna be: is this model good? Can I get good stuff out of it? And if you can't, then it's not as strong for what you need it for. Yeah.
There have been several lawsuits that favored AI companies over the training-data rights holders. Mm-hmm. And every one of those victories is helping the AI industry in achieving success with publicly available data. Yeah. And also just getting companies and partners and clients on board, feeling more comfortable utilizing the tools, 'cause the legal hurdles, it's just getting better. Yeah, there's less to it. I don't think it'll be fully sorted in '26, but we'll certainly shift in the direction of getting sorted. Yeah. So I just threw a picture of DeepSeek on there. Nothing to do with DeepSeek, my prediction; I just couldn't find a better image for it. There's gonna be a Chinese model. And it has to be Chinese because it's always out of left field. I don't know, it could be a South Korean model; it'd be cool to get a new country on the map. Yeah. There's gonna be a model from a non-US-based company that's gonna offer the same level of image and video generation as a Veo or Sora, as the big boys, but with a super lightweight model. So it's gonna be ComfyUI-friendly, probably open source, probably less than 40 gigabytes, whatever, right? Anybody can download it, but it gives you incredible results. Okay. So it's sort of like the DeepSeek moment for image and video. Sort of like the next gen of Wan 2.2? Yeah, because that's probably the closest we've got right now, where it's open source, it's pretty good, runs locally. Yeah, it's pretty good. It's not as good as Veo 3. Yeah. I think what I'm trying to say is, it's gonna challenge the notion of, are the big boys the only ones that can make this? Do you think that's even feasible, though? 'Cause even with Alibaba and Wan, the next versions have been 2.5, 2.6, and those have to run on the cloud, right? They have gone API. Yeah.
So you think it's technically even possible for a Veo 3-level-quality model to run locally? I think with a different architecture. The computers have to get better too, and they will, right? We're always getting better with GPUs and compute. But with that aside, I think the image and video architectures will shift faster than LLM architectures, and when the architecture shifts, you get new leaps and bounds in efficiency. Mm-hmm. What 10 GPUs could do yesterday, now one GPU can do, because the model just runs leaner. Right. We'll see something like that in the image and video world next year. I mean, that could also be, 'cause what I'm wondering too is, does it really matter if I run it on my computer versus spinning it up in the cloud? But the cool application of this could be something that's lightweight enough to run locally, maybe not on your computer, but on your hardware, which goes into the real-time video camera kind of stuff. So, like a model that could run on the camera? Yeah. You're filming, you get real-time previs, or something that's just happening on your handheld device, and you don't have to have a separate CPU processing all of your stuff. Yeah. I mean, some of the Apple models are sort of along that line, so yeah, that's where they're kind of going with it. The trick is to not rely on NVIDIA's CUDA architecture. Mm-hmm. So then, you know, you rely on general CPU architecture, mm-hmm, anybody's CPU: Intel, Arm, whatever. And if you can run your model on that, then it goes on a RED camera. Mm-hmm. Or a Venice camera or whatever. Mm-hmm. Yeah. Right. You get your real-time previs as you're filming. That's crazy. Yeah. Okay. All right. That'd be cool. Maybe that part would be in long shot for next year. That's good.
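To put some rough numbers behind the "less than 40 gigabytes, runs locally" idea, here's a back-of-envelope sketch of whether a model's weights fit on a single consumer GPU. The figures are illustrative assumptions, not specs: weight memory is taken as parameters × bytes per parameter, with a fudge factor for activations, and the 14-billion-parameter figure is just a Wan-class ballpark, not an exact number.

```python
# Rough sketch: does a video model's weights fit on one GPU?
# Assumption: memory is dominated by weights; a flat overhead factor
# stands in for activations, attention caches, etc.

def weights_gb(params_billion: float, bytes_per_param: float) -> float:
    """Approximate weight memory in GB: parameters x bytes per parameter."""
    return params_billion * 1e9 * bytes_per_param / 1e9

def fits_locally(params_billion: float, vram_gb: float,
                 bytes_per_param: float = 2.0, overhead: float = 1.2) -> bool:
    """True if weights (plus a runtime overhead factor) fit in the given VRAM."""
    return weights_gb(params_billion, bytes_per_param) * overhead <= vram_gb

# A hypothetical 14B-parameter video model:
print(weights_gb(14, 2.0))                       # fp16 weights: 28.0 GB
print(fits_locally(14, 24))                      # fp16 on a 24 GB card: False
print(fits_locally(14, 24, bytes_per_param=1.0)) # 8-bit quantized: True
```

This is the efficiency lever being described: halving bytes per parameter (or an architecture that needs fewer parameters) is what turns a cloud-only model into one that fits on a single card or, eventually, a camera.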
I got another long shot, 'cause I was trying to think of long shots. It's so hard to pick long shots, 'cause like everything is probably gonna happen. I've got some long shots. Okay. Yeah, I'm excited to hear your long shots. All right. My mid-level, very likely: I think we'll get an AI-animated feature. The qualifying bar for this, 'cause obviously I know we'll get a bunch of AI-animated stuff that'll appear online, is something that gets into a main festival or onto a major streamer. Wow. Animation's gonna be the first to really be, yeah, for sure, heavily lifted with this. It's such a fuzzy line, how do you define it as AI-animated, but I'm defining it as something where AI was very, very heavily in the process, like for a lot of the in-betweening stuff and generating shots. Yeah. Maybe you do a base pass with computer graphics, like all the animation's done with computer graphics, and then AI comes in and adds VFX and photorealism or style. Yeah. Or it's a much more lo-fi Blender layout of the scene and AI enhances it. Yeah, absolutely. I dunno, something where it wouldn't have been possible without AI tools at the budget and the timeframe that they did it in. Yeah, like Flow, the movie Flow. Also, I mean, I guess if they pull it off, this would be a very likely with, um, if Critterz gets pulled off. Is that the OpenAI movie? I totally forgot about this as I was planning this out. I can't remember the name of the studio, but yeah, the OpenAI partnership with Nik and his studio. Critterz was the animated short, yeah, of an AI partnership, and they're gonna do an animated feature that's gonna premiere at Cannes. I forgot about that when I was doing this. So I would put it in the highly likely category.
But yeah, I still think mid-level likely: a very heavily AI-animated feature film. I love that prediction. And I don't think it'll be like a $30 million budget; I think it'll be way less. Right, that was the other part about that that was fuzzy. I think that's a good threshold line too: something that is, yeah, in the million, million-and-under range, two or three people. Yeah. Somewhere in Flow territory. But, you know, done next year. Yeah. Were those guys Latvian, or Lithuanian? Yeah, they were foreign. Yeah. It was like 10 of them, and it was done in Blender, and it was like, I dunno, $4 million, or maybe a little less than that, but it took them like four or five years to do. So yeah, something like that, done in a year. A year. Yeah. It's probably halfway done already and we're gonna see it next year. Oh, probably. Yes. Yeah, exactly. Okay. What else you got? All right, so for very likely I have a Unity and Unreal partnership, or perhaps Unreal buying Unity. Oh, okay. Or Epic buying Unity. Epic buying Unity; I think they'd get an antitrust thing with that. Yeah. But, um, I'm sure you can pay off some government official. Yeah. So my reasoning behind the prediction is: look, three, four years ago, it was unfathomable for Unity and Unreal to partner up on anything. They were fierce rivals, fierce competitors; they're both massive, massive companies. And since then they've both sort of stagnated a little bit, shrank a little bit. And I'm not gonna say they're not doing well; Epic is doing incredibly well, still, Fortnite's still a big game. But at the same time, there are new players in the field. You've got OpenAI, you've got Google. The media and entertainment and gaming space is ripe for innovation. So these two companies, I think, are more alike than dissimilar. Mm-hmm.
And I actually saw a friend of mine who was at Unity, got let go during the big layoffs at Unity, I think a year back, and now he's at Unreal. So, like, okay. Yeah. You're part of this really small group of game-developer groups, like, people that make game engines. Mm-hmm. And so I think it would be wise for those two companies to partner up, or perhaps an acquisition. Epic is really good at pushing the engine visually and being really ahead in research and development on what traditional computer graphics and real-time rendering can do. Unity is really good at shipping games, especially on the mobile side, right? Like something like, I don't know, a billion downloads happen a week on the Unity platform. Okay, that's great. So you combine those two things together; I think it's a win-win for both companies. Okay, cool. Maybe not AI. Yeah, I think AI has nothing to do with this; this is still playing in the traditional triple-A and mobile game space. Uh-huh. But it just might make sense for them to, yeah, to acquire. Yeah. All right. My other one for very likely is that we get a new kind of paradigm for video generation, or how we work with AI-generated videos. So, something in the realm of the world model. Mm-hmm. The Genie 3 kind of teaser that we saw from Google, where it generated a world, it's persistent, you can move through it; the only issue is right now it can only last for a few minutes. So something in that realm. It's more of an expanded version of my real-time video prediction. Right, that feels more likely. This would be like you're generating a world in real time, you're moving through it, and maybe you get your shots, or you can move a virtual camera in this generated space. Or for gaming. Or for gaming, yeah. Obviously I'm thinking of specific filmmaking applications, but there's a ton, like gaming, robotics,
Industrial, cars. Yeah. There's a million applications for this. Yeah. In the film sense, generating your world as you're talking through it, and getting your shots, your scenes, all that stuff. I think so. Yeah. I think it's maybe not in the super confident. I think you're right, like we're definitely gonna see a big one-up in the world model category. Yeah, yeah. That's where all these things are leading to anyway, so I think we'll get some big leaps there in actual usable world models. And it could be a player like Runway, like somebody you're not thinking of. It could, yeah. I mean, yes, chances are it'll probably be like a Google. It might be Google, yeah. It could be Runway. I mean, I've still had hit-or-miss with Aleph and the models there. Yeah. I've had hit-or-miss with Luma and Ray3 and their models. Yeah. But it sounds like they are working on a world model for sure. Yeah. I mean, I still think back to that demo, the voice demo from Cristóbal, of just speaking in real time and the world modifies based on what you're saying, which I think, you know, is a hundred percent where things are going. Yeah. Who will be able to make a usable version of that first? I don't know. Yeah. Just make it last indefinitely, not two, three minutes. Exactly. Yeah. And I think sustaining that is really, really difficult. Yeah. My last very likely is one of the big-ish AI video model companies gets bought or shuts down. Mm. Something in the realm of like a Pika or Krea or LTX, some consolidation or something in that realm of that mid-level kind of AI model company. Yeah. We're gonna see sort of a withering and a correction in the AI startup world, right? Like, there's a ton of startups, and now we're gonna see kind of an adjustment into survival of the fittest. Yeah.
I mean, I also think that'll happen with the AI studios that launched, but that's such a, yeah, we'll see how that plays out. I mean, there's a bunch of them. I think even with that fabric study, there were like 60 self-identified AI studios. But if something shuts down, is it really a shutdown, or is it just a pivot? You know, it's just like a small production company that pivots to something else. I mean, I could be wrong, but I think it's a lot cheaper to run an AI studio than an AI model company, like one that makes models. Oh yeah, 'cause that is incredibly expensive. Yeah. And that's also why I'm kind of focused more on the model-producing companies. Yeah. Like a Runway or a Pika. Exactly. Yeah. But they generated their own models, right? I think the ones that haven't gotten as much traction, mm-hmm, someone will buy 'em, or they'll just kind of shut down. Yeah. If you're a small to mid-size company, Luma is a good example. And I could be wrong, but I think they have the Saudi Public Investment Fund, yeah, funding of $900 million. Mm-hmm. Like, if you have that, you're good. You're gonna ride it out. Right. But if you don't have that, it's probably time to close up shop, unless you're making incredible revenue. Yeah, or there's some hit or something. Yeah. So, yeah, we'll see how that plays out. Okay. Good prediction. All right. What do you got for long shot? Okay, so for my long shot, we just talked about AI studios, so you know, you take like a Wonder Studios, or Moonvalley, or Asteria. Asteria, yeah, their studio arm. Yeah. Even like Secret Level or Promise. Yeah. Like, these guys are all out there trying to, mm-hmm, like actually make something. Long shot, but I think we're gonna see some kind of 90-minute cinema from some of them, and that is a live-actiony thing. Yeah. Not animation.
That was, yeah, that was my long shot prediction too. Oh shit. Sorry, my bad. That was my one and only long shot prediction. Yeah. The reason it's a long shot and not super confident is these guys all have expertise in house, as far as researchers and creatives and directors and cinematographers, but the technology's just not ready. Yeah. I think, you know, one of the most successful live-actiony SAG things is like the Echo Hunter from, uh, yeah, from the kid on the kid. Yeah. And you know, even that, it's still like the best that technology can do right now. And it was using real actors, yeah, as far as driving the performances. Yep. But the tech for taking the performance and actually turning it into something on screen is still uncanny valley, not quite there. Mm-hmm. Yeah. I mean, also it's like, where do you draw the line with what you consider live action versus animation? Real people, but real people on screen? Or real people, like, would you consider Echo Hunter real live action? 'Cause they filmed actors, but everything was ultimately, it wasn't live actors blended into the AI world. It was the actors driving the synthetic performance, which is sort of something you could do as an animation. Yeah. So that's like Final Fantasy. It's still, that's like an Avatar category. Avatar is an animated movie, right? Yeah. Okay. So actual people on screen, and then maybe just everything else is AI. And when I say actual people, that imagery comes from a real camera. From a real camera, right? Yeah, yeah. Okay. So yeah, I think along that line, you know, it could be somebody outside of the US, but perhaps Asteria, Promise, mm-hmm, or Wonder, like one of these AI studios, will probably make something. Yeah. I, yeah, I think it's a long shot for next year. Yeah.
And my bar is still kind of the same as with an animated feature: something that is at a major festival or on a major streamer. Yeah. Like, I'm sure we'll get a lot of stuff appearing on YouTube that blends people with AI stuff, but an actual feature film that is marketable. And 12 months is a long time. Like, look, now that we're sitting here at the end of 2025, I'm thinking about you and me back at the end of '24, like, we're texting each other trying to meet up at CES. Mm-hmm. We had no idea of the things that we would be covering on a podcast that we had not yet recorded. Like, we didn't hit our first episode till like the LA fires, which is, oh, right, because we were talking about the fires. Yeah, yeah. So like, a year is a long time now. Yeah, it is a long time. Yeah. But I think it's appropriate, it's the long shot category. Yeah. Okay. What else you got? Because that was my one and only. All right, I got a couple more long shots. Okay. A long shot, but I think in 2026 we're gonna see a neural renderer built into one of the major DCC tools. Okay. Whether it's a Maya or a Blender, or perhaps even Unreal, Unity, Houdini. Mm-hmm. All those tools are now using traditional ray tracing, you know, bouncing a photon around, capturing that, figuring out the math of what everything's doing. You'll still see the underlying thing unchanged, so there'll still be a CG model that's rigged, and, you know, there's probably a texture on it, but the actual rendering itself will be done through AI. Mm-hmm. And we may see the first deployment of a large-scale neural renderer. I like that one. Yeah. Okay. That feels more likely than long shot. Okay, it's between long shot and very likely. Yeah. You could put it in the middle. That feels plausible. Yeah. Okay. We'll move this one up.
The only reason I made it a long shot is because it's quite an expensive endeavor. Okay. There's a lot of research needed to get to a product that is actually integrated, shipped, and usable. Yeah. And it's gotta be better in terms of compute performance and quality, mm-hmm, than traditional CG, which is a high bar, 'cause traditional CG's had 30 years to get good. And it's really good. So if you introduce a neural renderer, it has to look better, and it has to be cheaper on a GPU than what, mm-hmm, a traditional renderer is. Yeah. So, okay, that's a good one. I like this one. Okay. All right, I got a couple more long shots. We covered the Suno Warner Music Group partnership in our last episode. So I am predicting, it's a long shot, and I'm gonna get a lot of hate for this, but guys, I'm just reporting here, and I don't feel good about saying this: Suno might become ethical. They're gonna make more partnerships with more labels. Like, Warner Music Group is probably, I don't know, 20, 30% of the music market. You go to Universal Music Group, you go to Spotify, you go to whoever, and you try to capture 75, 80% of the market. All of a sudden, all of your training data is now licensed, and Suno becomes ethical. So you think that what they're doing gets the blessing of the music industry, or the cooperation? I look at it the same way as how Napster and LimeWire kind of broke the music industry, and then Apple Music, Pandora, Spotify fixed it in a new way. Right. Yeah. So I think Suno both broke it, and they'll kind of fix it in a new way, in a new direction. What do you think that's gonna look like consumer-wise? Because even now we have some of those country music bands that are charting that are AI-generated. Yeah, yeah. I'm sure. Like, I feel like the ambient background music is just gonna be generated like that.
I think the top artists are still like a Taylor Swift or a Kendrick Lamar. Like, you're still gonna have human artists as the zeitgeist, as the pop culture beat of the moment. But yeah, we're gonna be swimming in a world that's just full of AI background noise. Yeah. Like, it's undeniable, unavoidable. I feel like it's also already happening. It's already happening, yeah. And this will only accelerate. Accelerate, yeah. I mean, that also leads me more to believe my touch-grass prediction, things reverting back to the craft. Yeah. Like the artist, that band person who plays a real guitar, like a Lynyrd Skynyrd type. Like, that's gonna win out over very synthetic sounds. Yeah, you know? And maybe another resurgence of digging up old funky instruments, like the theremin having a bit of a comeback, people digging it up. I don't know what that is. A theremin, that's the thing that makes the sound in the 1950s sci-fi movies. It's like this metal rod that reacts to the proximity of your hand. Oh, okay. And so you kind of wave your hand and make the funky noises. I'd love to see that. Yeah. So I think, you know, we'll get this yin and yang of things bouncing back and forth. Cool. All right. So those are all of our predictions. Our predictions, yeah. So there we go. We got our board, our 2026 prediction board. Work of art. Beautiful. We're gonna regret all of this in 12 months. Yeah. We'll revisit this in 12 months. Um, yeah, let us know in the comments if you've got any predictions you think we're off on, wrong about, or missing, or if you've got your own predictions. 'Cause I feel like we're missing some stuff here. I mean, our audience always has good notes. Yeah. We'd love to hear from you, and we'll riff off of that when we do the next episode. Yeah. When we're back in the new year. All right, cool. Yeah.
Well, I would normally say links for everything we're talking about, but this is all new stuff, so yeah, there's not many links. But if you wanna subscribe and all that stuff: denoisedpodcast.com. Yeah. And we thank you for a year, our very first year, of listenership, and growing an audience, and listening to all of you, and having you appreciate what we talk about. I mean, it's so rewarding to do. This is so much fun. Yeah. I love seeing the comments, good or bad. It feels like we're having a conversation here, but everyone else can be part of the conversation. So, yeah, we thank you, and we hope you have a wonderful 2026. Yeah. Here's to 2026. See you in the next episode.