
Denoised
When it comes to AI and the film industry, noise is everywhere. We cut through it.
Denoised is your twice-weekly deep dive into the most interesting and relevant topics in media, entertainment, and creative technology.
Hosted by Addy Ghani (Media Industry Analyst) and Joey Daoud (media producer and founder of VP Land), this podcast unpacks the latest trends shaping the industry, from Generative AI and Virtual Production to Hardware & Software innovations, Cloud workflows, Filmmaking, TV, and Hollywood industry news.
Each episode delivers a fast-paced, no-BS breakdown of the biggest developments, featuring insightful analysis, under-the-radar insights, and practical takeaways for filmmakers, content creators, and M&E professionals. Whether you’re pushing pixels in post, managing a production pipeline, or just trying to keep up with the future of storytelling, Denoised keeps you ahead of the curve.
New episodes every Tuesday and Friday.
Listen in, stay informed, and cut through the noise.
Produced by VP Land. Get the free VP Land newsletter in your inbox to stay on top of the latest news and tools in creative technology: https://ntm.link/l45xWQ
Denoised
XGRIDS' $5K Scanner, Krea Realtime Video, Acer's AI Workstation, + More AI News
XGRIDS PortalCam reshapes film production with a $5,000 LIDAR scanner that's changing how filmmakers capture location data. In this tech-packed roundup, Addy and Joey break down Krea's Realtime Video, Apple's surprising AI research moves, and how Wan's new speech-to-video model brings avatars to life. Plus: Adobe Premiere lands on iPhone, Google Flow offers unlimited Veo 3 Fast generations, and why sound remains crucial for selling AI-generated environments.
--
The views and opinions expressed in this podcast are the personal views of the hosts and do not necessarily reflect the views or positions of their respective employers or organizations. This show is independently produced by VP Land without the use of any outside company resources, confidential information, or affiliations.
All right, welcome back to Denoised. We're going to do our weekly Friday roundup of all the top news stories in the tech world that you should be aware of if you're a filmmaker. AI roundup. Yeah, let's get into it. All right, we're back. Let's kick off with the first story. It's actually not AI, but this is one of the coolest product drops I've seen in a while: PortalCam from XGRIDS. This thing's cool. We've talked a bit about LIDAR in the past. We've got some LIDAR videos; I did one with Leica Geosystems about how LIDAR can play into film production. The problem with LIDAR has been that these scanners are really, really expensive, and they come from a background of surveying and construction. They're looking to get very accurate data, but not the most photogenic data. Here comes XGRIDS, which has had some other scanners in the past, but now they've built a new scanner specifically targeted at film production, called PortalCam. And its price point is excellent. Scanners traditionally cost anywhere from $15,000 on the low end to $60,000 or even $100,000; PortalCam is $5,000. That's insane. That's awesome. And it's LIDAR? Yeah, and the quality is excellent. It's a LIDAR scanner combined with a variety of cameras on the front and on the side (let's get the specs up), and these are much higher resolution than what's normally built in. I had a briefing from them a few weeks ago, and they showed some scans of an apartment. Basically, the longer you hover on a space, the more detail you get. So if you walk around the space quickly, you'll get a decent amount of detail, but if you go into crevices or get closer to walls, details, or objects, you'll get higher resolution. The apartment they scanned had some pillowcases and some very fuzzy-textured items, and if you zoomed in really close, you could see the texture of the objects. So it's the combination of the cameras capturing the actual visual data and the LIDAR scan capturing the geometric space, and then their software processes it all for you, and you get these really beautiful, really complex, high-quality scans. Yeah, so it sounds like it's doing SLAM tracking on device, similar to what Leica or any of those higher-end scanners would do. It's funny we're covering this on an AI roundup. As much as we'd like to think that a lot of production is AI-powered, the reality is 99.999% of productions using VFX are using traditional stuff like this. And this is still very much a work in progress; there's still room for improvement with photogrammetry and 4D photogrammetry and videography. When you're recreating a world for VFX, one of the first things you do is scan the real world and bring that in, even just as a reference object, and then you start to augment it with 3D objects. So this is a bread-and-butter tool, I would say, for any VFX team that's combining digital elements with practical elements. And there's a variety of use cases for this in film production, anything from location scouting onward.
You go out to a location, and usually you're like, oh, let me take a couple of photos, and then later you realize you missed that angle, or you wonder what the measurement of that door or that window is. With the LIDAR scanner, you just walk around and scan the space, and later on you can navigate around it, move through it, and take measurements, and the measurements are really accurate. And even if you're trying to scout somewhere super far away, you could have someone with a scanner somewhere else scan the space, and then you can navigate it on a computer or in a headset. Virtual location scouting. Shot planning, same idea: you have the space scanned, and then you can figure out your shots in that virtual space. Yeah, it's really useful in pre-production as well as principal photography. Exactly. You could load it up on a wall, you could do virtual production with this, and this is your environment. What do you think? We haven't seen the actual scans here, but what do you think the level of quality would be versus what you'd need for final pixel, for loading onto an LED wall for virtual production? It's hard to say from the outside of the device looking in, because you don't know what type of lasers it's using. It obviously looks like it's using some RGB cameras, multiple of them. So either there's a really advanced algorithm in there doing the SLAM tracking, or there are just really high-quality capture elements in there. The reason the Leicas and some of those scanners I'd call industrial grade cost so much is that the actual hardware inside is inherently expensive, right? If you want a high-powered laser that can shoot up to half a kilometer or whatever, that's going to be expensive no matter what. So they're relying more on RGB sensors here, which tend to be cheaper, but then you still need pretty high-powered compute on the device to crunch those polygons in real time and make sense of them, and that's what the SLAM tracking does. Yeah, and they told me, too, that the slower you go and the closer you get to the surface, the higher detail your scan will be, versus a big, open field or something where you need to capture a larger area. It looks like it's a Class 1 laser, and it gives you the nanometer range there; that's beyond visible light. Here are the camera specs: a four-camera array, two fisheye, two front. Right, so it's got enough hardware in it to be very, very useful, and at a $5,000 price point, that's insane. That's like one-tenth of what you would pay for a traditional scanner. I mean, even just for a regular scanner, I don't think I've seen a price point like this. If you're a professional VFX artist, you probably own something like a Canon 5D for capturing HDRs and things like that. I would imagine at this same price range you'd probably want to own one of these and do both photogrammetry and LIDAR capture at the same time. And the nice thing is their software makes it easy to load and process all of this.
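(For a rough sense of what that load-and-process step can look like outside the vendor's own software, here is a minimal sketch using the open-source Open3D library: it reads an exported point cloud, pulls a quick room measurement, and meshes it for use in Blender or Unreal. The file names and parameters are hypothetical and not part of the XGRIDS workflow.)

# Minimal sketch (not XGRIDS' software): post-process a LIDAR/photogrammetry
# point cloud with the open-source Open3D library. File names are hypothetical.
import open3d as o3d

# Load an exported point cloud from a scan (hypothetical path)
pcd = o3d.io.read_point_cloud("portalcam_scan.ply")

# Thin the cloud so meshing is manageable, then estimate normals
pcd = pcd.voxel_down_sample(voxel_size=0.01)  # 1 cm voxels
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30)
)

# Quick "measurement": bounding-box dimensions of the scanned space, in meters
bbox = pcd.get_axis_aligned_bounding_box()
print("Room extents (m):", bbox.get_extent())

# Reconstruct a mesh you could take into Blender or Unreal
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
o3d.io.write_triangle_mesh("portalcam_scan_mesh.obj", mesh)

(Poisson meshing is just one option; exporting the raw point cloud, or a Gaussian splat as discussed a bit later, is often lighter to navigate.)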
I think the gap, too, is if you're just a regular filmmaker, not a VFX artist, not someone who's super familiar with 3D tools or 3D space: how easy is the process from scanning to getting a photogrammetry 3D model you can move around in? I think their software handles the bulk of that, but if you want more, if you want to take that model out and load it into Unreal or Blender, you can do that as well. So it works throughout the levels of user complexity, for sure. I would say the game changer here would be if they integrated directly into RealityCapture, which has long been the standard go-to software for anything photogrammetry; Epic Games acquired it, and now it integrates really well with Unreal Engine. I'm guessing they probably know about it, if it's not already integrated. And going back to AI, one thing to mention, and I'm 99% sure their software does this, is Gaussian splats. So you could also take your scan, turn it into a Gaussian splat, something that's lightweight and easy to navigate, and have that as an option versus a standard 3D mesh. Do you know if it does 4D, if it actually captures anything moving? I don't think it does movement. They don't show it here, but they did show me some demos: we've been talking about scanning 3D spaces, but it's also designed to scan objects, and the cameras are high enough quality that you can just walk around an object, scan it, and turn it into a 3D model. They also demoed scanning a person, where the person stands still and they walk around and scan them, but they were static. I don't think anything was video; I don't think there was anything with movement in it. The only thing is it has the opposite of a movement feature: if you're scanning, say, a sidewalk and people are walking by, the software can remove the people from the scan so it's clean, or cars, or stuff like that. So yeah, that was the first one. I'm excited to see it once it's out in the world and more and more filmmakers are using LIDAR scans as part of their production process. Yep. All right, next story, and this one is firmly in the AI space: Krea has teased real-time video. Wow, it's almost here, man. I've been talking about this for months. Yeah, we've talked about Krea before, and they've always been the ones with the best real-time AI image generator, where you'd draw your shapes and, as you moved the shapes around, it would update the image in real time, but it would keep making very different versions of the image. In this demo they're showing, it's the same shape idea, but the video is consistent. There's a lot of temporal consistency, which is really difficult. Yeah. And the most obvious thing that comes to mind is ideation and previs and pre-production, right? When you just need to quickly iterate on a bunch of visuals, craft your shot, block your character, this stuff works, because visual quality is maybe not the most important thing there. But my mind also goes to games: how far are we from real-time AI rendering for games and spaces like that? Yeah, you're right. Even like a 2D mobile game, right?
One that doesn't rely on any 3D mechanics per se. A real-time 2D renderer from Krea, with the way you're moving vectors and shapes around, could be enough of a building block to deliver a game on. Hmm, that's interesting. I'm also wondering, is it just going to be like a camera record button, where you hit record, it captures everything you're doing, and then you stop recording? Like puppeteering something? Yeah, or, because the video is now generating in real time, can you just record everything you do on screen? For sure. I mean, what does this interface look like? This is almost getting into the world of performance art and any sort of immersive art, where there's a spatial and a temporal element to it. You could do some really cool things by having a puppeteer or an artist drive some basic shapes and lights and then have that transform, via AI, into something completely different. Yeah. I'm excited to try this out. They had a waitlist before, but I think I just saw today that they're starting to let people from the waitlist into the beta program. So I'm curious to see it. I'm also curious whether this can run locally on your own machine, or whether real time means the cloud. I think everything they've done has been on the cloud, right? Except I think they had that partnership with Flux, an image model. But most of their stuff is on the cloud, and I would imagine this is too. Still, can you imagine running this in Comfy, a Comfy version? Yeah, that would be cool. And if you could connect it: they didn't show it here, and I don't know if they'll enable this feature, but the other thing the Krea image generator was known for, besides the little shapes you could draw and move around, was that you could use your webcam as an input. And you could use your phone as the webcam, so there were ways to find a shot and use that as the framing for the image generator. I'm curious if they're going to do the same thing with the video generator, because that would also unlock: what if you could just film a person but change them into a dinosaur or a creature instantly? Or maybe you're able to augment it, add a different costume, or something in the background, something in the foreground. Sounds kind of like what Luma does with Modify, but you're doing it in real time. Yeah, and you'd actually have a device you could move to frame your shot instead of playing around with shapes and stuff. Exactly. Actually, I guess I just answered my own question, because in this demo there's a guy with what looks like a webcam, and he's messing around with real objects. So yes, you can do that. Doing it in real time opens up a whole new world of possibilities. So this is super exciting, and I think this is a bit of a fork in the road. We're still going to have traditional AI rendering, if you will, that's high quality and trying to achieve photorealism,
and then we're going to have an offshoot that's this, a real-time version that's meant to do cool things that don't necessarily translate back to quality. Yeah, it's like, what if you could just rapidly ideate super fast, or make some new genre or experimental film kind of thing. Absolutely. All right, next one: our old favorite, Wan. Wan 2.2. They dropped another model; they're not done. This one is Wan2.2-S2V, and the S2V is speech-to-video. So it's a voice-driven avatar-animating model where you give it an image, you give it your audio file, and it animates the character to speak along with your audio. And like Wan, you can run it locally on your computer. So it's another one in that same bucket of things you'd want your AI to do; the demos show people talking, given the character input and the audio input. Yeah, and I've got to say, listening to it, it sounded pretty good to me, much better than what Veo 3 originally dropped. Well, you have to give it the audio, so it's not generating the audio. Presumably your audio was either recorded from an actual source or came from ElevenLabs or something; it's generating the lip sync and the timing and everything. It's generating the lip sync, yeah, which is still the performance. Yeah. Is that what ElevenLabs does? ElevenLabs is text-to-speech; they don't really do character sync. This is more like what Hedra does, or HeyGen maybe. Yeah, this is more in that category of the animated avatar with a speech input, but you can run this locally. And ComfyUI already has native integration with a built-in workflow, so if you update ComfyUI you can load up the workflow they've already built, download the models, and run this on your computer. Very cool. And I'm guessing it's free? There are also a variety of cloud options you can use. Yeah, it's free, as with most Chinese models, as we've talked about. They're free; they want to get you hooked on them. You are the product. There was also another audio update from Hunyuan. Yes, I got it right this time. They have another open-source model called HunyuanVideo-Foley, and I believe it analyzes the video and then generates what it thinks are the appropriate Foley sounds for it. You heard it; it's all right, it's okay, it's getting there. Look, they're all trying to close in on the missing components of an entire movie pipeline, and obviously Foley has been missing up until now. Sound and speech still leave a lot to be desired. So these guys are closing in on every single bit of movie production, and I think it's a step in the right direction. As they version up, it'll probably get much, much better, but for now I don't think it's quite usable yet for any sort of feature film. Yeah.
I wouldn't say for feature films, but... now I'm trying to remember, because this conversation feels semi-familiar, and I don't remember if it was this specific model and we're repeating ourselves, or if there was another model that came out that also focused on Foley and background sounds, because I remember saying this last time. Adobe has something built into Premiere, I believe. Oh no, but that one was cool; that one was where you could make the sound effect yourself with your voice and it turns it into the Foley. No, this was something similar, a video input that tries to figure out the Foley. So I don't know if we're repeating ourselves or if it's a different model, but the gist of what we were saying with that one, too, was that even if it's just AI-generated stuff, sound is so important. Sound helps sell the effect; just the background sound, that audio texture, helps sell the fake world you're building. Absolutely. Sound is a huge part of the experience, so the fact that someone's focusing on it matters, and it will only improve. The audio models tend to have one advantage over video models: they're much lighter and run much faster, so a lot of the time they can actually run on your device pretty easily. Yeah. Speaking of on-device: an Apple AI update, which we don't normally talk about. Nice transition, I love it. Yeah, Apple threw me for a loop, man. I kind of gave up on them after the whole Liquid Glass announcement and everything else they missed on the AI front, but it turns out they're doing a ton of AI research in-house. Yeah, look, I'm not surprised they're doing a bunch of AI research, but it's interesting because they basically released some models directly on Hugging Face, which, if you're an AI nerd, you know about, but it's not a very publicly facing thing. And shout out to Clément Delangue, sorry if I'm slaughtering your name, but he posted that he spotted these new models Apple put up on Hugging Face, called FastVLM and MobileCLIP2, and they analyze video in real time and do live captioning. I think MobileCLIP2 works in conjunction with FastVLM to generate those live captions: it looks at the video and then captions it live. The caption generation is probably from MobileCLIP, and the VLM is analyzing the frame and the video to figure out what's going on. And it's pretty much doing this in real time as the video plays. That's really fast. It can even do live video captioning a hundred percent locally in your browser. Now, I think a lot of people are wondering why this is exciting. Why do we need live captions? Well, let me tell you. Imagine you have an iPhone, and that iPhone has a really nice camera, which it does. Now that camera can look at the world, analyze the world, know which cafe you're looking at, know which street you're crossing, know which person you're talking to, and become almost an AI assistant.
It could give you advice on what to order at the restaurant, which direction to go for the destination you're trying to reach, or what agenda you have with the person you're meeting. So it could become a very capable interpreter of your day-to-day life. Yeah, it could understand what's happening in the world around you, and do it in real time and do it locally. I think Apple has sort of missed the boat on large language models and building out those massive models; they're talking to Google and other companies about partnering for that. But I think where they can win is with models that are small, light, and specifically functional, that run on device. They don't have to go through cloud processing, and they have amazing devices this stuff can actually run on. Exactly. So imagine if Siri, with these devices' great chips, were an actual assistant that can see and process the world, not just a voice assistant. And you don't need an internet connection or have to wait for it to process and figure stuff out. It'll even work on an airplane where you have no reception, because it's all running locally. So I think that's where Apple can still win the AI game, especially because their devices are in so many places, and they have the custom-built hardware and can build models fine-tuned for that custom hardware and those processors. For sure. In our world, the stuff that gets the most attention is image generation and video generation models; we're very visually oriented in film and TV, so we tend to focus on that. We went bananas for Nano Banana, right? But I think what will really change the world for everybody else outside our industry, like 99% of the public, is stuff like this: either an LLM that's super capable, or a VLM tied to an LLM that gives you complete AI agent capability on device. Yeah. And speaking of image generation, the one thing Apple has released with AI is Memoji, or Image Playground, whatever their image generator is called. We've joked about how not really useful that is. What about actually useful AI tools coming out of Apple, like a Siri that works and is functional and actually helps you do stuff? Can I tell you a funny story? I have a family friend who is, I guess, not as tech-proficient, and he just discovered AI. So he'll occasionally send me Apple Image Playground generated avatars to make fun of me. And to retaliate, what I do, as it was meant to be used, is go into Nano Banana and send back a highly photoreal, highly complex image just to one-up him every time. I will say the most useful feature that has come to the iPhone recently is the AI that can isolate the subject in your photo and turn it into a sticker. Yes. In our group chats, we have taken the most unflattering photos of all of us, turned them into stickers, and now use them as reactions. So, you know, that's using AI for really important things, Joey. Yeah, exactly.
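(For a rough sense of the on-device captioning loop described above, here is a minimal Python sketch. It is not Apple's FastVLM or MobileCLIP2 code; it grabs webcam frames with OpenCV and runs a small open captioning model through Hugging Face's generic image-to-text pipeline as a stand-in, and the model choice and once-per-second cadence are assumptions.)

# Minimal sketch of local, live video captioning; a stand-in, not Apple's FastVLM/MobileCLIP2 code.
import time
import cv2
from PIL import Image
from transformers import pipeline

# Small open captioning model used purely as an illustrative stand-in; runs locally once downloaded
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

cap = cv2.VideoCapture(0)  # default webcam
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # OpenCV returns BGR frames; the model expects RGB
        image = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        caption = captioner(image)[0]["generated_text"]
        print(caption)
        time.sleep(1.0)  # roughly one caption per second; Ctrl-C to stop
finally:
    cap.release()

(Purpose-built on-device models aim to run this kind of loop far faster and fully offline, which is the point the hosts are making.)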
Next one: AI hardware update. Nice. So you know we've got the NVIDIA Spark, the DGX Spark (they renamed it), the little NVIDIA mini computer that was talked about at CES and that we haven't heard much about since. We'll probably hear about it again. I think it's called DGX-something now; I know they changed the name. There was the Spark, and now it's the DGX Spark or something like that. It's that little gold box. It looks like a little Mac mini, but it's built by NVIDIA, and it's like a little personal AI computer. Acer has now announced their own AI mini workstation built on the NVIDIA GB10 superchip. This one is called the Veriton GN100 AI mini workstation. Wow, very catchy; that's really good naming. Veriton. It sounds like Voltron. So it's a mini workstation retailing for $4,000 US. What would you use an AI workstation for? Well, $4,000 is actually not bad for a high-performance computer. A GPU alone can be $4,000, and I'm guessing you're probably getting as much juice as a 4090 card, perhaps even a 5090. So again, what would you need it for? If you're a developer using ComfyUI day in and day out, I could absolutely see getting heavy use out of it, but there aren't that many of those people out there. I would imagine this goes toward, maybe not now, but in the near future, a company that relies on AI tooling where a lot of it is local-inference based. You give your employees these computers to run on their desktops, so this replaces the computer you have, but everything on it is agentic and AI-based and running locally. Yeah, I guess I could see that for large enterprises that want to have AI. An example of AI models running in our world is VFX artists. Maybe not now, we're still very much in a transitionary period, but a year or two from now I strongly believe a lot of VFX tools and capabilities will be AI-based, right? And it'll be very complex ComfyUI-workflow-type things where you're doing multiple things with multiple models, and all of that will most likely run locally, with maybe a couple of API nodes connecting to the cloud. So where we typically relied on VFX hardware to be GPU-heavy with incredible RAM and GPU memory, now we're going to transition to something that's really AI-efficient and ready for local inference. And from the press release: developers, researchers, data scientists, and students can leverage common frameworks and tools to prototype, fine-tune, test, and deploy large language models locally, or seamlessly scale out to any accelerated cloud or data center infrastructure. You can also link two together to scale up and work with AI models reaching up to 405 billion parameters. Wow, pretty big. That's awesome. So you could also train your own models with this. Totally, I could see that. Although for a lot of AI training you have resources like Replicate and other cloud-based services where you can pool a bunch of GPUs together and not have to buy a bunch of GPUs yourself. Yeah. NVIDIA is really doing this to accelerate AI development. I don't think they expect to make a lot of money from this.
If anything, I think they're losing money making these boxes, but it's a way to get more AI models out there, because if there are more AI models out there, they've got to run on something: cards. You've got to give people the hardware to make the models that then make them money. There's a shovel-selling metaphor in there somewhere; Jensen's three chess moves ahead of us. The more you buy, the more you save. Yes. Back to apps. Not AI, but interesting for the film space: Adobe Premiere now has an iPhone app. They brought Premiere to the iPhone. Well, that's pretty cool. I thought they had given up on this whole mobile effort with Premiere, because this isn't the first time Premiere, or not Premiere but Adobe, has done mobile apps. Correct, they just never called it Premiere. It was Adobe Rush, and it was sort of a different branding that positioned it as more creator-friendly. It was clunky; I used it. They had, and I think they still have, Photoshop on mobile as well. Yeah, definitely for the iPad, because that's a bigger use case; the iPad's cool. So it is cool that they kept the Premiere name and branding and brought it to the iPhone. I think this is more to compete with CapCut and the other mobile-first video editors that are really popular. Even in their demo they're cutting vertical videos together, and it's all three-second, ten-second clips, very social media oriented. Which makes sense for the creators working on the iPhone; all the creators who make their living off of vertical content are doing so much of it on the phone, on the fly, because that's their main device and that's where they started. I'm thinking, too, that by keeping the Premiere name, this is probably a good entry point. You're a creator, you're someone young, you start making stuff on your phone, and, oh hey, by the way, there's a much more powerful version of the software that goes by the same name, probably with a similar UI and similar features. Then it's not that big a leap to go from editing on your phone to switching to Premiere on the desktop. It's a great play on Adobe's part. And in fact, I think the other side of it, if they pull this off, and I'm not sure if they already have, is Creative Cloud integration, where you can upload your project file, your video, and all of the proxies to the cloud, and then pull it down on a desktop and continue your work. What's the sync going to be like? Which I think Rush had some version of. I think it was a one-way sync, where if you started a project on Rush and logged into your Creative Cloud account, you could pull it into Premiere, but then you were in Premiere; it wasn't back and forth, a two-way sync. Well, what Blackmagic has done with Blackmagic Cloud and its integration into Resolve, I think that level of sophisticated distribution is something that's really missing in this world. Yeah.
I'm curious if Adobe will ever integrate that. I mean, they own Frame.io; they have the cloud hosting capability. That's kind of been kept as a separate brand and company, but their team collaboration system basically manages the project files and won't do anything with the proxies or media files. Their documentation is essentially: that's on you, we recommend something like LucidLink, which can work great, but then you're buying another subscription to LucidLink to store files there. Most people editing on their phone are not going to make that leap. Yeah, we're getting deep into workflows here. But to what we were talking about before, from their press release: people familiar with Premiere's desktop experience will recognize the multi-track timeline with colors and dynamic audio waveforms. So they're trying to make that connection. I don't see anything about syncing projects. Maybe not yet, but I would imagine that's got to be in the pipeline. And to your point about Resolve, I was thinking of Resolve too, but more in the sense that I think Blackmagic made such a good play by making a free version of Resolve, which has turned into the entry point for so many entry-level editors and kids in high school, because it's literally free, legitimately free, you don't have to bootleg it. You start on that, and then it's, oh hey, do you want to upgrade and unlock some more features? I think that's helped Blackmagic over the decade that editing has been in Resolve. Yes, you've got to hook them while they're young and gain more footholds; that's where they learn. So I think this can also be a counter-play from Adobe's end to help get younger people into the Premiere and Adobe ecosystem. For sure. And speaking of ecosystem, I see they have Firefly integration in this app, so you can bring some generative AI elements into it, pending quality. Interesting. And they've got the generative sound effects in there too. That's a pretty capable mobile app. Bring in the good features. Yeah, I would totally pay for that. That's pretty cool, especially if I could do it on the phone, on the go, on the new Apple iPhone 17 Air that's coming out next week, and then push it to the desktop. Oh, I like big monitors. All right, let's see, we've got a couple left. Another one from Hunyuan: their world model. There have been a lot of world models, and I'm 99% sure we've already talked about this because they teased it. Hunyuan World Voyager is now out. This is their world model, a 3D generator, but you can also do a direct 3D output; you can export point clouds to 3D formats from the world that's being generated. That's cool. And they call it ultra long range. I guess what that really means is it has enough retention to go beyond just a few seconds of traversing the world; it can probably go on for much longer, and you can move around a bit. Because I've seen some others, like World Labs, but with those
you're sort of stuck at one position, and you can move in a little circle and get some depth perspective, but you can't really move around. In these demo videos, you're moving through the world a bit. I wonder, if you turn around, like with this village you're moving toward, does what you saw still remain? It seems like it, because I just saw that it has a full map of the thing that was generated. You see this? Okay, yeah, a little village. So it built the full village, and this is just walking through it; or it built enough of the village that you could walk around and get a couple of different perspectives. That's cool. So is this competing head-on with Genie from Google? A bit. Well, Genie was a video generator that could go for multiple minutes, but it wasn't giving you a point cloud. I mean, you could in theory probably extract one if you recorded the video. This is more like, hey, we'll generate the space for you. This is meant to replace traditional world building in games or movies or whatnot. Or prototyping, or maybe uses like autonomous driving, even. Yeah, I'm sure that's really where the money's going to come from. All roads lead to robots; you've been saying that lately. I think we're living in Westworld. I think you need to do another BattleBots; it looks like a documentary now. There are the fighting robots in San Francisco. But no, we're living in Westworld; it's just that instead of a theme park, it's little models that make videos. But the real goal is, you know, AGI. And then with Neuralink, you're in there, and now it's the Matrix. Yeah, we're in the Matrix. All right, the last one is more of a quick update from Google Flow. They're basically making Veo 3 Fast generations unlimited if you have the Google AI Ultra subscription, which is the $250-a-month plan. So it's kind of pricey already, but it was still a credit system, and Fast generations cost credits. Now, and I'd say Freepik started this trend, they're going the unlimited route, so you can make a bunch of Veo 3 Fast generations. I think I talked about this when I was messing with Veo 3: I would do a bunch of Fast generations for cheaper credits, and then once I found a prompt that worked well, I'd run it on full Veo 3. So there was less guessing and less credit spending. Yeah, I think they're doing this to save money internally, because people are making so many mistakes. Please, please stop using full Veo 3; just give them the little one for free so they do fewer full generations. So, let's go. I'm glad; there's definitely a mental unlock when you don't have to keep worrying about credits and you can just keep making stuff. We went from plasma TVs that were $20,000 down to LED TVs that are $200 now. That decline in price was so rapid that we just don't care about TVs anymore; they're cheap, you can pick one up anytime, and you can get a really good TV for not a lot of money. So I feel like that's happening with AI inference, with image and video generation.
At first everything was super precious, super expensive, like, oh, I'm only going to generate ten videos today. And now it's, everything's free everywhere, just go for it. Yeah. And that's pretty much it. Well, that was our AI roundup. Roundup! Sorry, Joey. I think we should double down on this and do actual cowboy stuff, cowboy sound effects. We need some more music cues in this. I can do a Texan accent if you like. Oh, right now? Let's go for it. You always bust out some really good accents out of nowhere. "And this here's our Texas AI roundup, outta the way, yeehaw." Pretty good. Okay, I wanted to be a voice actor. It reminded me of the Big Thunder Mountain voice guy. All right, links to everything we talked about are at the Denoised podcast site. And a big thank you to our Apple Podcasts subscribers. We have hit an amazing milestone (Joey and I high-five each other): a thousand downloads per month, so thank you. Thanks for following along, recommend it to friends, and leave a review if you haven't done so. I appreciate it. And on the last Nano Banana episode we've been getting some cool comments, so thank you for those; Joey's been answering them, so keep commenting. Yeah, we try to get back to everyone. All right, thanks everyone. We'll catch you in the next episode, covering the Apple keynote next week. So we'll catch you there.