Denoised
When it comes to AI and the film industry, noise is everywhere. We cut through it.
Denoised is your twice-weekly deep dive into the most interesting and relevant topics in media, entertainment, and creative technology.
Hosted by Addy Ghani (Media Industry Analyst) and Joey Daoud (media producer and founder of VP Land), this podcast unpacks the latest trends shaping the industry—from generative AI and virtual production to hardware and software innovations, cloud workflows, filmmaking, TV, and Hollywood industry news.
Each episode delivers a fast-paced, no-BS breakdown of the biggest developments, featuring insightful analysis, under-the-radar insights, and practical takeaways for filmmakers, content creators, and M&E professionals. Whether you’re pushing pixels in post, managing a production pipeline, or just trying to keep up with the future of storytelling, Denoised keeps you ahead of the curve.
New episodes every Tuesday and Friday.
Listen in, stay informed, and cut through the noise.
Produced by VP Land. Get the free VP Land newsletter in your inbox to stay on top of the latest news and tools in creative technology: https://ntm.link/l45xWQ
Animating Characters with Wan 2.2 and Wan 2.5
Addy and Joey dive into AI animation workflows using ComfyUI, demonstrating how to transform static images into dynamic characters using Wan 2.5 and Wan 2.2 Animate.
--
The views and opinions expressed in this podcast are the personal views of the hosts and do not necessarily reflect the views or positions of their respective employers or organizations. This show is independently produced by VP Land without the use of any outside company resources, confidential information, or affiliations.
Let me take you through my local desktop version of Comfy. I took a screen grab of me, and I wanted to do two things: a background replacement and a costume replacement, and then relight myself. Welcome back to Denoised. Addy, good to see you. Good to see you, Joey. All right, so you've shared a big spaghetti ComfyUI workflow. What have you been working on? So we did a Comfy basics episode a few months ago, and a lot of our viewers asked for more, and we are now gonna deliver. I have a few Comfy workflows that I wanna show you, and I'll start with the videos first. The first thing I wanted to test was Wan 2.5, which, as we talked about, is a multimodal model. So you can not only input a frame reference for the video you want, but if you give it an audio performance, it'll animate to that audio performance. A while back I worked on a character called Space Cat, which was fully built in Unreal Engine. So I just took a screenshot of that show and put it through Wan 2.5. That's what you're seeing here. This is fully animated with AI; there is no Unreal Engine whatsoever. I wanna compare that Wan 2.5 output to the show we actually made about four years ago, fully in Unreal Engine, using motion capture, audio, OBS, you name it. It was quite the extravagant tech setup, and this is what it looked like back then. Shout out to Jeff Sloniker, who was the character and the chief behind this. I'm gonna mute it and just talk through the tech setup here. This is puppeteered in Unreal Engine using a Rokoko motion capture suit, and it's being rendered in real time, the fur and everything, all of the reflections. Versus today, using just a screenshot from that show, we get this from Wan 2.5. What are your thoughts? I mean, just look at the screenshots. Yeah, they look similar.
My question is, 'cause yes, I know we could get, out of the box, something that kind of looks similar, but how much control are you getting over Wan 2.5 with its performance? Fantastic segue into my next showcase. None, to answer your question. But Wan 2.2 Animate is essentially mocap: it can follow exactly what you want your performer to do. So we can't do that in 2.5? Or any of the other Wan models? Okay, so the question you asked: how much performance control do you have over the hand gestures, the head gestures, and so on? I would imagine a little bit through prompting. You can perhaps prompt for exaggerated or subdued, but not one-to-one, not what you would get with motion capture and a human performer with that intent. I'm gonna show you what I did with Wan 2.2 Animate to answer your question about matching human performance. So I'm gonna play back this video here, and you can see that my performance is pretty spot on, right? It retains it one-to-one, down to my finger gestures. Yeah, your face looks good. Exactly. And you can even see that when my hand touches my face, that hand registration, which is generally difficult to do in computer graphics, is now taken care of by AI. The only thing, and this is a big thing, that it's messing up is the proportion of the character relative to the original character. The big note with Wan 2.2 Animate is that because it's trying to match the character's performance to my performance, using me as the reference, and I have human proportions, long arms and long fingers and a relatively small torso compared to Space Cat, who is a short, stubby cartoon character, it's stretching that model out to fit my proportions. Which is wrong on an animation level and would not be considered production-ready. Right. Okay.
Let's go into the workflows. Yeah. So, just so I'm understanding: you used Wan 2.5, which you can't give a motion performance. Correct. You can only give it an image, and you can give it an audio file to drive the performance. Also, it does not run locally; it only works in the cloud. For me, Wan 2.5 was all done with Comfy Cloud, which I'll show you here. Okay, so you're using the cloud, their new beta platform, so you can run Comfy in the cloud. Yeah, so this is Comfy Cloud, and if you go through all of their templates, there is a Wan 2.5 template, which I'm gonna open up. Let me just search for it: 2.5. There it is. So you double-click that, and now you get the full node tree, and it's a really small one. And yeah, you can see in the top right it's got the 50-cent sign. It's an API node, so it's not running locally, though that doesn't matter because you're in Comfy Cloud anyway. But if you're running Comfy on your own system, this is an API node that you pay for. And 2.5, you can't run locally. That's right. Yeah, I think it's just too big. So here the audio input node is optional. What I'm gonna do is tie the audio input node to the audio input on the Wan image-to-video node here. 50 cents a run, very clear. Does it give you any setting options, like duration or output resolution? Yes, it does. It's all down here. I have the option of 480, 720, or 1080p, and duration I think can go up to 10 seconds; I just did five for testing. And of course you have seed control as well. Then on the saving side of things, I hard-set it to MP4 with H.264 as the codec; otherwise I've seen it have some issues with playback. So that's it. It's a very simple node tree for Wan 2.5 on Comfy Cloud. None of this is running locally. What direction did you give it in the prompt? And did you experiment...
...and kind of find that different prompts could give you some of the hand-gesture performance that you got with the other test? I actually didn't give it any prompting, because I wanted to just have the audio drive the whole thing. Yeah. Okay. You wanna run it now? I'm curious. Give it a prompt like, you know, the cat gestures with its hands; let's see if we can get it to move its hands more. "Character has exaggerated gestures as he talks, points at the camera a lot." Sound good? Maybe change it back to five seconds so this runs a little faster. Five seconds. Let's go down to 480. Here we go. All right, 25 cents. So I'm gonna hit run here, and right away it's gonna go out and grab a free GPU. It's already executing, so if you look at the queue here, it's running, and it even has a history of my past renders. The last one took 155 seconds, and that was at 10 seconds of video. This was 2.5? It was 2.5. Okay. So I think within about half that time, 75 seconds or so, a couple of minutes, we should see a result. All right, Joey, so that came back super fast. Here we go, we're gonna play this back for you. Well, it's gesturing. What do you think? I can't hear it, but it has more of the, I mean, it's kind of doing finger guns. So yeah, trying to control physical performance through prompting is obviously not the right way to do it, but it's kind of there. I guess if I were to get really granular, I would go prompt structuring: you know, at the one-second mark, move this hand; at the two-second mark, move that hand. You would go the JSON route. Yeah, yeah. See how that works out. So I just wanted to quickly show you what Wan 2.5 is capable of on Comfy Cloud. This is something anybody can access today. You don't need an Nvidia GPU; you can do this on your Mac. Yeah. Wan 2.5.
I did find it was the best, 'cause I was working on a project where we had audio and I wanted to drive the performance of an image, and I tested everything where you could give it an audio input and an image. What's the other main one, Kling? Does Kling have that? I think you can put audio into Kling to drive a character performance. I wanna say Kling was the other close competitor, but Wan 2.5 was really good at producing something that felt natural, or as natural as it could be given all the limitations and restrictions of being just an audio-driven model. 'Cause a lot of these other models feel very designed for UGC videos, so they have very weird, over-the-top performances. Wan 2.5 felt the most realistic of the available options right now, so it was really good for that. Yeah, this is Kling's node tree, and I don't see an audio input node here. I don't even know if it's in the API; I think it would be on Kling's website, 'cause they have more options on the website. So now I want to get into Wan 2.2 Animate. This is a much more complex node structure, and I'm gonna walk you through what each of these blocks is doing. Just know that you don't have to build any of this yourself. What you're looking at here is on Comfy Cloud; it's fully built out for you. Then I'll show you the one that I actually ran locally on my desktop with Comfy, the application. We'll get into each one of those blocks. So this is on Comfy Cloud. Unfortunately, I think because this is in beta, there are some issues with it; I couldn't get it to actually output anything. Oh. But it was a good learning moment to just learn the node tree. Okay.
And just to clarify, this workflow they're looking at right now is the Wan 2.2 Animate template that's built into Comfy. Correct. Okay. Yeah, so you can go browse the templates, and what I typically do is just type in "Animate," and it will have the Wan 2.2 Animate character animation template. Double-click on that, and it takes you to the whole thing, which actually has a sidecar YouTube video with this gentleman driving it. He doesn't tell us his name, but he's the guy that basically drives it. Yeah, he's also the face of the Comfy YouTube channel. He's the "get comfy with Comfy" guy. Shout out; we can't remember your name right now, sorry. Please introduce yourself in the next video. So this is the node structure. They have it built up into sections, and they also have really good documentation: they have notes everywhere and things identified for what you would need to change for whatever you wanna do. 'Cause there are really only three or four nodes in here that you need to mess with to do what you want. Everything else is already set up, and it's just: don't mess with it. Exactly. So if you don't need to mess with anything, just don't; don't disconnect the node tree or anything like that. I think the most cumbersome part of any workflow is finding the models and loading the models, and being on Comfy Cloud kind of just takes care of that for you, so this is saving you a ton of time. So these are all the models it's looking at. The actual model itself is Wan 2.2 Animate, 14 billion parameters, fp8 e4m3fn. I think this is very important, because this is the precision the GPU actually computes in; it could be an e5 or an e4 variant depending on your GPU. I just ran with what they have. Actually, it doesn't matter, 'cause it's on the cloud.
I don't even care what GPU it is. Yeah. So when you load this up on the cloud, is everything already, quote, downloaded and ready to go? Yeah, for sure. So I just opened the template here, and you can already see that the two LoRAs that are needed are loaded. This is the actual model itself, the heavyweight big-boy model; it's already in there. If you click on it, it takes you to load other models, but it's all in there. Oh, I haven't even seen that new UI yet. That's interesting, 'cause before it used to just be a little dropdown, and you had to decipher from the long text name which model you were looking for. Okay, that's a nice new interface. Cool. And then the CLIP encoder that complements the Wan model is also here, and again, it's in that same format, e4m3, as well as fp8. These things have to match; the CLIP encoder definitely has to match the model. Okay. And by default these things are already set up with the models you need to run. I would also say, if you are running this locally, you know, if you have a PC that can handle it, when you load this workflow up and you don't have the models, it'll pop up with a big window that says models missing, and there's just a little button to download them. So it's very easy on a local machine to set up the models it needs; it'll automatically download and install them for you. Pretty straightforward. I'll go through that when I show you my local instance of Comfy, where I actually did the manual work. Perfect. So once you have your models loaded, step two is your prompting; very simple prompt here. And then there are some global parameters, like the video size.
Here you can define if you want a wide aspect ratio; you can put 1280x720, 1920x1080, what have you, and this will propagate to both the incoming image and the output video. All right. Did you try full HD? I tried 720p for most of my generations. I did 720 too. I don't know if it can do full HD; I'm curious, I should try it. I mean, I'm sure it could do 1080p, but I don't know if it was trained on it. I remember looking up the specs, and I think 720 was the max. I'll try it in the future; I'm curious. All right, so Wan 2.2 Animate has two different types of solve, and it has a note here about it. If you read the note, it has a mix mode and a move mode. Mix basically retains the background image with the character being replaced, and I'll show you an example of that. Yeah, and this is what we sort of talked about in the last episode, when we discussed this at a high level and I experimented and it turned me into an elf, 'cause I was trying to drive the performance of an elf. Yeah, so this video you're seeing of Goku being animated, that's also done with Wan 2.2 Animate, but it's using the mix mode, where the background is retained and just the foreground character is comped in. We're not gonna go with that; we're gonna go with the move mode, which replaces the entire frame. In this example here, I'm gonna just upload my image; give it a second to upload. So this is your reference image; this is what you want your starting frame to be, roughly. Yeah, and it's not gonna animate the background, obviously; it's just gonna animate the character, but it's replacing the background from my performance. Okay. So if you were doing mix mode, you would take a screenshot of you or whatever you wanted, modify the character with something like Nano Banana, and then give that as your starting frame. Totally could do that.
Yeah, exactly. So once you have the mode selected, the next thing, if you are doing mix mode, is you have to dial in which part of the character you want masked and which part you want left alone. All right, so I'm gonna go ahead and give it the video performance; takes a second to upload. And your video performance is a vertical video? Yeah, it still worked and everything. Okay, interesting. This is my video performance, frame by frame. And then, once this loads up, this will switch to an image of me where I can move these dots around, and add more of these dots if needed. Yeah, and I think it says it in the notation: if you hit run, it'll run the cycle, grab the first frame of your performance video, and then load that frame here. Because what you're clicking on right now is not going to match up. You could either hit run, and it'll grab the first frame from Addy's performance and load that into the box where he was clicking the green checks, or you could just upload your own reference image. Right. So let me take you through my local desktop version of Comfy. Instead of that Space Cat example, I ran another example of what I would call live action. I took a screen grab of me at this very table, just wearing a black T-shirt. And I wanted to do two things: a background replacement and a costume replacement, and then relight myself. So I went into Freepik and did that with Nano Banana, and here are just some of my notes: I had a poster to remove, a bunch of stuff on my table that I wanted to remove, remove my watch, and so on. And then I ended up with something like this, and Joey was like, hey, you look like you're in Breaking Bad. Yeah, cooking. So I was like, okay, maybe a different look. But I love the apocalyptic bunker and the whole vibe with the light on top.
Above your eye? I don't think I've noticed that before. Yes. So I ended up with this still image. It slightly distorted me, like it added, I don't know, a few pounds, and it's not my face a hundred percent; it's like 99% my face. Yeah. But the nice thing is it added some dog tags, which I wanted, a distressed sweater with holes in it, and then a gun rack in the back. The background, to me, feels very Unreal Engine Marketplace; it feels synthetic, but it was a good enough test to try. Right. So here is the video of me. You can see that my dog tag is moving slightly; there's physics on my dog tag, which is so cool, and so unintentional. And my performance is there, right? My hand, and me touching my face and exaggerating. It did mess up my face slightly, which I think I can address next time, and I'll get to that. But overall, I'm super happy with the results you're looking at, and I did this with Wan 2.2 Animate. So this is exactly the workflow that I used. Right? So basically everything we covered before with the workflow on Comfy Cloud. Your reference image is the image you modified with Nano Banana. Yeah, scroll there. Yeah, this is the reference image from Nano Banana, and then this is my driving video. Okay, so with this one, did you do the masked transfer, or did you do the complete replacement? Complete. Okay. So let's show everyone how you would modify the default workflow to switch the mode. It's really easy. I want to give a shout out to whose workflow this is, real quick: this is a YouTuber named MDMZ, whose workflow I downloaded. So shout out to MDMZ. So you're not using the default Wan 2.2 workflow in Comfy? No, the default Wan 2.2 workflow is just a little too excessive and too crazy.
So both MDMZ and another YouTuber that I follow took a lot of the extraneous stuff out and just made it much simpler. Okay, so what is different about this workflow? I thought you were on the default one. So this workflow is specifically dialed in for replacing the entire frame, which is not the mask mode I showed you earlier. It removes a lot of that feature, and the way it does it is actually really simple; let me show you. So is this workflow just built for a full transfer? Like, is there an option to do the masked pose transfer, or no? This workflow is just: you want your video to drive the performance of a completely new frame, exactly what you said. And the way to do that is really easy. You can take the default workflow, or this workflow, and if you just remove the connections to the background video node and the character mask, the model defaults to animating the whole frame. It no longer does masking. Yeah. If you are in the default workflow, what Addy is showing, the background video, the character mask, it's not the most intuitive, 'cause you have to kind of trace the nodes, but you basically have to delete those connections to the node in order for it to bypass it. And it says in the notes: you can't just bypass the nodes; it doesn't work. You have to delete the actual connection. So it's a little annoying to switch modes. A little hacky, yeah. I wish there was a nice switch for it, but there isn't. But this workflow that you downloaded, by default, is already set up to do the full character performance driving a new frame. Correct. Yeah. And actually, I can run this locally and show you; it does some really neat things. One of the things it does is it takes my performance as a video frame.
It'll do some sizing, upscaling, downscaling stuff, which is all built in here. And then I have a pose estimator node here, which is detecting my hand, body, and face. You can of course turn off the hand tracking if you want, or turn off the face; you pick the parts of your body that you want tracked. Wan 2.2 Animate's magic node is really this, which is essentially running mocap on you to determine what your body pose is and what your face pose is, and then feeding that into the diffusion model so it can generate the character in that pose. So I'm gonna quickly run it here. Now you can see it's going through the nodes; whichever node has that green box around it, that's the one running in real time. Yeah. So you can see the pose estimator running to completion, and then it does something really cool: it creates a frame-by-frame estimation of your facial performance, which you see here, as well as your body performance, which you see here. And it's using this to animate the character in the diffusion model. Yeah, that's cool. Yeah, you can hear my GPU spinning up in this room, and you can look at the green task bar on top. Also, if you look up here, my GPU's at a hundred percent and the temperature is climbing. What resolution were you running this at? Great question. Just like the last model, this has a global resolution dial; I set it to 1280x720. This then propagates into the entire workflow from these parameters. And how long has it been taking to make one, and what are you running on? Great question. I am running an RTX 3090, not the best GPU, but not too shabby either. And for a two-to-three-second clip, very short, I think we're looking at maybe five to seven minutes or so. Okay. Oh, also, other question: did you keep the frame rate?
I think the default it leaves it at is 16 frames per second. Yes, but actually, I matched the frame rate to the incoming frame rate, so it's gonna track at 30. You changed that? Yeah. Let me show you, on this particular workflow, where it is. It might be on the output, where it says save video. Oh, right here. Okay. So there's a node here called Get Original Frame Rate. This looks at the input video that I gave it, and I shot that at 30 FPS; this show is shot at 30 FPS, that's the standard I use. And that goes into the Create Video node, so it overrides the 16 frames per second it had before. That's a nice improvement, because in the default Wan 2.2 workflow it's just a text field that defaults to 16 frames per second, and you have to manually change it. So it's nice that this grabs the frame rate of your source video, 'cause obviously you'd want it to match. If you run the default Wan 2.2 Animate, play the video, and wonder why it looks like it's playing choppy or slow, it's probably because the frame rate is 16 frames per second. The main reason I wanted to do that is because of this side-by-side video that we're gonna show you; I wanted it to be frame-by-frame accurate. Mm-hmm. I was putting my VFX artist hat on, and I was like, you know, I want to do background replacement and costume replacement and then have it feed back into my VFX pipeline, so of course it has to be the same resolution and the same frame rate as the incoming video. Staying with this workflow, 'cause I'm curious what else it offers: what's all this stuff on the bottom that's turned off? Yeah, so this is the video extend. In Comfy, purple means bypassed, so none of this is actually in play right now.
Actually, this should look more like this. What this means is that the built-in Wan 2.2 Animate node, and shout out to Kijai, who built all of this; Kijai's Hugging Face repository is what all of this is based on. The way Kijai built the ComfyUI plumbing, a generation is at most 81 frames long, and if it's 30 frames a second, that's however many seconds you get. To get past 81 frames, you have to chain multiple inferences together, and the way you do that is with these nodes; you can actually copy and paste to add more. It'll generate the first 81 frames, take that last frame, generate the next 81 frames, take the last frame, and so on. So this would basically be determined by whether the source clip you're trying to modify is longer than 81 frames; if so, you turn these on? Yes, exactly. Okay, got it. That makes sense. So that is the Comfy local workflow in a nutshell. Yeah, I'm really happy with the results. Look, I'll tell you, it's not usable for current-day feature film or TV production. Having said that, if I were to build a short film or a YouTube channel or any sort of short-form content around, you know, putting myself in a post-apocalyptic world and then doing a podcast or a broadcast from that world, this is totally usable for that. Yeah, and I think just the performance recognition and pose transfer, it's been the best I've seen out of all the options out there at really matching the source fidelity. I agree. A quick note: I know some of our viewers are very advanced ComfyUI users, so maybe for them this is not the most advanced video. But for those of you who are just coming into Comfy, you've done some basic image generation stuff, and now you want to get into video and animation, I think this is a great resource for you.
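The 81-frame extension logic described above (generate a window, seed the next window with its last frame, repeat) can be sketched as a chunking function. The one-frame overlap is my reading of "take that last frame, generate the next 81 frames"; check Kijai's nodes for the exact hand-off.

```python
def chunk_frames(total_frames: int, window: int = 81) -> list[tuple[int, int]]:
    """Split a clip into generation windows, where each window after the
    first is seeded by the last frame of the previous one (1-frame overlap).
    Returns [start, end) frame ranges."""
    if total_frames <= 0:
        return []
    chunks = [(0, min(window, total_frames))]
    while chunks[-1][1] < total_frames:
        start = chunks[-1][1] - 1  # reuse the previous last frame as the seed
        end = min(start + window, total_frames)
        chunks.append((start, end))
    return chunks
```

At 30 fps a single 81-frame window is only about 2.7 seconds, which is why anything longer than a short clip needs these extension nodes turned on.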
So again, watch this video, then go to the Comfy video that we'll link to, as well as MDMZ's video. I think it's a great way to get acclimated with Comfy in general. Yeah, and like we said at the beginning, a lot of you might load the workflow up, see a bunch of nodes and lines, and find it intimidating, but with a lot of these things there are really only three or four things you need to change; everything else is pretty much set up for you. I'm curious if you experimented with different angles with Wan 2.2, because one issue I had with Wan 2.2 Animate was that the reference image I was giving it was a character in sort of a profile shot, and the driving performance was something like this, where I'm looking straight to camera, and in the outputs it would always keep shifting the character reference to face forward to match my driving video. Did you mess with different angles at all? I have not done it in Wan 2.2. I've been trying it with Nano Banana, now that they have the camera control thing. Is that a Nano Banana thing or a Freepik thing? Yeah, it's in Freepik, but it is powered by Nano Banana. So let me quickly show you; it's actually a picture of you, Joey. This is what I generated the other day and shared with you. So right away, to me that doesn't feel a hundred percent like Joey; it automatically messed some things up. It buffed you up. I approve. The reason I wanted to try it here is because that's not my face, you know; it's kind of like my face, but not really. So I wanted to give it a front performance with a 45-degree three-quarter view of me as the reference image and try to generate that. This is something I'll try next: I took a screen grab of our last episode, where I was wearing this T-shirt. Okay. And you rotated the camera around the character.
So yeah, that's what I'm wondering: if you gave it that image as the first-frame reference along with your character performance video, what would happen? The thing I haven't experimented with is recording your character performance from the same perspective as the image you're giving it, to see if that solves the issue. So if anyone has done that, or experimented with angles that aren't just straight on to camera, I would love to know; let us know in the comments. Yeah, and in general, if you have been playing around with Wan 2.5 or Wan 2.2 Animate, let us know in the comments what your thoughts are, what the strengths and weaknesses are. We'd love to hear that. So thanks everyone for watching. Like we said, let us know in the comments what you thought, and if you've got any other questions or anything else you wanna see with Comfy, we'll try to figure it out and post about it. Links for everything we talked about, as usual, down in the show notes or over at denoisedpodcast.com. Yeah, so if you wanna see another Comfy episode, now is the best time to let us know, 'cause we're doing this; let us know the next subject you want in Comfy, and we will dive right into it. Quick shout out to Mariana from Promise, who I forgot to shout out when we were talking about Promise the other day. She's the head of product, and she actually invited us to the event that Joey and I went to. Fun fact: her son is sick of our voices, because she listens to us in the car when she picks him up from school. Mariana, number one fan. Thanks, Mariana; keep playing us for your son. All right, thanks everyone. We'll catch you in the next episode.