Denoised

McDonald's AI Christmas Ad Pulled After Massive Backlash

VP Land Season 4 Episode 75

Disney invests $1B in OpenAI, allowing Sora to use Disney characters in videos. Plus, we dissect the McDonald's Netherlands AI ad controversy, analyze OpenAI's new image models codenamed Chestnut and Hazelnut, and explore Sync's React-1 tool for modifying character performances.

--

The views and opinions expressed in this podcast are the personal views of the hosts and do not necessarily reflect the views or positions of their respective employers or organizations. This show is independently produced by VP Land without the use of any outside company resources, confidential information, or affiliations.

All right, welcome back to Denoised. In this episode we're gonna talk about the new OpenAI and Disney deal, the McDonald's AI commercial that was a fail, and a couple other updates. You ready, Addy? Let's get into it. All right, welcome back. Good to see you again, Addy. Good to see you too, Joey. So yeah, as we were running down this briefing, I think there's stuff here that was news to you as well, so I'm gonna be reacting live along with the audience in a very authentic way. It'll be a journey. All right. The first story, and we're talking about it now because it's already come and gone, was another AI ad from McDonald's Netherlands. And it got so much backlash that they pulled the ad. Really? Yeah. Spoiler alert. You know, I've got some thoughts about this in general, but you haven't seen the ad, so let me play it from Twitter, which has still kept it up. [The ad plays: a parody of "It's the Most Wonderful Time of the Year," recast as "the most terrible time of year," with burning cookies and holiday chaos, ending with an invitation to flee the madness and hide out at McDonald's till January.] All right, where do I start? So that is the ad. Yeah. My general thought was just, it's a terrible ad. Okay, that's exactly what I was gonna say. It's a terrible message. The world is going through a lot right now; this is the last negative ad we need going into the holiday season. Yeah, this is supposed to be the season of fun and cheer and hope, take it easy, get a break at the end of the year. Who wants to hear an ad about it being the most terrible time of year?
And also, it's so not on brand for McDonald's, because they don't generally play in this risky, adverse area of brand messaging. They're very safe, right? They've been around forever. It's Ronald McDonald and a lot of happy stuff. Yeah. And in the ad, McDonald's is supposed to be the respite from the Christmas chaos, go hide out at McDonald's until January. But that's the wrong positioning. Oh yeah, it doesn't work. And I was trying to nitpick the AI part of it, and honestly, the AI generations look good. They're edited and cut together well. The sound design's great. Yeah. And this is a concept that would be, I'm gonna guess, too expensive to do practically, shutting down a street and having Santa with actual reindeer in the road. It's one of those things where, when you tell people it's AI, it just triggers everyone. If this had played on TV without that, I think people wouldn't have blinked twice. The ad itself is bad. It's a terrible message, but the production quality is whatever. Yeah. AI just happened to suck in all that negative energy. AI was the cherry on top that gave everyone with pitchforks even more reason to revolt: it's a terrible ad, and it's made with AI? Get outta here. Which just brought out the flame torches too. So yeah, I think it launched Monday or Tuesday, and then yesterday McDonald's pulled it. It's so not their brand. I mean, you and I both grew up on McDonald's, right? I'd put them in the same category as Coca-Cola or Ford or Walmart, those very safe, all-American, wholesome brands. Now, this is McDonald's Netherlands.
I don't know how their corporate structure works and how much autonomy they have, but obviously no one picks up on that distinction. It's McDonald's. And whoever directed that ad needs, like, a deal with Shudder, the horror streaming channel or something. Somebody really wicked over there would rather be working in a different genre. I don't know what studio it was. The other quote people latched onto was them saying that they "hardly slept for weeks while writing AI prompts and refining the shots. AI didn't make this film. We did." I think people grabbed their pitchforks over that line too. Yeah. Having played around with some of these video tools now, I can actually see the amount of time that goes into each of these generations. You're literally waiting minutes per generation, even if you're running multiple things at once. Hours go by, and before you know it, your whole day's gone. This is quite a lot of shots, and the quality's pretty good, putting the creative aside, of course. Yeah. People are making fun of the "AI didn't make this film, we did" and "hardly slept writing AI prompts, refining the shots" lines, but it does take a lot of work to do these. It's not just put a prompt in a box and poof, you get an ad. In this case, though, all that work got them a bad ad. Okay, well, thanks for bringing it up. Yeah. One thing: what did McDonald's say? McDonald's Netherlands removed the video, adding in a statement that the moment served as an important learning as the company explored, quote, "the effective use of AI." Oh my God, they didn't get it at all. No, it's not an AI issue, your concept just sucked. Yeah, I'd say it's a secondary AI issue, in that AI makes it so easy to generate bad content that you can still blame it for letting you make the thing you shouldn't have. Right.
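The "hours go by" point is easy to sanity-check with back-of-the-napkin math. A minimal sketch; every number here is an illustrative assumption, not anything from the ad's actual production:

```python
# Rough wall-clock estimate for iterating on an AI-generated spot.
# All inputs are illustrative assumptions, not real production figures.
import math

def wall_clock_hours(shots, takes_per_shot, minutes_per_generation, concurrent_jobs):
    """Estimate total waiting time when generations run in parallel batches."""
    total_generations = shots * takes_per_shot
    # Jobs run `concurrent_jobs` at a time, so waiting happens in batches.
    batches = math.ceil(total_generations / concurrent_jobs)
    return batches * minutes_per_generation / 60

# Hypothetical 30-shot spot, 10 takes per shot to land a keeper,
# ~3 minutes per generation, 4 generations running at once:
print(wall_clock_hours(shots=30, takes_per_shot=10,
                       minutes_per_generation=3, concurrent_jobs=4))  # -> 3.75
```

Even with four jobs in flight the whole time, that's most of a workday of pure waiting, before any editing or sound design, which is consistent with the "hardly slept for weeks" claim.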
It feels like, does that make sense? Yeah. It feels like Jeff Goldblum in Jurassic Park, you know: everyone was so busy asking whether they could, nobody stopped to ask whether they should. If they should, yes. Well, I was gonna say: we can make this concept super fast and easy. Should we? Exactly. Yeah. I thought you were gonna say "life finds a way," as in bad content finds a way. Bad content will always find a way. We don't need AI for that. No. All right, next up. This one just broke this morning as we're recording: Disney and OpenAI have announced a partnership. Disney's going to invest, not a hundred, I'm sure they wish it was a hundred billion, Disney's going to invest $1 billion into OpenAI. And it's part of a three-year licensing agreement where Sora will be able to use Disney characters from across the Disney universe in Sora videos. So, memes, like the Sora app. Yeah. So there are a couple of pieces to this. People will be able to use Disney characters in Sora-generated videos. Okay. And then there's some link where some of these videos have a chance to then play on Disney Plus, which is not surprising, because in Disney's quarterly earnings report Iger mentioned they were exploring consumer-generated AI content appearing on Disney Plus. So this is probably part of what that meant. But here are the things I'm a little fuzzy on. You can run Sora as a consumer and call up Disney characters, but it's not clear if you can use the cameo function. Because we had talked, when cameos first came out, about how this would be a great use if they licensed IP from other studios: you could be in a Marvel film with Iron Man, or in Frozen or Encanto or something. That's not clear.
It's not quite clear from the statement whether you'll be able to put yourself in these, or whether this is more like some sort of elevated fan fiction where you can just make your own episodics using Disney characters. It's an interesting avenue for them to go down. I see it as, like, a choose-your-own-adventure type thing with Disney IP, perhaps using the Sora engine. I was just thinking of the most rational version, which would be OpenAI custom-developing video and image generation technology, or even audio technology, for Disney productions. Is that part of any of this? I don't think that's part of the public announcement, but reading between the lines, and I can't find the exact line, it was something to the effect of: they're not just investing a billion dollars into OpenAI so people can make fan movies. It was for the tech, to figure out how to integrate the technology across the board. My personal opinion is that Disney undersold themselves here. Honestly, OpenAI should pay Disney a billion dollars to use their IP, because Disney IP is probably some of the most recognizable on the planet. For sure. I mean, I'm thinking the studios might be trying to figure out how to make their own AI models that are good, but that's pretty hard and they're kind of behind. They can't really invest in Google. So out of the big players with good video models, OpenAI is one of the options. It's kind of like going back a few years to Microsoft and OpenAI, where Microsoft got a foothold in ChatGPT: out of all the available companies with good video models, OpenAI is one Disney can latch onto, put some money in, and partner up.
Get a leg up, a jumpstart in developing their own models for whatever behind-the-scenes uses are probably going to come into play that aren't in this agreement or announcement. Yeah. I think there would only have been a handful of companies big enough to serve a customer as big as Disney, right? You're absolutely right. It's OpenAI, Google, Meta, and maybe a few other companies. What I thought was gonna happen was that Disney would make multiple partnerships across the different avenues of their business. So for example, theme parks have a very specific need for AI, maybe physical AI with robots; that's an NVIDIA partnership, perhaps. And entertainment, because it's image- and video-heavy, would partner with perhaps Google so they could use Veo, and so on. I thought they would segment it out better. But a blanket OpenAI endorsement across all the business channels? To me that seems too broad. Yeah. And the other wild card, and we talk about it on the show all the time, is that some of the best models on the planet are Chinese models: Alibaba models, Tencent models, Kuaishou models. Disney has a great relationship with Chinese companies; why not exploit some of that? Well, they're an American-based company. I don't know, the optics might not be the best. Yeah, during the current administration, perhaps not. Yeah. So I think, if there's an opening and they can invest in one of the biggest US AI startups with a decent video model, it's a good opening. But yeah, I'm curious, because this can't just be a billion dollars to make fan videos. There have gotta be other uses, under-the-hood opportunities. Or maybe OpenAI gave Disney a sneak peek at some of the artificial general intelligence stuff they're working on.
And Iger was so impressed, it's like: I can replace a thousand people with a thousand of these AI agents. Let's go. So maybe it's that. We'll see how it plays out. Yeah. We saw how the Runway and Lionsgate partnership went, so yeah, we'll see. Maybe there are grand visions of everything you could automate, but then reality hits. But there are still a lot of efficiencies you could unlock, or new possibilities; maybe it just enables making a bunch more episodic IP a lot faster. One thing I did think was interesting, as far as intellectual property and likenesses, especially relating to real people: for users, the Sora deal will only let you draw from a set of 200 animated character and creature designs from Disney, Marvel, Pixar, and Star Wars. So no characters tied to a human actor's likeness. Not like a Hawkeye or somebody like that. And it says it at the bottom, too: iconic animated or illustrated versions of Marvel and Lucasfilm characters. So not Harrison Ford's Han Solo, but animated Han Solo. Not Robert Downey Jr.'s Tony Stark Iron Man, but animated Iron Man. So none of these deals, it seems, relate to the actual SAG actors who portrayed these characters. They're either characters that were animated from the start, like Frozen and the Disney stuff, or the animated versions of the Marvel and Star Wars characters. Yeah. I could see it on that side, the actors and their likenesses. The other side is that it doesn't dilute Disney's primary output, which is theatrical and the stuff that goes to streaming. Yeah, there's a quality bar that will segment that away from whatever this is going to generate, and this is going to be lower-tier stuff.
Yeah. And in that realm too: okay, you're a fan, you make some fan video with Sora with these characters, and it ends up on Disney Plus. Do you think there's an appetite where people are gonna want to watch those videos? No. People still go to YouTube for that. I don't think Disney will ever build a UGC-oriented platform. Well, actually, I remember you talking months ago about how you're a big fan of the stormtrooper vlogs. Yeah, I love those things. What if someone made, you know, a version of that for, like, Avengers Tower? Vlogs of a worker in the tower, and that ended up on Disney Plus. The problem is that Disney Plus is so moderated and curated that by the time something publishes, it loses all of the appeal, all of the sizzle, versus somebody who genuinely loves Star Wars just making the thing they think is most relevant to the lore and putting it up on YouTube. It's raw; it hits all the notes a fan wants. So I think that's the difference. Disney Plus is this big, regulated, safe machine, and any user-generated content just goes in there and dies, in my opinion. Yeah. I'm just curious, and we might be the wrong audience for this, whether kids would pick up on it. They're not gonna know or care, but if they're in the Disney Plus ecosystem and they've run out of however many rewatches of Frozen they can do, and there's a whole Frozen expanded universe of fan-made Frozen films, will they watch that? Will that keep them around? I think in the short term it'll certainly generate buzz. And what's important for Disney is whether they can tie it to an ad-tier revenue model.
Then that's easy, almost passive income for them, because they're not really making any of that content, but they're charging for the ads around it. That's also another question that wasn't really addressed: if you're a fan who makes a film that ends up on Disney Plus, is there a rev share? Are you getting paid? Are they licensing it from you? Are they doing any kind of YouTube-style ad split with you? That wasn't really clear. So are you just making it for the love of the game and the bragging rights that something you created is on Disney Plus? Or will this turn into an actual industry? That's a really good point. One of the reasons YouTube is as successful as it is, is that it actually allows creators to make a living and make real money, right? So Disney would have to almost build an economy overnight for that. And the other thing: it's good research for Disney, because they can see what characters, what storylines, what kinds of things people gravitate toward and want to see, and how they're using this, which could inform future stuff they actually produce, because they get better audience data and research. You talked about this before, when it came to Showrunner, I think: you make a thousand shows, the studio looks at the top ten that perform really well, and turns those into its own IP. Yeah, potentially, if they're savvy enough to see that opportunity and use it. But this feels like a really broad stroke, like taking an axe where a scalpel would be better. I don't know if this is the right move for Disney, certainly in the image and video generation world, I mean.
So many other models are better than what Sora is today, or than their image generation model. I know they have other stuff coming out, but OpenAI's main focus is really LLMs and artificial general intelligence. I think the image and video stuff is just a side hustle for them. Yeah. But also, we only sort of know what it's capable of. I mean, the IP they're focusing on is animated and illustrated, so that's a little easier to do. And we know they drastically watered down Sora after the first week, when all of the content owners hit them with, stop. So I'm gonna guess this is a more powerful under-the-hood version that Disney can unlock because they're giving it their blessing. Plus, who knows what's in the pipeline or what they haven't released. Obviously they're not the Sora of 2024 that blew everyone's mind about what was possible; that's a distant memory compared to everything we have now between Veo and Kling and Seedance and all the other models. Did you hear about the code red at OpenAI? I did hear about the code red. Explain to our listeners what that is. I just saw the headline, so maybe you know more of the details, but basically they felt like Google has definitely caught up to them. Yeah, and surpassed them in a lot of facets. Basically, they've gotta catch up. They got beat. They got their asses whooped, man. Yeah. Maybe they were so far ahead of the game that they fell asleep at the wheel a little bit, and now they've gotta speed up again. Right? Is that accurate? Was there anything specific? Yeah, Gemini outperformed ChatGPT in a lot of intelligence testing. Obviously Veo 3.1 is a better model than Sora 2.
Google's Nano Banana Pro is better than whatever ChatGPT has nowadays. So yeah, Google's beating OpenAI on all those fronts. On top of that, I think OpenAI is in financial trouble at the moment, because although they have billions of tokens being called every day, their API usage is bonkers, the actual revenue from that is not covering the cost of basically running the company. Yeah. I know there are these circular deals getting announced: someone invests in OpenAI, OpenAI invests in or partners with a data center, the data center buys Nvidia chips, Nvidia invests in OpenAI, and round and round it goes. That's it. Yeah. I don't know if our audience is interested in the AI bubble and what would happen if OpenAI went away, or if Nvidia went away. These things are highly unlikely to happen, but if you're curious, comment and let us know; I would love to do a deep-dive episode on that. Yeah, if that's of interest. I think we've maybe skirted around how wonky we get, and we're not industry experts on that side, but I'm more curious, from a Hollywood perspective with all the crazy stuff happening in this industry, what it would mean here if there were a bubble burst or crash in the tech and AI world. Because it's not like this industry is insulated from the repercussions if that crashes. I'll totally do a deep dive on it if you guys want me to, so hit the comments down there. Yeah, let us know. All right, speaking of OpenAI and new image models: you saw this one, Addy. There's rumored to be a new OpenAI model popping up in LMArena. This is how we all found out about Nano Banana before it was publicly released.
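The "round and round" structure they're describing can be sketched as a tiny directed graph. The parties and edges below are a simplification of the deals mentioned in passing on the show, not a precise map of any announced agreement:

```python
# The "circular deal" structure, modeled as a directed graph.
# Edges are a simplified illustration of the conversation, not exact deal terms.
deals = {
    "OpenAI": ["Data center"],    # OpenAI commits spend to a data-center partner
    "Data center": ["Nvidia"],    # the data center buys Nvidia chips
    "Nvidia": ["OpenAI"],         # Nvidia invests back into OpenAI
}

def find_cycle(graph, start):
    """Follow each node's first edge from `start`; return the path once a node repeats."""
    path, node = [start], start
    while True:
        node = graph[node][0]
        if node in path:
            return path + [node]
        path.append(node)

print(" -> ".join(find_cycle(deals, "OpenAI")))
# -> OpenAI -> Data center -> Nvidia -> OpenAI
```

Starting from any party, you end up back where you began, which is why the money flow reads as circular: each participant's revenue is partly another participant's investment.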
It showed up in LMArena and started blowing everyone out of the water. So there's a new model, or two new models; the code names are Chestnut and Hazelnut. Key observations from this post, from Marmaduke 91: world knowledge similar to Nano Banana Pro, which makes sense, because they've got ChatGPT, so it has that world understanding. It can generate celebrity selfies with very similar quality to Nano Banana Pro, and it can write code in images very well. Okay. So there are some pictures, possibly generated with these models, of a bunch of celebrities. That's not, I don't think that's good. Yeah, it has a pasted-together look: the lighting conditions are different for everybody, and they're all hitting the same weird pose with their teeth, except for the duck face on the right. This is not real at all. I mean, it's sharper and better quality, I would say, than whatever the current ChatGPT image model is, 4o. But I'd say this still seems behind Nano Banana Pro. Yeah, this is like a year back. I think we would've been happy with just having everybody's face replicated correctly, but now we're looking at how it all melds together, how it integrates into the same environment, whether it evokes the emotion from the prompt, and all those other things, and I don't think it's there. Also, someone pointed this out online too: whatever happened to blocking the generation of celebrity likenesses? It seems like most of the models have stopped blocking that, and there wasn't really an explanation or announcement of, we're not gonna block celebrity likenesses anymore. Nano Banana Pro pretty much lets you do it. Pretty much all the models used to treat that as a big blockage before.
Now they all kind of let you run wild with it. Yeah. Maybe the celebrities just want their faces out in public more, so they don't care. I don't know. And this was a sample demo of text on a board. I mean, look, sharp text, but it looks like printed text. It doesn't look like the hand-drawn demos we saw from Nano Banana Pro, when you asked it, hey, make a whole flowchart on a board of how the system works. This just looks like it's spitting out info. It's catching up. It's catching up to Nano Banana 1, I would say. Nano Banana Pro is still a little bit ahead, and Z-Image is up there for me, Seedream 4 is up there. So yeah, it's nowhere near those elite image models at the moment. So you're saying they gotta keep that code red going? Still code red. Code brown. Do you know that joke, code brown? No. What's code brown? Code brown is when you have to go to the bathroom. I mean, okay. You're saying that's from a DreamWorks movie? I think it's the Penguins of Madagascar or something. Wait, what Penguins movie? You never saw the Penguins movie? They had their own movie. No. Oh my God, Benedict Cumberbatch was in it. It was great, so well done. All right, I'll have to check it out, ten years later. I'll put it on my Letterboxd queue. I think I have the DVD somewhere; you can borrow it, if you have a DVD player. I was gonna say, I don't have a DVD player anymore. Okay. And then the last update: this is a new model announcement from Sync called React-1, and it's basically a character performance modifier. Right. It looks really promising from the videos you're showing me, Joey. Yeah. So we're watching their announcement video, and the interesting thing, I mean, the UI is interesting.
It works, obviously, on AI-generated videos, but you can also upload your own videos. And again, I know we talked about something similar last week, I don't even remember the model name, maybe it was Kling. I thought it was another model that was about performance changing. Anyway, putting aside all the legal and ethical issues of modifying a real-life actor's performance without their consent, and just focusing on the tech and what you can do with it as an option: you can upload footage and change the performance. Looking at some of these examples, I'd say it's breaking a little bit; there's a bit of uncanniness. Yeah. And that demo we just saw was changing both the emotion and the language of the dialogue. Yeah. So obviously dubbing, and having the lip sync match the dub, is a big industry. This is what Flawless does, the company. I was gonna say Flawless. Yeah, this is coming for their lunch. Although, as far as I know, Flawless is kind of enterprisey; I don't think Flawless is self-serve. I'm sure it's all custom and billed per movie or per show. Yeah. So you can't, as an average Joe, go to Flawless and say, hey, I wanna use your software to modify my videos. With Sync, you can; it's self-serve. I think it's on fal already, with the API, so you can call it and modify generated videos. And there's obviously a good use case for that, where you're generating AI performances or characters or whatever, and maybe 90% of the video is good, but you don't really like how the AI actor is delivering the line. Although, after the Tilly Norwood crap, I don't wanna keep calling them AI actors. Just don't use the word actor. AI beings, I don't know.
AI avatars. Yeah, the generated performance of the AI avatar. You know, having more control over the perceived delivery is good. Yeah. I think the Leonardo DiCaprio example is especially hard, because it's such a fast-moving performance and so exaggerated already. Maybe they just didn't pick the right clip. He's one of the best performers ever to live; you're not gonna replace his performance. His mouth in the shot looks like the mouth blowing up in Fight Club. Yeah. But I do think it's promising. Somebody would have to do this sooner or later, because there's such a big market in dubbing into different languages. Netflix alone, I think, serves movies in something like 70 different languages; it depends on the show and the movie. Yeah. I mean, if you ever let the credits of a Netflix original play instead of hitting watch-next, the credits go on forever, and then they change to, like, production for the French translation, the Spanish translation. There's an endless roll of credits for every team responsible for the local dubbing. Yeah. And now we Americans are on the receiving end too, right? There are movies that are natively Korean that we consume over here. Right. So it's coming back this way as well. Yeah. And having subtitles on is hit or miss with a general audience; some people don't like that over having the audio changed. But then if you have the dubbed voice, you get the weird lip-sync issue. So yeah, the biggest addressable market is dubbing, and changing the mouth to match the dubbed language. Secondary, I think, is modifying the performance of someone else.
Yeah, I gotta say, using Kling O1 for some facial performance stuff, and I sent you a text about it, Joey, it just doesn't know what to do with the face in general. I think that's Kling O1's biggest weakness. If you don't give it face-heavy performance, and instead give it body language or non-human things to do, it does them extremely well, but it struggles with the face. The face is such a complex thing for an AI model to comprehend and then modify. Yeah. And as humans we're obviously tuned to faces and can be very picky when things shift into the uncanny valley. Exactly. I will say, the other thing I like, not having played with the software yet, is that in this demo there's this sort of pop-up UI that maps the face. It's not just a text prompt to change what you want, but a heads-up display with different options for emotions, an actual 3D-mapped face. So I like what they're demoing here: tactile controls, not just a text prompt saying make them happier to change the performance. All about simple user interfaces. This is great. So yeah, I'm curious to see what applications people find for this. Well, you heard it here first, folks: Sync's React-1 model. All right, good place to wrap it up. Just a quick couple of shout-outs to our viewers on YouTube for some really positive encouragement; we could use it. So thank you to William Reed 74 and Ivy Web 44. Thank you. Links for everything we talked about at denopodcast.com. Thanks for watching, and we'll catch you in the next episode.