
Denoised
When it comes to AI and the film industry, noise is everywhere. We cut through it.
Denoised is your twice-weekly deep dive into the most interesting and relevant topics in media, entertainment, and creative technology.
Hosted by Addy Ghani (Media Industry Analyst) and Joey Daoud (media producer and founder of VP Land), this podcast unpacks the latest trends shaping the industry—from generative AI and virtual production to hardware and software innovations, cloud workflows, filmmaking, TV, and Hollywood industry news.
Each episode delivers a fast-paced, no-BS breakdown of the biggest developments, featuring sharp analysis, under-the-radar insights, and practical takeaways for filmmakers, content creators, and M&E professionals. Whether you’re pushing pixels in post, managing a production pipeline, or just trying to keep up with the future of storytelling, Denoised keeps you ahead of the curve.
New episodes every Tuesday and Friday.
Listen in, stay informed, and cut through the noise.
Produced by VP Land. Get the free VP Land newsletter in your inbox to stay on top of the latest news and tools in creative technology: https://ntm.link/l45xWQ
We talked to Freepik's CEO about their pivot to AI, going unlimited, and more!
Freepik CEO Joaquin Cuenca Abela reveals how the platform generates 4 million AI images daily while serving over 100 million users. In this interview with Addy and Joey, Joaquin shares the company's swift pivot to AI in 2022, the strategic move to unlimited image generation, and how they're capturing the professional market from marketers to VFX artists. Plus, insights on AI legal challenges, output moderation, and why Freepik filters which AI models make it to their platform.
--
The views and opinions expressed in this podcast are the personal views of the hosts and do not necessarily reflect the views or positions of their respective employers or organizations. This show is independently produced by VP Land without the use of any outside company resources, confidential information, or affiliations.
Everything that we did, in my mind, became obsolete. It was gonna be obsolete in a couple of years. What I saw, to me, was that at that point AI crossed the threshold of having creativity. In this special episode of Denoised, we're speaking with Joaquin Cuenca, CEO of Freepik, one of the largest AI-powered creative suites, serving over 100 million users. Enjoy. Alright, Joaquin, thanks for joining us. Really appreciate you coming on. Thank you, both of you. I'm very happy to be here. So why don't we just get the backstory of Freepik, you know, how it started, and a bit of your backstory. Well, the backstory is a little bit longer than the usual backstory of a startup. Sure. We actually started at the end of 2010. A little bit before Freepik, if I can go farther back, I made a little startup in Spain that was actually acquired by Google. It was the first startup acquired by Google in Spain. I was working on geolocated photography. I went to Google and I spent three years there, and I grew as an engineer. That's where I learned a little bit how a big American company works, all the titles, you know, how different people fit, how it's structured. Then I came back to Spain and, together with two friends, I started a new startup. That's Freepik. Okay. And at the very beginning it was only a search engine; that's why we called it Freepik. We were doing websites, and finding images was kind of a hassle. We were looking for free assets because we were working on a budget, so we made a website that was like Google, but to find free images on the internet. That was the big one. Then, you know, one thing led to the other. We started expanding. Soon enough, the quality of free images on the internet was not great, and we started making images ourselves to improve the library. And then we did icons, and we did more things. You know, we kept moving on.
And the business was doing very well. It was expanding, everybody was happy. I guess there were unicorns and butterflies all over the place. You know, everything was nice until 2022. You can guess where this is going, right? Yeah. So in 2022, the first glimpse was DALL-E. DALL-E 1 really blew my mind. Before DALL-E 1, there were a few things going around, you had GANs, you had some other technologies, but it was always quite niche. It was not really working, and in my mind, honestly, I didn't feel it was gonna work. I thought it was always gonna stay, you know, not as good as humans. Like the AI image quality would never be as good, I thought, and that was my mistake. I thought there was gonna be a barrier between human creativity and a machine-produced image of something. I mean, when you look at how rich real photography is, you know, photos that are made to sell, to have an impact, and then what you were getting back in those days, to me there was a big divide. But DALL-E 1 was the first one that showed, my gosh, how can it be? You know, from words it picks up compositions that were quite complex, at least at the time. So we said, okay, we should start working on this, maybe this is going somewhere. I started hiring some people that were working on AI, initially just improving our search engine, doing tasks that we knew could be solved, no blue-sky research. Okay, let's just start learning a little bit about AI. And then DALL-E 2 came out, and that completely blew my mind. I have a very vivid memory of it. I was isolated in my house with COVID for one week.
A little bit of fever, and so the only thing that I could do was look at Twitter and the things people were posting. It was blowing my mind. Some people don't like me saying this, but I was completely panicked. Imagine, at that point we were like 500 people in the company. You could see the progression of the company: we knew how to do illustrations, we did icons, we did photography, slides, we got motion figured out, now we're gonna do 3D, videos, you know, more things. And now this thing was a big reset. Like, the future is not gonna look like the past. Joaquin, what was the cause of your panic? The cause was that at that point there were two things. Everything that we did, in my mind, became obsolete. It was gonna be obsolete in a couple of years. What I saw, to me, was that at that point AI crossed the threshold of having creativity. Some of the images that I saw, it was like, oh, this thing, it really took some imagination to do this. And that was, to me, the big mental barrier. So once that fell down, I didn't see any other limit. It's like, no, this is just gonna keep improving until it becomes as good or better. It was clearly superior to the old method. And you saw that back in 2022? Yeah, yeah. To me it was when it crossed: I saw some images that really took imagination to make, and to me that was the big barrier, you know? And the second thing is that I couldn't even see how I could add value. It's like, okay, this is gonna make images, I don't have AI experts, so what's next? You know? What about the 500 people that are working with us? How do I deal with this? I think it took me up until DALL-E 2 to realize that this was the future, because of the situation I was in, I had to cope.
With something that was hard to accept. Looking backwards, it's like, why did it take me so long? I think it was in part because of where I was, having to deal with the company, just not being able to accept the future. In a way, I think I still realized it before some others in the industry, definitely. But I think I could have figured it out one year earlier if it were not for where I was. Did you get pushback, or did people think you were crazy for pivoting to AI so early? Investors? Early? Yeah, shareholders, yes. I mean, our company is bootstrapped. We started in Spain; there are no VCs, no private equity. We were just three friends: I was coding, someone was doing design. But in 2020, my co-founders were quite stressed by the company. It was also their first exit, and they said they wanted to get out, and we got a private equity firm to buy out part of their stake. So they got a controlling stake in the company. Okay. So that's why we have shareholders, but we didn't have investors. They thought it was a bit crazy at the beginning. They drew analogies, like, you know, self-driving cars were supposed to come in one or two years and they're not here yet; maybe this is the same thing. And in my mind I was like, okay, whatever, no, this is really coming. But people at the company, we have many founders in our company, because we acquired some other companies, most people saw it: okay, this is really changing. The big question was, okay, but what can we do in this world? How can we add some value? And the answer came actually just by working, just by iterating, in a way. You know, instead of asking the big question that makes you freeze, it's like, I don't know what the future is gonna look like, but I know the next step that I can take today. Okay.
The next step became pretty ambitious. Like, okay, we don't have the in-house chops to do a foundational model, not today. But Stability came out, initially with Stable Diffusion 1.4, an open-source model. We said, okay, that's something; we can take that and make a website just hosting it. That's step one. And people started using it, and they told us that it was a piece of crap, because they could not easily select a visual style. You know, people were adding to the prompt "in the style of" some random guy that is an artist, and of course most people didn't know how to craft their prompts. So for v1 we just added some visual styles, and we were just appending text to the prompt. It's very basic stuff. People liked that. And then some newer, faster models came out and we made a fast version, and then we did sketch-to-image. Doing websites, that's what we do; that's something we could do. And we just kept working. We got more needs from the users. Some of them we could address; some of them were not in our hands, so we just sent that feedback back to the model providers. And I think the advantage that we had compared to others was that we already had huge traffic. We were one of the top hundred most trafficked websites in the world, so that's pretty decent traffic. We got over 80 million monthly active visitors. In terms of stock websites, we were the most trafficked one in the world, like double the traffic of Shutterstock, almost. So at least we had traffic, we had people that were interested in images, and we had many people that were paying for our subscription. So when we made an AI photo studio at the very beginning, we got plenty of people using it. And that gave us the ability to negotiate a better deal with the model providers.
We, you know, got very cheap inference, and we passed those savings on to the users; that gave us more users. It was a combination of that early distribution with an eagerness to improve the product that led to more people, cheaper prices, and, you know, keeping the ball rolling. Yeah, definitely. In the early days, I mean, the model makers weren't really making money and you had a revenue stream. Yeah, we've actually never been margin negative. The margin on AI is worse than the margin on stock. On stock, you create the image once, and then you sell it; you can sell it at near-zero marginal cost. The gross margin on stock, in our case, is roughly 70%. We have to pay the author of the image, and then of course, below that, you have to pay salaries, everything in the company, but it's 70%. On AI, initially it was very low; we were giving it at cost, almost. It started from zero. But then models started to get much cheaper very, very quickly. Image generation went down from 3 cents to 1 cent and then below 1 cent. It got down to 0.1 cents at some point; now it's more like 0.2 cents because we improved quality. It came down in price very dramatically, very quickly. So I remember gross margin went from zero up to 95% on images, you know, in a few months. And now we are at 20% on video, and it's starting to go up. As technology improves, of course, humans, we don't improve, you know, we just need to keep our margins, but with technology, you can get an increasing margin over time. And what are some of the numbers now? I think there was a tweet recently with the number of images or videos you're generating a day or a week? Yeah, I mean, this has been changing over time.
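The margin arithmetic Joaquin describes can be sketched as a toy calculation. Note the revenue-per-image figure below is an assumed illustration chosen to roughly match the quoted zero-to-95% trajectory, not a number from the interview:

```python
# Toy gross-margin model for AI image generation.
# Assumption: revenue attributable per generated image stays roughly flat
# while the inference cost per image falls, as described in the interview.

def gross_margin(revenue_per_image: float, cost_per_image: float) -> float:
    """Gross margin as a fraction of revenue."""
    return (revenue_per_image - cost_per_image) / revenue_per_image

# Hypothetical ~3 cents of revenue per image; inference cost falls
# from ~3 cents ("giving it at cost") toward ~0.1 cents.
for cost_cents in [3.0, 1.0, 0.5, 0.1]:
    margin = gross_margin(3.0, cost_cents)
    print(f"cost {cost_cents:.1f}c -> gross margin {margin:.0%}")
```

At a 3-cent cost the margin is zero; at 0.1 cents it is around 97%, in the same ballpark as the 95% figure quoted above.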
Actually, in the past we generated more images, because at some point we enabled automatic generation of images when you scroll, and that generated many, many images, but they were of low quality. We started pushing for higher quality, even if the latency is a little bit higher. And now we are at 3, 4 million images per day. And on videos, we are getting close to 200,000 videos per day, which is big; it's one of the top three worldwide. That's a lot of volume. Yeah, that's a lot of GPUs. Crazy. Did this go up recently too, from running the free video generation promos with different models? Yes. So video generation increased. That's interesting. So far, all the unlimited images and unlimited videos things that we have done, we have only done for people that subscribe to Premium Plus or Premium Pro. Okay. That's a subset of the subscriptions; Premium is the bulk of the subscriptions that we have. On Freepik we have above 700,000 subscriptions, and then we have other projects, so in total we're at 800,000, but on Freepik.com it's 700,000, and roughly 85% of them are Premium. And then you get 10 to 15% which are Premium Pro and Plus. And we offer it to Premium Plus and Pro because, A, they're paying more, so it's only fair that they get any advantage first, and B, it's a way to A/B test how things are gonna behave when we expose it. I mean, my intention is to bring unlimited to everybody eventually, but I need to calculate what's gonna be the cost, what's gonna be the reaction. So on video, we recently introduced some unlimited video weeks. It's not yet unlimited forever, like with images, and since we introduced it, the total number of videos created on the platform increased by 50%. Five zero. Right. That's pretty amazing, because unlimited videos are only exposed to 10 to 15% of the users, yet those users increased total video generations by 50%.
So there are clearly some super users that are just generating all the time. There are people who came to the platform to generate video, and disproportionately they go to Premium Plus and Pro, because generating video is pretty expensive, pretty intensive in credits. So if you are serious about video, almost all of them are on Plus or Pro. Yeah. Let me just back up for a second, just in case people aren't familiar. I realize we kind of jumped into the credit thing, but about a month or so ago you made this move that was unique in the AI space, where you said if you're on Premium Plus or Pro, you're getting rid of the credit model for AI image generation and it's just unlimited image generation: go crazy, make as many images as you want, you don't have to worry about credits. And then you've been running a weekly-ish promo with different video generation models, where for a week or a limited time people can generate as many videos as they want on a specific model, and then that kind of rotates. That is right. Of course, the universe not being unlimited, every unlimited thing always has some kind of limit. Okay? We have not put any explicit limits, but there will be a limit; we have a limited capacity at the end. So if, yeah, if someone has their computer hooked up and running image generations 24/7. Exactly. At some point computers become slower, you know, there's gonna be a queue. But so far we have been able to handle it. It was very interesting, because when we were thinking about how people use the AI suite, it's like, how would you feel if every time you used Photoshop, I always use that example, every time you click somewhere, it was like, this is 2 cents, this is 5 cents? It would piss me off. It would make me think twice before I do anything. Okay.
And it was that feeling, you know, people were kind of constrained by that. It's not so much the price, but knowing that you have to pay every single time. We just didn't like it. Sorry to interrupt, but the flip side, I was gonna say, is the Red Lobster example. If you remember, the endless shrimp strategy backfired, the company lost a lot of money, and a new CEO had to come in. I mean, how do you balance your costs with unlimited? You know, something that I learned in my time at Google: there was a story, when I joined Google, about the early days. They knew their math very well. They knew very well how much electricity they would consume; Google initially was only a search engine, and they worked in memory, so they consumed an awful lot of power. There was this story where they came into data centers and signed a lease where you paid by the square meter and electricity was free, so they packed the racks with servers. It was crazy, and they knew, when they plugged in, they had done the math, those guys, they knew that the data center would go bust in six months, because by contract they didn't have to pay for electricity; it was unlimited electricity. And the racks they were bringing in with the computers had wheels, because they knew they'd have to take them out in six months. It was that bad. Okay, so that story always puts up a warning for me when you go for unlimited. So, first of all, as I mentioned, if somebody starts abusing the system, there is always an asterisk on unlimited: within reason. Okay. If somebody starts using it from 150 different countries on the same day, you know, it doesn't work. We have a limit: you can only use it on three different devices at the same time. Okay. We did the math, and we thought that, worst case, the number of images generated would double.
And we knew that because there is a model we were using where we divided the price by 10, that was Flux, I think, because our costs had been coming down, so eventually we just adjusted our prices. And when we divided by 10, usage increased, but not that much. So we thought that with just the psychological barrier removed, people would use more; maybe it was gonna be double the previous usage. And on our platform, in total, image generation is like 25% of the cost; video generation is the lion's share. And we were only offering unlimited to Premium Plus and Pro users, which is like 15% of the users. So it's 15% of 25% of the cost. That's something where you can take a bet, you know, and learn from that. And if it works, great. I became convinced that it was not gonna kill us, that it's something we'd be able to stand by, to keep, and eventually to expand to other users. And we were pretty close to reality; that's roughly what happened. That's amazing. Yeah, it got me to upgrade from Premium to Premium Plus, and it's been nice, because you always have that psychological barrier; even if you have a big pile of credits, you're like, you know, it's still something. So it's been nice to just generate and explore and not worry too much about all the images we're making, and just kind of go at it. And I'm sure even as many as we're making, it's still not really a dent in the big scope of usage. When you're also thinking about AI creation in general, you know, Freepik has sort of become this central hub for a variety of AI tools and models: image, video, lip sync, editing, modifying the images you generate. How are you thinking about
the interface, and the interface evolving with AI image creation and AI video generation? Some people think about Freepik as a marketplace of models, and I don't like that mental image, because I actually think that the UI that we build is the real product. My mental image is like when we started having personal computers: the deep technology thing was the CPU. It was something that was very difficult to make, and all the layers on top of the CPU looked like stuff that anyone could do. Putting together a computer was something simple compared to making a CPU, and writing the software that ran that computer seemed very, very simple; it was just an application layer on top of it. Eventually people realized that that's the actual product: the interface with the user, the thing that the user actually uses. And what is under the hood is almost an implementation detail. Okay. And that's how I think about Freepik. We are not, in our team, just throwing models onto the platform. In general, we test the models; we want each model to add something new. So when we add one to the platform, it's because it's better at anime illustrations, it's better at something specific that we know our users are gonna find valuable. There are some models that we don't find better than what we already have in any dimension, and we don't add them to the platform. So the goal is not just to add everything under the sun. And eventually, I think even the idea of selecting the model may disappear. Okay, so ideally, the ideal interface is you have a conversation, and you explain what you want, and maybe that conversation includes not only text but voice, and also images, sketches; you dump your ideas and your past history.
And it kind of understands what you want to do and helps you, you know, in your task. What model it's using at any particular time is not really that important. You see that already, like in the assistant, for example: we don't tell you what the underlying model is under the hood, and people don't care. They just want the assistant to understand what they're saying. So to me, Freepik is the product; the model is not the product. And you're saying, too, that you kind of filter which models you have on the platform, so as not to have every single model, but to have the ones that are best in class at doing something specific. The thing is that today, you still find many models that are each good at some particular thing, and that makes the life of the user very difficult. And that's why I think that today Freepik is more for professionals than for individuals that are just getting in because they want to make something funny. So my intuition is that the generation of images and the generation of videos is gonna be all over the place: you will have it on your iPhone, you will have it everywhere, eventually. We are building a product for professionals that want to do something; they have an idea, and they want something that really matches their idea. In general, when you use a more general-purpose tool, it generates something beautiful, but it's not always what you wanted. Okay? So if you're at the inspiration stage, where anything kind of works, that's great; that's a stage of creation, and I respect that. We are trying to serve users that already know what they want, and we are trying to help them achieve their vision. Okay. And that's a different part of the creation stage. In that vein of thought: you clearly have a product, or a suite of products, that's really geared for professional use.
Amongst that giant group of professionals, what sort of demographic is the super user? You know, where are you seeing strength amongst that entire marketplace? First of all, let me start by saying that we are biased, okay? Because before AI started, we had plenty of people already coming to Freepik. The two big groups are marketing people and designers, and very often designers working in marketing departments. So that's our core, and of course many people are still from that core use case. So marketing, for us, is number one: marketing use cases. Then there are a couple of new use cases, big ones. The first one is visual effects: people working in Hollywood, people making films, working in visual effects. That is huge, and today it's exploding, it's growing very, very quickly. And the third one is people that want to do product placement. That's typically companies that have a physical product. They used to fly people all over the place to do gorgeous photo sessions that used to cost between 20 and 50, even a hundred K, depending on where you have to go and the complexity of the object. And now more and more of them are transitioning to AI photography. Okay, so those are the three big groups that we see. There are some other professions that are starting to use AI. For example, photographers: back in the day, they were not AI users, and they were certainly not big users of ours. And now we have more and more photographers using the platform because of our upscaler, Magnific. That's the only acquisition that we made, last year, and we have many, many photographers using it. Then you have smaller pockets of different use cases. You have architects and interior design people that are using it.
They take a sketch, you know, and they get an image, or they take a photo and they change the furniture and the style. We track like 10 different professions, and all of them are growing. Initially we were only in a single one, design and marketing, and now it's just growing all over the place. But if I have to pick three, it would be design/marketing, visual effects, and product placement. Amazing. Thank you. Do you see expanding the tool set as well? I know we sort of DMed about this, and I get questions from the AI community, and even my team, asking more about the project management side of AI generation: being able to organize shots, track versions of shots, storyboards, sort of the nitty-gritty project management part of it, more so in the AI filmmaking space. Absolutely. That's actually where I see the next layer of value that we have to deliver. And honestly, it's a bit embarrassing that we still do such an awful job of that, and the reason is because we were not that in the past. Okay. So initially we integrated AI, and as I mentioned, I see Freepik as the product, so we'd better deliver value at that layer, no? So we absolutely want to show versions, help people collaborate, comment. I mean, there are more things that we are doing, but a big one is adding that layer of, you know, backups, security, integration with enterprise storage, versions, et cetera. And I got one other question from one of the AI communities, asking about some of the details with the Enterprise plan: that has some legal coverage for commercial use. How are you thinking about that, and what kind of protection are you offering people using AI in the commercial space? Yeah, it's a great question. So first of all, let me say that, in our opinion,
well, everything that is on the platform is kosher. Everything that is on the platform is legal, it can be used; there are not gonna be any issues for enterprises. We put together indemnification: if you sign an enterprise deal, we give you an indemnification that by default is $1 million. If the deal is bigger, we can go higher. Okay. I would say that the legal fear, I mean, you tell me if I'm wrong, but I think the most common legal fear that people have is that if any images were used to train a model, you can get sued by the copyright holders, the people that own those images that were used in the model. And in general, I don't think that's going to happen in most cases; there will probably be a few exceptions. There are a few products that didn't do it properly and are still on the market. Okay? But in Europe, actually, what is legal and what is not is now pretty clear. In Europe you can actually use images that you find on the internet, with the exception of images where the copyright holder opted out, or, you know, if the copyright holder explicitly says that they don't want that image to be part of a model, then you have to take it out. But if the copyright holder didn't say anything, which is actually the case for the vast majority of the images on the internet, then you can use it. And the reason why they took that view is because if you had to ask for explicit permission to use an asset, then you could not train a model. And I'm not even thinking of images here; think of text. To train an LLM like ChatGPT, you need to use the whole of the internet, and then some, and you cannot get a license for the whole of the internet. So the only model that kind of rewards authors and allows the existence of modern AI is: okay, you can take in anything, but if the copyright holder complains, you have to pay them or you have to take it out of your model.
In the US, actually, there is not a specific law, but the first rulings are accepting the content that gets used for models as fair use. So far, if it was purchased, right? That was the Anthropic decision: if they purchased the book, if it was purchased or legally acquired. By the way, the same exception applies in Europe: you cannot illegally acquire the content. Even if the author didn't say anything, you cannot illegally acquire the content. You cannot just use a torrent or, you know, make an illegal copy and use it. If you do that, then you're liable. But if you got the content legally and the author has not opposed it, then you can use it in Europe. In the US, you're absolutely right: if you got the content legally, it seems to be accepted as fair use to use it in a model. Okay? So the models that we are using, we believe they are all compliant, and that is not true across the whole industry. There are some, I don't want to mention them, but there are some competitors that, at least to my knowledge, are not compliant with European law. And I know it because with our images, we opted out. So you have to pay us and our authors to use our images to train those models. And we found our images, our watermark, in some of the models. So we know that there are some models, which are not on our platform, that are not compliant with European law. We believe that the models that we have are compliant; we have talked to the model authors, and as far as we know, we think they're compliant, and we feel safe giving that indemnity to the enterprise customers. What do you think about, and we've had this debate, Addy and I, on the podcast a lot, because we were just now talking about focusing on the inputs and the training data, but do you think more of the focus might shift to the outputs? Absolutely.
Preventing, say, making an image of Mickey Mouse riding a bicycle.

Yeah, absolutely.

For Disney or Universal or something like that. The output would be the focus.

Yes. As I mentioned, most people are worried about the inputs, but in my mind the real thing is actually in the outputs, because on outputs the law is crystal clear: you cannot do that. I often use this example: if you draw a Mickey Mouse with Photoshop, you cannot use it. It matters very little which tool you used to create the image; if the image infringes on somebody's copyright, that's illegal. And that's how our indemnity works; the only exception is if you on purpose try to imitate or copy something. If you say "I want Mickey Mouse," that's banned. But there are ways to work around it; you can say "I want a mouse that is very popular" and you can get a Mickey Mouse. If you get a Mickey Mouse, you have to know that it's copyrighted by somebody; you're doing it on purpose, and that's not legal. And that's actually what's behind the lawsuit from Disney and Universal against Midjourney, yes, one of our competitors. Because people generated images that clearly infringed their copyrights. It's in the output. And sadly, apparently they used that output in their gallery, which can amount to using that output for promotion. I'm not privy to any details, but we read the lawsuit and it seems to be extremely well written, and this is one of those where I fear they can actually win.

Well, yeah, the Midjourney one's interesting, because I think that's one of the first ones that's focused on the outputs it generated and not necessarily the training data.
I think the opinion of my lawyers is that the Disney lawyers were very smart on this one and they have a very solid case.

The mouse usually wins.

Yeah. I will not try to fight that one.

It's also very easy to build alignment into your model so it doesn't generate those things, right? Are you finding that you're providing feedback back to the AI companies that build the models?

It's not that easy. No, it's not that easy. Okay, let me explain. The thing most people don't understand is that even if you take all of those images out of the training data set, information leaks in through the text, because the text tower has actually been trained on the whole of the internet in all of the image generators. Maybe we're getting a little bit technical here, but in general, most people understand that you type some text and you get an image. How that works is that the text goes through what we call a text tower, which converts the text into what we call an embedding, which is kind of the meaning, the concept of what you want. And then an image tower takes that concept, that embedding, which is an array of numbers, and generates an image. I find it incredible that the embedding is so good at representing almost anything, but for the text tower to actually understand the user, you have to train it on the whole of the internet. Even models that are supposedly copyright-safe by their own definition of what it means to be copyright-safe, like Adobe Firefly, only use a licensed dataset for the images; for the text, they use the whole of the internet. And on the internet there is the concept of Mickey Mouse, and there are descriptions of what Mickey Mouse looks like.
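The two-tower pipeline described here can be sketched as a toy example. Everything below (the vocabulary, the dimensions, the random projections) is made up purely to show the data flow from prompt to embedding to image; real generators use large trained neural networks for both towers.

```python
import numpy as np

# Toy illustration of the two-tower text-to-image pipeline:
# a "text tower" maps a prompt to an embedding (an array of numbers
# standing in for the meaning), and an "image tower" decodes that
# embedding into pixels. The random matrices are placeholders for
# trained networks; only the shape of the data flow is realistic.

rng = np.random.default_rng(0)

VOCAB = {"a": 0, "red": 1, "car": 2, "blue": 3, "house": 4}
EMBED_DIM = 8          # size of the "meaning" vector
IMAGE_SHAPE = (4, 4)   # tiny stand-in for a generated image

# Text tower: average of per-token vectors (a crude bag-of-words encoder).
token_vectors = rng.normal(size=(len(VOCAB), EMBED_DIM))

def text_tower(prompt: str) -> np.ndarray:
    ids = [VOCAB[w] for w in prompt.lower().split() if w in VOCAB]
    return token_vectors[ids].mean(axis=0)

# Image tower: a linear map from embedding space to pixel space.
decoder = rng.normal(size=(EMBED_DIM, IMAGE_SHAPE[0] * IMAGE_SHAPE[1]))

def image_tower(embedding: np.ndarray) -> np.ndarray:
    return (embedding @ decoder).reshape(IMAGE_SHAPE)

embedding = text_tower("a red car")
image = image_tower(embedding)
print(embedding.shape)  # (8,)
print(image.shape)      # (4, 4)
```

The point Joaquin makes maps onto the first tower: even if a character's images never enter the image-tower training data, the text tower's training corpus can still carry descriptions of that character.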
So even if you don't use their images, the model kind of understands what the character looks like. It's not that good, but it can sometimes do a decent job. In general, it's a work in progress to get models that block this. And sometimes it's not even desired, because some of those companies are actually using our products; they want to generate those characters, and they have the legal right to do it. And there are other situations where it's perfectly legal. If you want to use it for personal usage, that's legal; you don't need a copyright license for that. There are exceptions to copyright where it's actually legal. If you want to make a birthday card for your kid with Mickey Mouse, and it's only for your kid and not a commercial thing, then it's absolutely legal.

Or just like you can draw it by hand. Or a parody; parody is usually protected in the US. So it's like, where's that line exactly?

That falls under fair use in the US. Every country has different exceptions. Yeah, but they exist. That's why Photoshop doesn't try to guess what you are drawing.

It does apparently have protections against scanning US currency. I think that's the one thing where it'll flat out say, nope, this is dangerous, won't do anything.

I mean, yes, there are some exceptions like that, but I think those things don't really work; you can use other software to do the copying. It doesn't really work to try to embed that protection in the tool. So you are more dependent on the person that is using it. If someone is trying to create something that is clearly inspired by somebody else who is protecting their IP, then that person should get in trouble, because it's very clear to everybody what they're doing.
I think we try to model that a little bit, but in general, I think it's on the user to understand when they're doing something that violates somebody's copyright.

Where do you see things going? Predictions for the next few years?

Yeah, or even just one year. You're at the forefront of such a large AI platform. We're relying on you to give us a north star of where we're heading.

Well, big task. Here's a very high-level prediction, and maybe I will be wrong on this one. I see that there are some products going toward taking over the full creation end to end, and that works when the AI can figure out whether what it's doing is working or not, when it can iterate on the creation. Let me give you an example. You're doing an online marketing campaign. You can launch that campaign to, say, a hundred people and see if you get any reaction or not, and then scale it up if it works: a thousand people, then ten thousand. You can keep iterating, even adjusting the images for different people depending on the user, their journey, their country, whatever. So there is this embedded component where you can measure success, iterate, and launch to more people.

But if you're making a film, you don't have that luxury, and you don't want it. People say that films are going to evolve into generation tailored to the viewer; that's not what viewers want. It's like going to a restaurant where anybody can order anything. No, I want what the chef is doing. Surprise me. I don't want to be the one deciding. So to me a film, at least today, and I think in the future too, is a social artifact. It's something that we watch and then we talk to each other about: did you like it or not?
What did it say to you? What story resonated? When you create something like that, where you have one shot, it had better be good. At that point it has to be your vision: somebody who understands the world we live in, who understands everything that is going on, and who takes one bet and says, this is my story. And I like to think that that's going to stay: people who have something to say, and then AI tools, which is what we work on, that really help him or her tell that story. It's about giving that person control over the generation so that it really comes out the way they want. So there's this big divide. When people talk about getting an AI agent that takes over marketing, that's fine; I think that's going to be part of the solution space. And part of the solution space is going to be giving more tools, more control, more precision, more quality, better understanding of what the user wants to make.

But in general, I think prediction is almost impossible. Listen, imagine we got the first hand calculators at the very beginning, and I asked you, what's the future for this? You could forecast that accountants would have to adapt in some way, that they would not be doing the math by hand anymore and would have to work at a higher level. But it would have been, to me at least, nearly impossible to predict that we would get video games doing millions of matrix multiplications per second to generate virtual worlds, and absolutely impossible to predict that there would be billions of multiplications to generate text like we are doing today. It's just too far away. I think we can always kind of smell the next year, and even that is becoming difficult, you know?
So that's my prediction; I don't think my prediction would be good out to three years.

That's where you see things moving in the very near future. Yeah, three years in the AI world might as well be 30 years, right?

Yeah, it's too long.

That's a wild time.

So, I watched your episode when you talked about us, thank you for that. I remember you mentioned that maybe we got some help from the Spanish government. Let me tell you that sadly, as far as I know, that doesn't exist.

Yeah, and I apologize, because the Spanish startups that I'm familiar with have all received government funding, so I thought that was pretty common.

I mean, don't get me wrong, it was a good guess. Maybe it's because I'm horrible at doing any kind of official paperwork or anything like that. We just started; we didn't get any help. It's been because of our past as a stock image company that we got some volume, and we were able to transform that into low prices initially and then improve the product.

Yeah. Are you doing anything with De...? Oh, you don't know them? It's one of the big VFX houses; I believe they're in Madrid.

Oh, you know, that's how little familiar I am with the industry. We have been approached, and I cannot mention names, there are only a couple I can actually mention, but we have been approached by most of the very well-known brands in Hollywood. Right now, to be honest, our enterprise sales force has been more focused on dealing with the inbound interest we're getting. It's kind of a slow process to go through legal and explain the master service agreement we put together. So it's been more about managing inbound rather than doing actual outbound so far. So I'm not familiar yet with the big ones.

Yeah, no problem.
I'm sure Freepik solutions are going to be used for Hollywood films soon.

There was one in the news I can mention here, a film that stars Tom Hanks; they used our upscaler on the backgrounds. So sometimes it's little things like this, and it's getting deeper and deeper; now it's getting more and more into visual effects. Sometimes it's still internal, more for the storyboarding, because the end result is not yet of the quality that they need. But it's getting there.

And I feel like more and more, every time I see a new AI short film pop up online, Freepik just keeps popping up in the credits.

Yeah, that's going crazy. I met many of those guys, and for them it's amazing, because these are really people with a few hundred dollars of budget who are able to do things that before were maybe in the hundreds of thousands. Not a big production, but financially not doable.

Yeah, we're big fans of Freepik. I mean, you're enabling a whole suite of artists that were never able to do these things before, and anywhere in the world as well.

Well, thank you. Thank you so much. I can tell you that three years ago it was a rough position. One data point is that we now get more users signing up to Freepik because of AI than because of stock. It took us more than ten years to build the stock site, so that's how quickly AI is taking off.

That's good. It was a good pivot. Early, smart pivot. Nicely done, Joaquin.

Thank you. Thank you.

Cool. Well, thanks so much, Joaquin. We really appreciate it.

Thank you. Thank you, both of you.

And that's it for this episode. Thanks again for watching.
All the links we talked about are available at denopodcast.com, and if you haven't done so yet, please leave a five-star review on Apple Podcasts or Spotify. Thanks for watching. We'll catch you in the next episode.