MEDIASCAPE: Insights From Digital Changemakers

Question Everything: How Anatoly Kvitnitsky's "AI or Not" Is Fighting Digital Deception

Hosted by Joseph Itaya & Anika Jackson Episode 55

The battle between beneficial AI innovation and malicious exploitation is intensifying daily. Anatoly Kvitnitsky, USC Marshall alum and founder of AI or Not, takes us deep into this crucial conflict where digital trust hangs in the balance.

Drawing from his extensive background in fintech, venture capital, and fraud prevention, Kvitnitsky created a company with a deceptively simple name but profoundly important mission: distinguishing between human-created and AI-generated content. What began as a tool for businesses has evolved into a shield protecting everyone from increasingly sophisticated digital deception.

The stories Kvitnitsky shares are both fascinating and alarming. A woman spent six figures on supposed human-created art only to discover it was AI-generated. Another lost $850,000 and her marriage to a deep-fake "Brad Pitt." Medical professionals discovered AI-generated x-rays submitted for insurance fraud. These aren't distant threats but present dangers affecting real people.

Most unsettling is how accessible these deceptive technologies have become. Voice cloning requires just 20-30 seconds of audio—approximately the time it takes to answer an unknown call. Image generation has progressed from obvious flaws like six-fingered hands to photorealistic perfection indistinguishable to the human eye. This is precisely why we need AI detection technology working at the pixel level to identify patterns invisible to us.

Despite these dangers, Anatoly maintains optimism about AI's positive potential. He recommends Claude for writing, Perplexity for research, ChatGPT for reasoning, and shares valuable insights about when to trust these tools with sensitive information. His company's freemium model ensures everyone—from businesses to students—can access protection against AI fraud.

Whether you're a digital media professional, concerned about online safety, or simply curious about distinguishing truth from fiction in our increasingly synthetic world, this episode provides essential knowledge for navigating the AI revolution. Question everything—your financial and emotional wellbeing may depend on it.

This podcast is proudly sponsored by USC Annenberg’s Master of Science in Digital Media Management (MSDMM) program. An online master’s designed to prepare practitioners to understand the evolving media landscape, make data-driven and ethical decisions, and build a more equitable future by leading diverse teams with the technical, artistic, analytical, and production skills needed to create engaging content and technologies for the global marketplace. Learn more or apply today at https://dmm.usc.edu.

Speaker 1:

Welcome to Mediascape: Insights From Digital Changemakers, a speaker series and podcast brought to you by USC Annenberg's Digital Media Management Program. Join us as we unlock the secrets to success in an increasingly digital world.

Speaker 2:

Anatoly Kvitnitsky has one of the most brilliant names I've seen for a company: AI or Not. Literally, that is the name, Toli, of your company. It's so simplistic, but it tells us so much. So, first of all, thank you, and Fight On, fellow Trojan here. I'm so grateful that your PR firm reached out to put you on the show. Before we get into the whole AI discussion, which is a huge one, of course, whether it's in the world of business with clients or with my students at USC, I'd love to hear about your background, because you've been at the forefront of some great companies: unicorn startups, Amex Ventures. So let's talk about that. What took you into this world? Was that an area you were always interested in?

Speaker 3:

Yeah, one, thanks for having me on, and Fight On. My journey into tech started actually at USC. I went to grad school at the Marshall School of Business, and I had an amazing introduction there from professors who were VCs and entrepreneurs, so it was the exact background I was looking for, and it's the exact career that I had post-grad from Marshall. I worked at the largest credit bureau in the world, in SoCal, ended up at a unicorn startup, and then, like you mentioned, was a VC at a leading fintech firm.

Speaker 3:

But what I saw was this new world of AI, specifically generative AI, and given what I know about how fraudsters use technology from all the previous stops that I had, I really thought that something needed to be done to fight against the dark side of generative AI. So I started this company, AI or Not, simple in name, hard in what we do, to protect exactly that: to keep the good of generative AI for the people who want to use it and enhance their creativity, but stop the bad actors from using it for the dark things that they want to do. So that's our mission at AI or Not.

Speaker 2:

Yeah, that's such a big mission and a complicated issue, particularly with your background. One of the things we talk to our students a lot about is data privacy, the different restrictions and laws around the globe, and how secure our data is or isn't. We've seen a lot of breaches in the world of fintech. Even companies like Equifax, which are supposed to be holding our data very securely and helping us understand our credit scores, have had breaches. And this new topic of AI is something even more complex, perhaps another entry point for those bad actors that you speak of. What are some of the lessons that you learned along the way that you're taking into AI or Not?

Speaker 3:

Yeah. So for the record, I was not employed at Equifax.

Speaker 2:

That was just an example.

Speaker 3:

Listeners, don't be mad at me. I was one of the half of the US population affected by that as well; I got all the letters. Now, the generative AI side of our data, I hate to say it, but in a lot of ways we did it to ourselves. We've posted online all the videos we've made, all the pictures we've shared, and all the things we've written on forums, and all of those things are being used to train these AI technologies. Recently, personal opinion, the best AI video generator I've ever seen was created by TikTok, and they even say, hey, we used about 2 million TikToks to train this. So thank you, world, right?

Speaker 3:

So in that way, on the generative side, the training data is in a lot of ways the user-generated content that we've produced over the years as active internet users and active social media users. So this one is on us. The credit bureau stuff, that's on them, for sure, but this one's on us. Data privacy still plays a big role, though. There's a difference between using data for training but not for harm, versus using it on a one-to-one basis: here are the things that Toli did, here are the things that Anika did, things that can identify a person on a one-to-one basis, just like a social security number does. That's where you should draw the line. A lot of the concerns with DeepSeek, the new Chinese open-source model, are exactly that they track you on a one-to-one basis and send it to foreign countries, versus the policies of a lot of other AI companies.

Speaker 3:

It's used for general training, but not to track you on a one-to-one basis. And at AI or Not, we actually don't store or use any of our consumer content. For any of our hundreds of thousands of users, who have run millions and millions of checks with us, we delete it right on the spot. I treat everything as if it had personally identifiable information, even though most of it is anime and AI art and stuff like that; I treat it as if it did and delete all of it. So on the training side, we are a part of it, for better or for worse; pieces of us are in those neural networks in a lot of ways. But on the data privacy side, there are things that AI companies could do, should do, and hopefully are doing to keep us all safe.

Speaker 2:

Yeah, and what you just shared about the difference between DeepSeek and other chatbot companies, that's a perspective I hadn't considered yet, so I appreciate you sharing it. They have a very clear policy that's really easy to read, whereas with others you have to dig through a little bit. I do love things like Anthropic, right, because they have their constitution, and I know they're not going to use my data where other platforms do. A lot of these platforms were using our data before we even knew they were using it, and depending on what country you're in, you can opt in or opt out. Those are all things that I know cause concern for me and for a lot of students in a digital media program, who might still be new to digital media from this perspective and aren't comfortable using AI tools yet. So can you walk us through the process people use when they come to AI or Not?

Speaker 3:

Yeah, sure. I think everything you said is so accurate. There are so many perspectives on it, and a lot of the tools and forums, even the word processors that we write in, use our data for AI training by default. So you should check. Even on social media, and I'm not just talking about the Meta platforms; even LinkedIn has AI training on by default.

Speaker 3:

I'm not saying anything controversial, just stating facts. In the case of DeepSeek, I did go through their terms and conditions out of curiosity. I still use it for research, but I would never put a tax question in there, and I would never download it on my phone either. They will track your location, how you type, the exact things that you type, the prompts that you make; they track it down to your location. So you're right, there's a lot to that.

Speaker 3:

At AI or Not, we have a few hundred thousand customers, over 250,000, including businesses and even governments who use us, and the use cases range broadly. A lot of people are just curious about what they see online. One of the slogans we have is question everything. Don't believe everything you read was kind of the slogan of the internet; now it's don't believe anything you read, hear, or see on video, almost all the senses. You really have to question everything.

Speaker 3:

So it's users who want to make sure: hey, is this actually happening, is this real or is this a scam? There's a lot of that. There are also a lot of artists, a lot of people in the digital media and creative space, who want to know: is a piece of art AI or not? And on the business side, it's businesses protecting themselves against risks coming from generative AI, whether it's fake IDs being generated with artificial intelligence, or deepfakes of their CEOs or bosses trying to get a wire from them or from an employee, and the list goes on and on. So that's some of the breadth of use cases that we see. We'd love to hear a little bit more about how your students are thinking about it, and I'm happy to share anecdotes I have for them.

Speaker 2:

100%, would love some anecdotes. That's always, I think, the most fun, because it sounds like there's such a variety of clients that you work with. A lot of the questions are around security, right?

Speaker 2:

How private is my data going to be? In the class I'm teaching now, we walk through how to go through your LinkedIn settings. We go through websites and cookie tracking, the differences between websites in different countries, and how you really do need to pay attention instead of just clicking yes to everything and letting it track you, unless you don't care. But most of us care at least a little bit, right?

Speaker 3:

Yeah.

Speaker 2:

And I think they have concerns around people misusing it, and around the difference between using it as a tool and AI potentially being a job replacement: what jobs are going to be available? And then I also talk to them about the ethics behind who the people are who are actually labeling the data

Speaker 2:

from the Internet, even the stuff that we don't want to see or hear, so that we don't have to go onto a platform and see really disturbing images; how much or how little those workers are paid; and what implications that has for their own mental health, in the many countries in Africa where a lot of the people doing the training are. So I get deep into it, because I want everybody to understand, from an ethical perspective, these are things that are being done, but what can we do better? How

Speaker 2:

can we make the future of generative AI better.

Speaker 3:

Right, yeah, I think all of that is really accurate. There are some concerns. I love that you take your class through some of the privacy policies and terms that you sign up for. In most cases, it's fine, it doesn't matter. But, and I'll keep picking on the DeepSeek example since it's the number one app in the app store, it's not fine if you start asking it personal things, decisions that you want to make, or, in my case, questions about taxes.

Speaker 3:

It's that time of year for all of us, and I probably wouldn't put any of that into a model like that. Where the privacy matters, like you said, is when you compare and contrast one versus another. As far as the ethics, I think there's a lot to it on both sides: what training data is being used, but also how it is being spit back out. For example, in the world of art, say you're an artist with a very distinct style. I'm a huge fan of Shepard Fairey; I have his book behind me, and right here I have one of his pieces of art on my desk wall.

Speaker 3:

He's in LA, and if you're using his art as training and then spitting back out that same type, the same exact art, that's not right. If it's mixed in with others, just as concepts, I can see it being okay to try to produce something net new. But if you're reproducing an artist it was trained on, I think that's where the line kind of gets crossed a little bit. I'll use another example: in about a week, Christie's, one of the largest art dealers in the world, is doing an AI-only art auction.

Speaker 3:

I think that's really cool. What a cool opportunity, right, for new, emerging artists to create net-new art using these amazing technologies. All of it is unique, all of it is incredibly innovative, using all different techniques of prompting, in some cases robots making the art; it's really quite amazing. But it's all net new, versus an AI just spitting back out, regurgitating in a lot of ways, art that was previously made by a specific artist. I think that's where we're really crossing a line on AI ethics, if that makes sense.

Speaker 2:

Yeah, and not to get even more complicated, but is this where things like blockchain come in?

Speaker 3:

I think there is a possibility. Funny enough, in the Christie's AI art auction you mentioned, you're actually purchasing an NFT; that's the way they're doing it, because there are some funny things with AI art. If you just use a prompt, the result isn't considered copyrightable, so what do you even own as an artist? I think there are going to be a lot of changes. Personally, I do think it should be copyrightable, because I think a prompt is like code: it's your IP, it's how unique you are. But they're doing a blockchain NFT on this specific piece of content, because even an AI won't produce the same exact thing twice.

Speaker 3:

You know, just like when you write something by hand, you're not going to write it exactly the same every single time; same thing with an AI. The same prompt will produce two different pieces of art, or whatever you're trying to produce. So I do agree: when you think about something very unique that needs a trustworthy record, it really does scream blockchain use case.

Speaker 2:

So what are some of the ways that people new to AI, students at USC or not, right, what are some of the things they should feel comfortable using AI for? Or do you have recommended tools that you love? Because I know I have some that I love.

Speaker 3:

Yeah. So here's the good part: we're all new to it, because the whole world is new, right? This isn't something where some of us are playing catch-up; it's something we're all learning as a whole. So if anyone tells you they're an expert with, like, five years of experience, well, this world has been around for essentially 12 months. This whole space has existed for such a little amount of time, and when you get into a lot of the newer use cases, it's even less. So the best way, I think, is actually just to dig in and play with it. Use it as a partner; don't use it as a crutch, but as a partner.

Speaker 3:

If you're thinking through something, I have this funny experience where just the process of prompting the AI gives me ideas, because you have to speak to the AI super clearly. Sometimes just writing my own thoughts clearly to a bot, it's almost like the process of writing clearly is clear thinking; it helps on its own, and then you have a partner to do it. So the ones that I recommend, I call them the big three for general AI use cases, and I'm happy to go through which ones I like for other cases. The big three for me are: Claude, which is an incredible writer and thinker, but as a writer really, really great. Perplexity, which I love as a research tool; it's essentially AI-enabled Google, so it's incredible for research. If I want to get smart on a topic, or a huge report, or something that happened in the news, Perplexity is my go-to. And then, finally, the OG: ChatGPT. It is really incredible at thinking, and the reasoning models are best in class, so if you're really trying to think through problems, it is the best way. I also think they have a very underrated tool in their audio tool. We have an AI or Not podcast, and I did an episode with ChatGPT's audio tool as my co-host; it was actually a really fun experience. So those are the three big ones.

Speaker 3:

Some other use cases, depending on where your students, or people in general, are in their process: for images, Midjourney is incredible. It's the most realistic and, I think, the best for art as well. If you're trying to be creative and create really cool pieces, Midjourney is the best.

Speaker 3:

Ideogram, I think, is quite an amazing image generator as well. It's very realistic, and it does do real people, so it can create some funny things, but also some not-so-funny things; it is incredible at producing realistic images, people, and even text in image form. So I recommend that one. Another use case that's really emerging, and something that even we at AI or Not are investing in heavily, is AI in software development and coding. We use Cursor at our company, which is an AI-powered code editor, an IDE as they call it, and funny enough, it's actually Claude in the background, Anthropic's Claude model. There are a lot of other tools in that space, but I think that's one of the highest value-adds my business has seen from AI. So sorry for the long answer, but hopefully that's helpful for some of your listeners and students.

Speaker 2:

Absolutely. And when you were conceiving this idea, how different was it from being an investor at Amex Ventures, or being an employee at Trulioo, and all the other things that you did before?

Speaker 3:

Completely different, because it's my company, for better or for worse, so every problem is mine. At Trulioo I was an early employee, but I had a very specific set of responsibilities; I covered a lot of the business side, the business development side. At Amex, my job was to find amazing founders to invest in.

Speaker 3:

So, also completely different. Starting from scratch is quite a different journey than showing up at a Fortune 100 company, with HR and amazing dinners and all that, or at a startup where you have an incredible founding team and investors already backing the company. You're starting from zero. Quite a different journey, for sure, but one that I felt really compelled to take, because I was seeing the things that were happening with generative AI. When I started this company in late 2023, we were still in the world of six fingers, seven fingers, really wonky things, but I knew it was only a matter of time.

Speaker 3:

If anything, the history of technology is that it changes so, so fast. We overestimate what can be done in a month but underestimate what happens in six months or a year, and AI, I think, has accelerated all of that. So I knew it was only a matter of time before these tools were so good you couldn't tell the difference, and because of that, they were going to be used for bad. Frankly, I did not think there was enough being done to protect people and businesses from that, and given my experience, spending a decade fighting fraudsters and KYC crimes and things like that, I thought: this is the next wave of it, and I should do something about it.

Speaker 2:

With your business case, was it hard to find the right people to work with you, the right people to invest, if you were taking early investment, and the right clients to start onboarding?

Speaker 3:

Yeah. So a lot of our employees are really mission-driven. We have a team of AI researchers, and they're really passionate about the problem. For one, it's a super challenging one, so for them it's an incredibly interesting problem set, and we do multi-modality. And my co-founder is someone I've known since third grade, so shout out to Johnny, John Nelson; he's my co-founder.

Speaker 3:

It's amazing to start a company with a good friend of mine. And then on the investor side, early on, the pre-seed round was definitely investors who knew me personally, ex-founders and seed-stage VCs who've known me for a long, long time. Then recently, and it was just announced a couple of weeks ago, we raised a $5 million institutional round, also from people who've known me for quite a number of years, once we started to show really incredible progress: here are the things we're working on and building, and the roadmap is even more exciting. Having the advantage of the network from my background was definitely helpful, but you still have to do the work, there's no way around it, to continue to show progress and continue to fight the good fight. No amount of slides will save you from that, and that's the fun part to me.

Speaker 2:

Did you have a particular customer avatar when you started? Did you know, okay, we think this kind of business, and this kind of business, and maybe this other one will really need our services?

Speaker 3:

I did, and it was a lot of the same types of groups I'd sold into for the majority of my career: risk, fraud, and compliance teams. What surprised me the most is the scope of the problem. I underestimated the number of avatars, almost personas, that I'd have to work with. For example, just a week ago, someone in the medical field, I can't mention the company, told us they'd been getting AI-generated x-rays for insurance fraud, and they tested us. I was like, well, I've never personally tested x-rays in my own product, but have at it. And he said, oh, you guys detect it, which is a really cool feeling. Fake x-rays for insurance fraud, dental x-rays too. So that's just an example of the far reach of the problem. Or take the devastating fires in LA: there were images of the Hollywood sign being on fire that were completely AI-generated, and I can't imagine the amount of insurance fraud being created with AI. So we've seen use cases like that.

Speaker 3:

On the audio side, it's voice: fake phone call scammers recreating someone's voice. It takes roughly 20 to 30 seconds of someone's voice. So you pick up the phone, hey, who is this? I'm sorry, I can't hear you. And it's over; they already have enough, and they can recreate your voice to say anything, which is an incredibly terrifying thought. And then there's AI music as well, protecting artists and protecting people. There are quite a few YouTube channels and other content being posted on the streaming platforms claiming an artist put out a new song when it's 100% AI, and I don't think that's fair on a number of different angles: monetary, brand, and just someone's likeness, which should not be impersonated. So the scope runs from kind of funny, kind of silly, to yeah, that's really serious and not okay, and everything in between.

Speaker 2:

Yeah, wow. Well, I just think about how, for instance, we don't have enough radiologists, so computer vision helps radiologists scan x-rays, finding things in scans or saying, oh, you're actually clear, and then of course the doctor is there to confirm it. So that's one use case, but on the other side, fake x-rays for insurance fraud. The number of things people dream up using this technology is unlimited and unfathomable, really.

Speaker 3:

I agree, I agree. And I don't think it's specific to AI; with all of technology, there's always a good and a bad side. From the start of the internet, there's a good and bad side: the sharing of information, but also the sharing of misinformation. When you get to streaming, you can stream really cool, funny, entertaining content, but also the worst things that you can possibly imagine.

Speaker 3:

Blockchain and the cryptocurrencies on it have created new digital assets and new ways to move those assets, but also a new way to launder money and for fraudsters to operate. So AI is no different. It's a technology where someone in a digital media program will think, wow, this would be so cool for the work that I do, and a bad actor will think the same about the things they do. It's really in the eye of the beholder, in how the user sees it. AI, like any other technology, is going to have a good and a bad; I think the difference is just how far-ranging the scope is and how impactful it can be for both.

Speaker 2:

Yeah, thank you, Toli. You've given us some great examples and a couple of anecdotes. I'd love to hear what has been the most surprising use case for your company.

Speaker 3:

Most surprising use case. The x-ray one kind of took me by surprise, but I have another. Within a couple of months of starting the business, I think it was three or four months in, I had a very active user, and she started emailing me. I encourage all my users to email me, free, paid, whatever the case; I love speaking to my users and seeing what they use it for. We got on a phone call, and this woman had bought a piece of art for six figures, and it turned out to be AI.

Speaker 2:

Oh.

Speaker 3:

Yep. She was working with law enforcement, and we detected it: yes, it is AI; here, we have the proof right here for you, it is indeed AI. However, with AI art itself, I'm not sure if it's a scam; it's a misrepresentation, but AI art is still art, and it was definitely misrepresented. The legality of it is a little weird, and I really felt for her, because there was really no good outcome for her; law enforcement isn't going to do much.

Speaker 3:

You know, we did confirm what her suspicions were, but this one just showed me something early on in starting this business, because I'm thinking, oh, I'm going to help companies, I'm going to help governments.

Speaker 3:

It's like, no, I'm actually going to help people too, just normal people, whose own lives can be affected by these technologies in a very negative way, and that's one of our users. It was actually a really eye-opening experience, and it really pains me to keep seeing use cases like that. Recently there was an AI-generated Brad Pitt who scammed a woman out of $850,000 and even had her divorce her husband, so her life is in absolute shambles because of an AI-generated Brad Pitt, voice and video. That, to me, was the biggest surprise, or biggest shock. Just months into starting the business and pitching businesses, here's how we can protect you against this new tech, I realized, oh, individual people can actually be very affected and very hurt by this as well, and that made me very passionate to help that group too.

Speaker 2:

Yeah, wow. I've been reading a lot lately about fake profiles of big celebrities and how that has turned into a scam, and dare I even say that I know somebody who thought they were speaking to somebody famous and was scammed, and said, well, this person knows about this and this and this; how would they know that if they're not really this person? It just shows you the lengths people can go to. Maybe that person's phone was being tracked without their knowledge, maybe they had an assistant, there could be any number of situations. And I don't know if this is something you can answer or not, but I'm curious: when you were able to identify that piece of artwork as AI artwork, what were the markers that you looked for, or were there things that you could identify? Was it pieces of code?

Speaker 3:

Yeah, so none of the reviews that we do are actually manual, so it's nothing I could just point to. If I look at a picture, sometimes, honestly, I can't tell anymore, and I look at this stuff all day. What our computer vision models do, and we have dozens by now, is look at pixel-level patterns; they really zoom in to a level that a human eye cannot. Just like you mentioned with your radiologist using computer vision, it's the same process; we're just looking for different signals. They're looking for anomalies in the x-rays, things that look off that could indicate something. We're looking for patterns that indicate AI versus patterns that say,

Speaker 3:

No, this is an iPhone picture, or this is just a standard Canon picture. So it's a very similar process of really chopping up the image, and I don't mean us, I mean the computer vision models, really zooming in and identifying the pixel-level patterns of what constitutes an AI-generated image versus a real one, or whether there are anomalies or changes that show, hey, some things are being flagged here. And this all happens in seconds, usually less than a second, so we do it all in near real time. But to be honest, when I look at it myself, I can't tell anymore. The tools I mentioned for AI image generation, when I see their output I'm like, wow, I can't tell.
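[Editor's note: AI or Not's actual models are proprietary, but the idea of pixel-level statistics the eye can't see can be illustrated with a toy sketch. The example below is purely hypothetical and is not the company's method: it isolates high-frequency residual detail with a simple box blur and measures its energy, on the intuition that real camera images carry sensor noise while an overly smooth synthetic image may not. All names and thresholds here are illustrative.]

```python
import numpy as np

def highfreq_residual(img):
    # Subtract a 3x3 box-blurred copy to isolate pixel-level,
    # high-frequency detail -- a crude stand-in for the learned
    # filters real detection models apply.
    h, w = img.shape
    padded = np.pad(img, 1, mode="edge")
    blurred = sum(
        padded[dy:dy + h, dx:dx + w]
        for dy in range(3) for dx in range(3)
    ) / 9.0
    return img - blurred

def noise_energy(img):
    # Mean squared residual: natural camera images carry sensor
    # noise, so this energy is rarely close to zero.
    return float(np.mean(highfreq_residual(img) ** 2))

rng = np.random.default_rng(0)
# Toy "camera" image: a smooth gradient plus sensor-like noise.
base = np.linspace(0.0, 1.0, 64)[None, :] * np.ones((64, 1))
camera = base + rng.normal(0.0, 0.02, size=(64, 64))
# Toy "too clean" image: the same gradient with no noise at all.
synthetic = base.copy()

print(noise_energy(camera) > noise_energy(synthetic))  # True
```

A real detector would learn thousands of such filters from labeled data rather than hand-picking one statistic, but the principle is the same: the decision rides on measurements far below what a human eye resolves.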

Speaker 3:

Like, I made this incredible image recently of the Rock as Darth Vader. I thought it was really cool; it was for the dark side of AI. If you had told me that was a real casting, I would honestly have believed you. So you really do need AI to detect AI nowadays, because it is so realistic. I feel for your friend, I really do, because you're reading something that might be playing on your emotions, you're seeing something that looks real, and now you're hearing something that sounds real. You almost need safe words for anything outside of in-real-life interaction.

Speaker 2:

Wow. And as we move to a more digital society, whether it's digital currency or travel, I don't think I've had to show an ID at an airport the last several times I've flown; it's all biometrics. Is that going to be the next level of concern when it comes to fraud detection and people stealing your identity?

Speaker 3:

Yeah, I think this is the new synthetic ID: recreating someone's likeness, generating fake IDs with that same person. You can either try to impersonate someone who already exists or create a brand-new person who doesn't exist at all. You're right, we've gone very digital of late, especially post-pandemic; everything is no-person-required. I think there will be more requirements for in-person steps, and I have an example of that.

Speaker 3:

One of the scariest examples I have is a publicly traded cybersecurity company that was fooled into hiring a North Korean operative because the hiring process was entirely remote. They created a fake ID, an AI-generated resume, all of it, and the company never met the person face to face. He even got a company-issued laptop and immediately tried to inject malware onto the company's network. So requiring some in-person steps at times could solve for some of this, but it is quite difficult. For your everyday experience, you might just want to keep a watchful eye and question everything. If someone's speaking to you, maybe ask a friend, does this sound crazy? And if there's actual money on the line, you really want to take extra precautions before doing anything.

Speaker 2:

Yeah, yeah. I'm going to have to bring you back in six months to a year to learn what the next thing is and what's going on then, because it's changing every day.

Speaker 3:

Absolutely.

Speaker 2:

Truly. Now, on your website, which is of course aiornot.com, you have a freemium version, a base version, and then of course your enterprise clients. I love this, first of all, because it means this is a tool I can share with students and other people I know who are interested in AI; they can test it out and maybe take it back to their companies. But how have you found, because you said you have a few hundred thousand users, have you seen a lot of transition from that freemium model to the base model? And also, how the heck is your base model priced so low?

Speaker 3:

Yeah, I appreciate it. All good questions. To be honest, I would love to give even more away for free to consumers, because when enterprises use this, they really use it. My goal is to continue to provide value for everyone: whether it's your friend asking, hey, is this person real? Or someone going on a date wondering, is this photo real, does this person even exist, or is this a catfishing attempt? Or someone trying to buy a piece of art. For businesses there's actually a lot of money on the line, and I feel there's no reason I should be helping one group versus the other. So that's really the premise of it.

Speaker 3:

So the free tier I created for anyone, whether you're a student, and I used to be a student too, I wasn't exactly signing up for SaaS products either, I completely get it, or someone a little more serious. We found that about a hundred checks per month is what an individual user needs, while a company can end up doing tens of thousands to millions, though they always start by testing. But the premise is still the same: early on, just a few months into this business, I realized what an impact I could have on both people and businesses, and I decided I was never going to ignore either one. The majority of my business is B2B, but I'm still not going to ignore the consumer side. Long term, for business building and brand building, I think it'll play out well for me.

Speaker 2:

I have no doubt. I have no doubt, and it also comes through that you are so mission-driven and that people can really put their trust in you. I completely trust you, and I just met you, based on this conversation and the good I see you putting into the world. There's no world without AI now, especially generative AI. A lot of AI experts tell me we're eventually going to have agentic AI; we'll have our own agents helping do everything for us, and who knows what that will cause when agents are talking to agents. There are so many levels of complexity I can see on the horizon, and we're going to need products like yours more than ever.

Speaker 3:

That's why I started this company in 2023, which sounds like ages ago now, before agentic AI and before photorealism. I was like, oh, it's coming.

Speaker 3:

I didn't know how, in what shape or form, or on what timeline, but I knew it was coming.

Speaker 3:

And exactly that. Maybe because I've seen so many bad actors throughout my career, in the things I used to do, whether it was client side or vendor side or whatever the case, I knew this was just going to be their next tool, and I think it might be the scariest one of all, because you can mimic and impersonate and do all of those things.

Speaker 3:

So whether you're just an average user on social media seeing the next crazy news story and thinking, hey, maybe that's not real, and we've actually had news stories taken down because we told the outlet, you reported on AI-generated content, and they did delete it, because reporters shouldn't be quick to draw conclusions from a picture, let alone about an individual. Or a business saying, I really cannot let in a synthetic identity, I really cannot let in an AI-generated person, because of the harm they'll do to my business or my platform. Whatever the use case, the underlying theme is that there definitely needs to be a new layer of trust, and I hope AI or Not can be that.

Speaker 2:

Yeah, fantastic, gosh. We've covered so many topics. I really appreciate you going down this path with me and being willing to speak about so many different things when it comes to the world of AI: which tools you like best, what privacy concerns we need to be thinking about, and how we can use your tool. Is there one last message you'd like to leave the audience with today?

Speaker 3:

Yeah, I do. One, for any current or former students at USC: take advantage of everything that this amazing program has to offer. I'm wearing my USC shirt now; I put it on right before this. It was a really eye-opening experience for me. Take advantage of being a student. Take advantage of meeting the people USC gives you access to. Pick the professors you think have the most ability to open your eyes to new experiences, like I'm sure you're doing.

Speaker 3:

Learning how to work with all these new tools within AI, I think that's really important. As for everyone not affiliated with USC: I think AI is actually a really amazing tool, and a good one. I know I cover a lot of the bad, and that's kind of what I work on, but it is an amazing, amazing tool. I think we're in this incredible period where we're all beginners, so you might as well get on board.

Speaker 3:

The worst thing you can do is put your head in the sand and ignore it, thinking, oh, this isn't going to affect me, because it is. I don't have to know what you do to know there's going to be some kind of effect, and I think we're all deciders of whether that effect is positive or negative. If you're the one who figures out how to use these things in a positive way for what you do, and I mean that generally, whether it's personal or work-related or creative or whatever the case, you will be ahead of many, many, many people, and companies too. That's the opportunity we have ahead of us. So, my business aside, I do think we're in an incredible period of innovation, and we all have a really great opportunity to be leaders, thought leaders, and innovators, using these tools in the areas we know best. That's what I'd love to leave your listeners with.

Speaker 2:

Fantastic. Well, Toli, thank you so much for coming on the podcast. It's been a pleasure. I always love speaking with AI experts so I can learn more, try out new tools, and hopefully impart this wisdom to other business owners, professors, and students.

Speaker 3:

Thank you, Anika, for having me. Fight on!

Speaker 2:

Yes, fight on. And to everybody watching or listening to this episode, thank you. Leave us a rating and review, and I will be back next week with another amazing guest.

Speaker 1:

To learn more about the Master of Science in Digital Media Management program, visit us on the web at dmm.usc.edu.
