
KeyBARD
Welcome to the KeyBARD Podcast, hosted by Artist/Educator Thembi Duncan.
In each episode, Thembi sits down with trailblazers, visionaries, and innovators who are shaping the landscape of our world. From distinguished educators to acclaimed artists and tech pioneers, KeyBARD offers a platform for thought-provoking conversations that transcend boundaries and spark new ideas.
Whether you're passionate about the arts, intrigued by technology, or committed to advancing education, KeyBARD has something for everyone.
Subscribe to stay up to date!
Want to be a guest on KeyBARD? Send Thembi a message on PodMatch: https://www.podmatch.com/hostdetailpreview/1740803399472257afce75768
KeyBARD
S2.E9 | Dr. Ilke Demir | What Makes Us Human in the Age of AI?
Dr. Ilke Demir, Senior Staff Research Scientist, Intel
What makes us human in the age of artificial intelligence?
In this episode, Thembi speaks with Dr. Ilke Demir, a pioneering research scientist whose work explores the intersection of AI, digital integrity, and creative technology.
From detecting deepfakes using biological signals to defending artists’ rights in an AI-saturated world, Dr. Demir offers a grounded, compelling perspective on authenticity, innovation, and the emotional labor of being a woman in STEM -- and much more!
Explore FakeCatcher
More on Dr. Demir
Want to be a guest on KeyBARD? Send Thembi a message on PodMatch: https://www.podmatch.com/hostdetailpreview/1740803399472257afce75768
KeyBARD is produced, written, and hosted by Thembi Duncan.
Theme music by Sycho Sid.
Listen and Connect:
- Spotify: Follow, Rate, and Share
- Apple Podcasts: Subscribe, Review, and Rate
- Instagram: @Keybard_IG
- Facebook: The KeyBARD Podcast
Thembi (02:29):
Hello everybody. I am here with Dr. Ilke Demir. I'm so happy to have you on KeyBARD. How are you today?
Dr. Ilke Demir (02:36):
I am doing fine, thank you. Thanks for the invitation. It's great to be here.
Thembi (02:41):
I'm so happy to talk to you today. Let's just jump right in. What has your journey been like as a woman in STEM?
Dr. Ilke Demir (02:50):
Mostly lonely? Oh, no, no. I had very good friends along the way, very nice colleagues. But when you have special things to share, like your own journey, your own struggles, sometimes the voices, or the ears, are not from the same perspective as you, and then you are inclined to feel isolated. And you have this continuous, at least I have this continuous battle: no, I need to be proactive to not feel like that, because I know it may just be in my head, or it may be about seemingly being different but not actually being different, et cetera. So I was constantly trying to be conscious and do something about that myself. And I'm also not really an extrovert; I'm more of an introvert.
(03:49):
So it's an additional effort on top of that. But being a woman in STEM, you are one of one or two people in the class, one or two women in the lab, in the meeting, in the workplace, in your organization. And it's not only about how you interact with your environment, but also about how you're a role model, because there are so few of you around that you always carry that weight on your shoulders: I need to represent, I need to speak out, I need to make everyone aware, I need to be the perfect version of myself so that I can pave the way for others, et cetera. So it's a bit of responsibility and friction at the same time. But if you can balance that, if you can be aware of who you are interacting with, how you're interacting, where you are in your journey, and how you can elevate others to be more, then you can turn it into a journey with sparkles instead of a journey with thorns.
Thembi (04:55):
Do you find yourself having to do extra labor as a woman in STEM in terms of pushing back against maybe lowered expectations or exclusion in the field, unconscious or conscious?
Dr. Ilke Demir (05:08):
Absolutely. For some reason, to me, emotional work comes as a default. It's something that I need to do, like making lists or tracking things or taking notes or keeping that continuous record of things, et cetera. For me, these are not explicitly written in any job description, but they always need to be done, and they don't come as naturally or as usually to other people. And the more you get used to that load being only your load, the more you are just doing it and everyone expects you to do it. I only found out a little later in my journey that, no, everyone needs to do this, and everyone is not aware that it is also their responsibility, not only mine. So pushing back on that invisible work, pushing back on some admin duties, pushing back on being the only representative person on a committee, on a jury, in a selection process, because they couldn't find any other woman to be there, they couldn't find anyone with an accent to be there, they couldn't find anyone young enough, I mean, I was in a corporation, so they couldn't find anyone young enough to be on a committee, et cetera. All of these increase your workload a little too much sometimes, and I learned, a little late, that I should be able to say no, that I should be rejecting some of them, because I cannot handle everything at the same time.
Thembi (06:49):
So did you learn that you should be rejecting that additional workload from other female mentors or, I know you said there weren't very many. Yeah. So were there men who you had to learn that from?
Dr. Ilke Demir (07:03):
By experience, I would say. Oh, you learned on your own? Yeah. When things get super overwhelming and I see that, well, why am I so overworked? Why am I feeling burned out, et cetera? Then I realize that, oh, all these extra things, A, B, C, D, E: none of my old white male colleagues are doing them. I'm doing all of them. And at some point, if there were equal recognition, equal acknowledgement of that work, I would be okay doing it. But no, they're just additional things that you are expected to do without even being recognized for doing them. So at that point, I'm like, no, I understand that I'm your only option right now. I wish I was not your only option, but I need to say no to this one, et cetera, and then move on.
Thembi (07:56):
That sounds like a really inspiring way to exist in that space, to be strong and brave enough to stand up for yourself. So that's great. I mean, what even got you interested in the field? Did you find your way to it, or did you know that you always wanted to be in STEM?
Dr. Ilke Demir (08:13):
I think I always wanted to be in STEM. Even when I was a kid, we were doing surgery on electronics with my dad.
Thembi (08:23):
Oh, nice, nice. I love that.
Dr. Ilke Demir (08:26):
Opening radios, opening telephones, opening old machinery and trying to understand how it works: can we do modifications and make it work better, et cetera. My dad is a pilot, but he's also an electrical engineer, so he always had that curiosity with him. So we were always doing that. And then in high school I was selected for computer science olympiads. In those olympiads you represent your school; it's like a national olympiad, et cetera. At that point we didn't have coding classes in our high school, but because we were preparing for olympiads, our seniors in high school were teaching us how to code, and I started coding in C++, sorry, C first. And then I was like, yes, I want to do that. I want to conquer the space. I want to make computers do what I want, et cetera.
Thembi (09:19):
Okay, great. And then that led you to go to undergrad just with that on your mind?
Dr. Ilke Demir (09:25):
Yes, yes, yes. Undergrad, computer science. I also have a minor in electrical engineering. I never left that world either. Yeah.
Thembi (09:33):
And then you just kept going back, right? And now you're Dr. Demir.
Dr. Ilke Demir (09:38):
Yes.
Thembi (09:39):
And you are very passionate about deepfakes.
Dr. Ilke Demir (09:43):
True.
Thembi (09:44):
Can you talk about what a deepfake is and why you're so passionate about it, and why should we be concerned?
Dr. Ilke Demir (09:50):
Right. So right now, we are talking here on Zoom, but how can you make sure that I'm real? Maybe I'm not me; maybe I'm just my voice and my visual, and someone coded me to be like that.
(10:09):
So these are deepfakes: basically fake videos where the actor, or the action of the actor, is not real. And I'm real, just to get rid of any hesitations. And it's not only visual; there are voice deepfakes, audio deepfakes, sometimes audiovisual, multimodal deepfakes, et cetera. And these are really bringing content integrity down. Whatever we see or hear online, we always hesitate nowadays: is it real, did this politician really say so, did the celebrity really say so? And this is more on the hesitation and trust side, but in the deeper, darker places, it is really used for adult content, without the consent of anyone, and it is getting so realistic. There are some websites where you cannot even imagine the things happening. And they are so realistic because deepfakes are getting so good.
(11:11):
All these AI-based approaches, like face swaps, GANs, diffusion-based models, all of them are getting so good that you cannot even say, well, this can't be real. So because of that, first we need to use our judgment: could this person really have said so? With a little bit of research, a little bit of background, a little bit of knowing the person: could they have said so, could they have done so, et cetera. And then we also have detectors, deepfake detection methods, and these are getting more popular nowadays. So if you can just try a few of those deepfake detection methods, and if you use your judgment with the support of the detectors, that would be one of the best courses of action right now.
Thembi (12:03):
So when you started off in computer science and just got really excited about it, kept getting educated, got your graduate degrees in that, got your PhD in that, were you imagining that such a thing could be possible? Did you have a vision for what area you would go into, or did you just find your way into this? I mean, it's such a rapidly changing field and there's so many elements to it. It's so broad. What got you excited about going in this direction?
Dr. Ilke Demir (12:30):
Absolutely. Great question. So believe it or not, I was working in generative models starting from traditional generative models for maybe 15 years now, before it was
Thembi (12:42):
Cool. Oh, so you've been in the game. You're an OG in the game.
Dr. Ilke Demir (12:49):
So even in my thesis, we were looking at image-based and 3D generative models: how we can use language-like structures, procedural models, or parametric models to create 2D and 3D data that is as photorealistic as possible. So that's already my background. When you do your PhD in something, you cannot unsee everything that you are used to seeing. All that research we did on 2D and 3D data actually shaped my vision of how we can find priors in data, how we can find hidden rules, hidden signals in data. And because deepfakes and those spaces, audio, et cetera, are different types of data, different types of signals, you can actually look at what hidden, unseen priors we can catch so that we can say whether something is real or fake. Because fake data has its own signals, its own structure and priors, and real data, like us, still has hidden parts under the skin, under the surface, no pun intended. So we can still depend on those signals to find whether something is real or fake.
Thembi (14:10):
In your particular work, based on your research, you all look at biological signals to determine deep fakes. Is that right? You're looking like you said, under the surface for certain real biological signals that say, oh, okay, that is really that person versus the fake information of, for instance, somebody putting out a video of me saying, I just found the cure for cancer. Send me $50 a person, and I'll tell you what the cure for cancer is altering my face and my voice to make it seem like I said those things. How do you then find out if I really said it or not?
Dr. Ilke Demir (14:53):
So we have many different approaches for that. The very first one, called FakeCatcher, looks at the blood flow signals under the skin. Yes. So when your heart pumps blood, it goes to your veins, and your veins change color. That color change is called photoplethysmography, PPG for short. Those PPG signals have been used in medical domains and remote patient monitoring: looking at newborns to see whether they are still breathing, looking at really low heart rate situations and understanding whether there is still blood pumped to their veins, et cetera. So we actually took those PPG signals, photoplethysmography signals, and we collected them everywhere from the face, and then we look at their correlation. For fake videos, it's like you have multiple hearts throughout your face, one heart here, one heart here, et cetera, so those signals are not correlated anywhere on the face. Moreover, their spectrum, their frequency distribution, is also not matching. But if you're a real person, because it comes from one heart, they are mostly matching. Even if there are some visual artifacts, you can still see that most of the signals match each other.
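To make that concrete, here is a minimal, illustrative sketch of the correlation-and-spectrum idea, with synthetic traces standing in for real per-region PPG measurements; it is not Intel's FakeCatcher implementation, whose actual features and classifier are described in the published paper.

```python
import numpy as np

def region_ppg_consistency(signals, fps=30.0):
    """signals: (regions x frames) intensity traces from face regions.
    Real faces: one heart drives every region, so traces correlate and
    share one pulse frequency; fakes tend to break both properties."""
    # Detrend so slow lighting drift does not dominate the statistics.
    x = signals - signals.mean(axis=1, keepdims=True)
    # Mean pairwise Pearson correlation between regions.
    c = np.corrcoef(x)
    mean_corr = c[np.triu_indices_from(c, k=1)].mean()
    # Dominant frequency per region, within plausible heart rates
    # (0.7-4 Hz, roughly 40-240 beats per minute).
    freqs = np.fft.rfftfreq(x.shape[1], d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    spectra = np.abs(np.fft.rfft(x, axis=1))[:, band]
    peak_freqs = freqs[band][spectra.argmax(axis=1)]
    return mean_corr, peak_freqs.std()

# Toy demo: five regions sharing one 1.2 Hz "pulse" vs. five unrelated traces.
t = np.arange(0, 10, 1 / 30.0)
real = np.vstack([np.sin(2 * np.pi * 1.2 * t + p) + 0.1 * np.random.randn(t.size)
                  for p in np.linspace(0, 0.5, 5)])
fake = np.random.randn(5, t.size)
print("real:", region_ppg_consistency(real))  # high correlation, tight peaks
print("fake:", region_ppg_consistency(fake))  # low correlation, scattered peaks
```

A real system would first crop and track face regions over time and then feed such statistics into a trained classifier; this toy skips straight to the signal comparison.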
Thembi (16:20):
That is incredible
Dr. Ilke Demir (16:22):
Detector.
Thembi (16:24):
So really, I mean, I'm thinking, oh, computer scientist, but you're also dealing with biological issues as well. So you're going into other areas of science, biology, probably chemistry, I would say physics, it sounds like, too. Just figuring out actual human qualities, and if those qualities aren't present, then it's not that person. Okay. And is that something proprietary that your corporation created, or is that something that you're still researching and figuring out how to design?
Dr. Ilke Demir (16:55):
Yeah, that is actually mine and my collaborator's; Umur Çiftçi is a professor at Binghamton University, so it's my and his baby project. Several companies and corporations are actually planning to use it. Hopefully when they use it, we will hear about that. Some of them are around, some of them are public, but I won't say more right now.
(17:23):
So yeah, FakeCatcher is the very first one, and after that we looked at other things. As you just said, we looked at physiological signals; we also looked at physical properties, for example the eyes, right? If I am looking at this point in 3D, my gazes are converging, but for deepfakes it's like googly eyes; there is this inconsistency in the look. We also look at how our voice and our visuals match, because if I'm moving my head and talking, you will hear slight differences in the variation of my voice, et cetera. So can we actually find those variations in the video to verify whether it is real? There are several hidden signals that we are still looking at.
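Again as a toy illustration rather than the published detector: the gaze-consistency cue can be reduced to a simple geometric check. If the left and right gaze rays of a person fixating a point are extended into 3D, they should nearly intersect. A sketch, with hypothetical eye positions and directions:

```python
import numpy as np

def gaze_divergence(origin_l, dir_l, origin_r, dir_r):
    """Minimum distance between two gaze rays in 3D. A person fixating a
    point has rays that nearly intersect (distance ~ 0); inconsistent,
    often deepfaked, gazes leave the rays skew or parallel."""
    d1 = dir_l / np.linalg.norm(dir_l)
    d2 = dir_r / np.linalg.norm(dir_r)
    n = np.cross(d1, d2)
    w = origin_r - origin_l
    if np.linalg.norm(n) < 1e-9:               # parallel rays never converge
        return np.linalg.norm(np.cross(w, d1))
    return abs(np.dot(w, n)) / np.linalg.norm(n)

# Eyes 6 cm apart, both aimed at a point 40 cm ahead: rays converge.
target = np.array([0.0, 0.0, 0.4])
left, right = np.array([-0.03, 0.0, 0.0]), np.array([0.03, 0.0, 0.0])
print(gaze_divergence(left, target - left, right, target - right))   # ~0.0
# Inconsistent fake: both eyes reuse one direction, so the rays stay apart.
print(gaze_divergence(left, target - left, right, target - left))    # ~0.06
```

In practice the eye positions and gaze directions would come from a face and gaze estimator, and the divergence would be tracked over many frames rather than judged from a single pair of rays.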
Thembi (18:13):
What I've noticed is that deepfakes, people using generative AI to generate images and video, maybe to commit fraud, maybe just to tell a story that's not necessarily true, or that's imaginative and not real: they're becoming more and more sophisticated at doing this. So do you think that detection technology is always going to be able to keep up with that?
Dr. Ilke Demir (18:36):
Up until now, yes. For example, even for FakeCatcher, the PPG signals that we have been researching haven't been faked yet, and it has been more than five years now since we first published FakeCatcher. And I think as generative models get more photorealistic, they also have that danger of conforming more to the data out there. Not really photorealistic, but conforming more to beauty standards: more CGI-like, more flawless, more like, oh, there's no imperfection on my face, all these filtered faces, et cetera. So I think it is going into a high photographic-quality area instead of a really high-realism area, which would mean those hidden signals being replicated, imperfections being replicated, all of our mimics, or the circles under my eyes, et cetera; these are what make me me. The more advanced deepfake generators get, the more they actually try to make faces more beautiful, because the underlying data is inherently biased towards more beautiful data. All of us are putting our best faces on Instagram, the most made-up versions, et cetera, and no one is like, okay, this is me when I wake up in the morning.
(20:14):
So that's the trend that we are seeing right now, and that's actually good for detectors, because the real things, the real signals, are still not being replicated; they are not considered vital enough, important enough, to be replicated.
Thembi (20:28):
So the filters are good. That's what we're saying here. They're helpful. Okay. That's so interesting. You talked a little bit about the elements that make us human: the imperfections, the things that happen just underneath the surface of our skin that you can detect. So are there other things that help make us human, that allow us to not be deemed fake, and distinguish us from AI?
Dr. Ilke Demir (20:58):
So on one extreme, there are the biological and physiological signals: the heart rate, the eyes, the gazes, the consistency, the voice, the intonation, et cetera. On the other hand, there are also the behavioral and emotional cues. Even though it is not really from the pixels of an image, it's not the color, it's not the imperfection, it is actually how human I am and how much me I am. So the very first paper, not from us, but from a lab that was looking at that, I don't want to imperfectly paraphrase them, but I think it was the case that they looked at all the Obama videos and found out that whenever he says hello, he has that known intonation in the hello, in a realistic way. When they looked at the spectrograms of those hellos, they were really matching. He was really consistent. So it's really hard to actually replicate that in a more complex data manifold. It's not just the hello; if you take a paragraph that I say, making everything in that paragraph behaviorally and emotionally the same as me is still really, really hard. Even if you have thousands or millions of videos of me, that me will still be there. Wow. So these two ends of the spectrum, all the way from the human to pixel-based methods, are still giving some hope that we can actually do deepfake detection.
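A rough sketch of that spectrogram-consistency idea, using synthetic frequency sweeps in place of real recordings of a repeated word; the cited study had its own features and data, so this only illustrates the comparison itself:

```python
import numpy as np
from scipy.signal import spectrogram

def spectral_similarity(a, b, fs=16000):
    """Correlate the log-spectrograms of two equal-length utterances.
    A speaker repeating a word consistently yields closely matching
    spectrograms; a synthetic imitation tends to drift."""
    _, _, Sa = spectrogram(a, fs=fs, nperseg=256)
    _, _, Sb = spectrogram(b, fs=fs, nperseg=256)
    la, lb = np.log(Sa + 1e-10).ravel(), np.log(Sb + 1e-10).ravel()
    return np.corrcoef(la, lb)[0, 1]

# Toy stand-in for two "hello"s: same pitch contour vs. a drifted one.
fs = 16000
t = np.linspace(0, 0.5, fs // 2, endpoint=False)
contour = 200 + 80 * np.sin(2 * np.pi * 2 * t)      # pitch sweep in Hz
say = lambda f0: np.sin(2 * np.pi * np.cumsum(f0) / fs)
print("consistent:", spectral_similarity(say(contour), say(1.01 * contour)))
print("drifted:   ", spectral_similarity(say(contour), say(1.5 * contour)))
```

The consistent pair scores noticeably higher, which is the behavioral signature the detector would look for across many utterances of the same phrase.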
Thembi (22:37):
That's great. It's interesting, because we use AI in a lot of ways: for creativity, for time saving, to create humans that aren't really human, not so much for deepfakes, but I would say for creative reasons. And it does, like you said, give us hope. It makes me think about how humans will still be necessary for on-camera talent and things like that. I even think about it in instructional design and other kinds of fields, where now it's like, oh, we don't have to worry about hiring voiceover talent or on-camera talent. And yes, that does save time on one side, but it also eliminates a whole field of work on the other side, which is interesting. And it also does kind of take away that human aspect, and I wonder what the long-term effects of that will be. We don't know yet, but time will tell, of course. Also, I'd like to ask you: what keeps you up at night when you're thinking about ethical concerns around deepfakes, AI, generative AI, especially with image and video? What are your concerns about the ethics of this work?
Dr. Ilke Demir (23:40):
Well, I have been asked this question before, and I think I heard some controversial responses to my response, but I am insisting on not changing my response. So
(23:52):
Advances in AI are not what's keeping me awake at night. Uninformed people in power are keeping me awake at night. Because if those uninformed and unintelligent people, maybe with huge imaginations but zero knowledge, are in places of power, if they have the power of decision, if they can assume that they can have a say in AI, in how AI funding will go, how AI regulations will go, what is responsible AI and what is not, et cetera, then that is really pushing us into a dystopian future that I don't want to imagine. Because we have seen some cases, right, of really disturbing scenarios: oh, AI will take over, oh, AI will be in the White House, et cetera. Well, if you put the emphasis only on the dark side of AI, then first you are discouraging everyone that is working on the bright side of AI, and you are cutting the funding to all responsible, ethical, or equitable AI approaches. Or if you only use AI in places completely separated from humans: there was all this news about police departments using AI to find criminals based on face recognition, and if those forces are not looking at the actual description and the AI retrieval and making a human match in between, of course they should not be using it, right? Because
(25:43):
It's not that the image is not real. Anyway, so powerful but uninformed people making decisions about AI is what is keeping me up at night, unfortunately.
Thembi (25:54):
So you see AI as a tool, and if it's in the hands of people who have the information they need and have a positive intention, then it can go all kinds of great places versus wielding it with ignorance and ill intent. That seems completely reasonable. I think that makes a lot of sense.
Dr. Ilke Demir (26:13):
So if you have a person that you are working with, or having a conversation with, et cetera, you wouldn't blindly believe everything that they say, right? You wouldn't say, oh yeah, even though it is against my best judgment, because you are saying it, I will blindly believe it. You wouldn't do that for a real person. Why would people do that for AI? Because AI, without intuition, without reasoning, is just repeating what the collective of all human beings is saying. And all human beings don't know the truth. They lie. They have other motives to not say the truth as truly as possible, et cetera. So just treat it as another source of information. Use your judgment; if it's an area that you don't have information about, just do research about it, go look at the sources, et cetera, and then act based on that. That's basically it.
Thembi (27:15):
So taking it with a grain of salt and not just, yeah, that makes a lot of sense. So now, you talk about engaging in curiosity-driven research. What's the value of curiosity in our world today, and what does that look like? I mean, you talked about doing your due diligence and doing your research, and not just taking anything that you see at face value and running with it. What does it mean to value curiosity as a form of research and understanding and exploration?
Dr. Ilke Demir (27:44):
So I think everything that we do starts with curiosity. If you are in research and you are saying, no, I'm doing research, but I'm not curious about things, then you are not doing research. You're doing engineering, you're doing coding, you're doing programming, you're doing management; you are doing something else, and it's not research. Research starts with open-ended questions, which is curiosity, right? So you need to be curious: oh yeah, there are deepfakes; what is hidden under deepfakes? What is in real videos that is not replicated in deepfakes? Or, there are all these AI agents; what happens if, I don't know, I'm just making up open-ended research questions on the fly, what if we put several AI agents in a constrained scenario? Would they cooperate? Would they fight? Would they, I don't know, share the resources? Would they battle each other to get the resources, et cetera? Or there is so much multimodal research that is not done yet. When I say multimodal, people understand, oh, image and audio, or image and text, or text and audio, et cetera. But no, there are so many different forms of data. We just talked about physiological signals, right? Like the blood flow or my breathing pattern,
Thembi (29:04):
Right?
Dr. Ilke Demir (29:05):
I'm doing this when I'm talking, but in a super multimodal scenario, does my breathing pattern really match with my words, or my blood flow, or my vocal cords, et cetera? So all of these open-minded questions, this open-minded perspective, actually brings you the most grounded research. It gives you a little more flexibility to go from the big picture to the actual smaller question that you can solve in one paper, versus one whole program. Sometimes curiosity-driven research is having old questions but new hammers, new tools; sometimes it's bringing old tools to new questions. Sometimes it's just trying to understand, oh, does A work with B, because these were completely different monsters back in the day; now can they play with each other, et cetera? So all these bigger-perspective questions actually enable really strong research that is not just a little bit incremental, like, oh yeah, it's not 95%, it's 96%, yay; it's actually, oh, we can open this brand new field of research, opening the door for others that are coming behind you.
Thembi (30:36):
Curiosity, and what about creativity? You've contributed to short films, you've contributed to animated features with your work. I don't know how many computer scientists are involved in this; maybe I'm just unaware. I think it's really cool. Can you talk about some of the ways that you've contributed to these creative projects?
Dr. Ilke Demir (30:58):
Absolutely, absolutely. So you probably see WALL-E behind me. Yes, yes. I didn't work on WALL-E; WALL-E was much, much earlier. But I was at Pixar. I started as an intern, then I continued as a software engineer, and I was really lucky to work on everything from the build system to the bugs, all the little things that we could do, in Finding Dory, in Coco, in The Good Dinosaur, in Toy Story 4, all of them. So we did very little things, and mostly I had the build system interface, which makes all of these software packages talk with each other so that they know their versions, they know how the build dependency cycle is built, et cetera. So that's at Pixar. At Intel Studios, I was the research director. Intel Studios was a completely different place: a 10,000 square foot dome for 3D movies, 3D AR movies, 3D VR movies. We were doing volumetric capture. There were so many different productions. For one of them, a K-pop band actually came to the studio and did their music video. Yeah.
Thembi (32:29):
Oh my gosh.
Dr. Ilke Demir (32:30):
And it's all captured in 3D, without markers, without suits, without anything. They come as they are, and they just dance there, and we capture everything in 3D. One of the newest ones also started at Intel Studios. Have you seen 'Here'? Tom Hanks is in it, a movie which is actually the story of a place throughout many, many, many years. That actually started at Intel Studios.
Thembi (33:02):
No, I didn't see that. Is it out now? I mean, are we able to,
Dr. Ilke Demir (33:07):
I think I saw a trailer for it at some point on YouTube. Yeah. We also had musicals, like the Grease musical for its, I don't know, 40th anniversary, something like that. The original choreographer actually came to the studio with new dancers, and they did this whole music piece in the studio so that we could capture it in 3D. And after that, you can actually play it everywhere. You can play it on a stage and just dance with the people, because it's in 3D; it's not just one view. You can go inside and dance with them, et cetera. So
Thembi (33:49):
Anyway,
Dr. Ilke Demir (33:50):
That was the most futuristic thing happening in the AR/VR space. Still, there are several stages: there's Metastage, there are some light stages, et cetera, but they are nowhere near as big as Intel Studios was.
Thembi (34:07):
And so how do you see the arts playing a role in AI research and technology development? I love what you described in terms of how the technology supported the arts. Is there a way that can be the other way around? Is it already the case?
Dr. Ilke Demir (34:21):
Yeah, so we also work a lot in 3D, in modeling and reconstruction, and on how we can bridge the gap between traditional 3D content creation and deep content creation. And I think the next solid step, the next ladder that we will be climbing, is how we can make it more seamless for creators, artists, visual effects personnel, painters, sculptors, et cetera, to use the tools that they are used to using, but integrated with AI approaches, deep learning approaches, et cetera. Maybe a very small example of what we were doing in the studio: in the 3D reconstruction process, if it comes out as a point cloud, the point cloud is very dirty, very noisy. Instead of having this nice surface of faces, there's point, point, point, point, right? And it's very easy for a deep learning approach to actually clean it, create a surface, resample those points, and keep everything there.
(35:39):
So an artist doesn't need to go and clean it point by point, doesn't need to do those 3D operations; they can do it in a very seamless manner. Deep learning can actually say, okay, here is a smoother surface, a smoother 3D model, et cetera. Same for diffusion-based 3D mesh creation, right? For example, you give one image of a teddy bear, it's just an image, but suddenly you actually have the 3D version of the teddy bear. Same for texturing: you give a textureless mesh and just one sample of a pattern, maybe some carpet, maybe, I don't know, some shades, et cetera, and suddenly the 3D object is textured in a realistic manner with that texture. So using all of these traditional modeling, texturing, and reconstruction workflows with integrated AI modules is what will, I think, make everything much more usable.
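For a flavor of the cleanup step, here is a small sketch using classical Open3D operations, statistical outlier removal plus Poisson surface reconstruction, as a stand-in for the deep-learning pipeline Dr. Demir describes; the synthetic "scan" and all parameter values are illustrative assumptions:

```python
import numpy as np
import open3d as o3d  # classical stand-in, not the studio's deep pipeline

# Noisy scan stand-in: points on a unit sphere plus scattered outliers.
n = 5000
pts = np.random.randn(n, 3)
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
pts += 0.01 * np.random.randn(n, 3)                 # sensor noise
pts = np.vstack([pts, 2 * np.random.rand(200, 3)])  # stray outlier points

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(pts)

# Step 1: drop points whose neighborhood distances are statistically abnormal.
clean, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Step 2: estimate normals, then fit a watertight surface through the points.
clean.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(clean, depth=8)
print(mesh)  # a smooth triangle mesh an artist could texture directly
```

A learned approach would replace both steps with a network trained on pairs of dirty and clean scans, but the artist-facing result is the same: points in, usable surface out.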
Thembi (36:43):
I love that. I feel like that will be super useful in gaming, right? For sure. Just in terms of realistic textures on avatars, and like you said, when you talk about the teddy bear, I'm envisioning the texture of the teddy bear's fur and just how things like that can evoke sense memory, and I can imagine that just adds to a much richer experience in art. It just seems really interesting. What is the most exciting project you're working on right now? The most exciting,
Dr. Ilke Demir (37:19):
I wish I could talk about it. It's about,
Thembi (37:22):
Can you give us a hint?
Dr. Ilke Demir (37:23):
Yeah. So we have this line of research where we are trying to enable artists to protect their own content. The very first one was not even for artists, but for everyone. It was called My Face, My Choice. If you don't want to appear in a photo, we find the most dissimilar face to your face, and your face is deepfaked with that most dissimilar face, so that you are not in the photo anymore. But the expression, the smile, the gaze, everything is there. So that was how we were doing anonymity with deepfakes.
Thembi (38:02):
Wait a minute, hold on, hold on. That's interesting. You're like, okay, so we don't want the deepfakes if they're, yeah, I guess it's your permission, right? You're giving permission to deepfake yourself. So that's a great example of deepfakes being used for a good cause. I like that.
Dr. Ilke Demir (38:18):
Yeah, yeah, exactly. So, a very simple example: you are on social media, and someone uploads a photo of you, maybe your favorite cousin, but the photo is not your favorite one, and you don't want to be in the photo. To your cousin, for example, and to you, it'll be your photo, because you know that it was really you. But to, let's say, your second circle of friends, you don't want to appear in that photo. So based on your privacy settings, your photo is actually changed so that the person in that photo is no longer you. It's someone else,
Thembi (38:54):
Basically. And that's My Face, My Choice,
Dr. Ilke Demir (38:57):
Yes, My Face, My
Thembi (38:58):
Choice,
Dr. Ilke Demir (38:58):
Yes. We published My Art, My Choice. This is especially for creatives and artists. There are so many lawsuits against generative AI platforms because they actually collected all that data from the internet without permission. So if you go there and write, I don't know, create me a teddy bear in the style of Van Gogh, you'll actually get a teddy bear in the style of Van Gogh, even though it should not allow you to do that, for example. For very big artists, of course, maybe that is not a problem anymore, but it's valid even for smaller artists, recent artists, and there were huge campaigns against that, and lawsuits going on, I guess. Anyway, so My Art, My Choice takes an image and learns to generate a protected version of it, so that when you give the protected version to any generative AI model, any diffusion model, Stable Diffusion or whatever, the output is very noisy: it doesn't look like the input at all, doesn't look like the style at all, et cetera. So we actually break diffusion models
Thembi (40:12):
So that
Dr. Ilke Demir (40:12):
Artists' work cannot be replicated.
Thembi (40:15):
So as an artist, with My Art, My Choice, any image of my work that I place online, I'm going to want to use this tool so that it essentially scrambles the beauty of the art. And so would the generative AI model then just ignore it, or does it pull it into all of the other stuff that it's gathering and just can't make sense of it? It can't replicate it, I guess.
Dr. Ilke Demir (40:43):
Yeah, it cannot replicate it, because My Art, My Choice is a generative model itself, but one that is trying to learn how it can best break the diffusion model. It is putting all these hidden things inside the image so that it pushes the diffusion output as far as possible from the original.
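As a rough conceptual sketch only: the published My Art, My Choice work trains a generator end-to-end against diffusion models, but the underlying adversarial-perturbation idea can be shown in miniature with a PGD-style loop against a stand-in encoder; the encoder, step sizes, and budget here are all hypothetical:

```python
import torch

def protect(image, encoder, eps=8 / 255, steps=40, lr=1 / 255):
    """Nudge pixels within an eps ball so a surrogate encoder's embedding
    moves as far as possible from the original. This is the adversarial
    idea in miniature, not the published method, which learns a dedicated
    protection generator instead of iterating per image."""
    target = encoder(image).detach()
    adv = image.clone().requires_grad_(True)
    for _ in range(steps):
        # Maximize embedding distance = minimize its negative.
        loss = -torch.nn.functional.mse_loss(encoder(adv), target)
        loss.backward()
        with torch.no_grad():
            adv -= lr * adv.grad.sign()           # ascend embedding distance
            adv.clamp_(image - eps, image + eps)  # stay visually faithful
            adv.clamp_(0, 1)
        adv.grad = None
    return adv.detach()

# Toy surrogate encoder; a real attack would target the generator's own encoder.
encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 128))
art = torch.rand(1, 3, 64, 64)
protected = protect(art, encoder)
print((protected - art).abs().max())  # perturbation stays within eps
```

The pixel budget keeps the protected image looking like the original to a human, while the hidden perturbation is what pushes the downstream model's output away from the artist's style.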
Thembi (41:03):
I see. That's fascinating. So as an artist, you would just want to make sure that anything you're uploading, you're putting through that model first, so that the version out there is the scrambled version. I shouldn't say the scrambled version; the protected version. The protected version, thank you. Okay. That is so interesting. That sounds really exciting.
Dr. Ilke Demir (41:22):
Our latest research is around those, more about content integrity and content protection, and it's coming soon.
Thembi (41:29):
That's fantastic. I've spoken to many artists who are very uncomfortable with that. They have a lot of their work online; it's their livelihood, it's all of their effort, they're putting their heart and soul into the work, and the work just gets pulled. People have even said they've seen work online that resembles their own, created through generative models. So that sounds like a really big need that you all are addressing. I love that. So you've got all these great things that you've done so far, and really exciting projects that you're working on. I bet you're working really hard. What's your self-care regimen?
Dr. Ilke Demir (42:04):
Oh, well, I dance if I,
Thembi (42:09):
Oh, wow.
Dr. Ilke Demir (42:10):
Yes. I have been dancing for 18 years, and that is where I disconnect from everything about work, everything that happens during my day, and I just do something social, with music, with movement. That's my battery; I just recharge while dancing.
Thembi (42:36):
Yeah, I love that. Are there certain styles of dance that you like in particular, or
Dr. Ilke Demir (42:40):
Just everything? Yeah, salsa and bachata. So I have been also teaching salsa for more than 10 years now.
Thembi (42:47):
Oh, that's wonderful. I love the salsa and bachata scene because it's so diverse: age, ethnicity, background, everything. It just seems like so many people love to do it and come together, and everybody's there dancing. It's not like people are sitting around watching; everybody's involved and engaged. I love that.
Dr. Ilke Demir (43:10):
Absolutely, absolutely. From everywhere in the world, from all ages, from all social or economic places in their lives, et cetera. It's such a universal language. I also travel a lot, right? I was in Korea a couple of months ago, I was in France, et cetera. Wherever I go, salsa and bachata don't change. We still speak the same language the moment I'm there. Yeah. So
Thembi (43:39):
Everywhere you go, there's a scene where you can go dance. Oh my gosh, that's wonderful. That is absolutely wonderful. Yeah. Okay. Well, wow, I am so glad that I got to speak with you today, and I learned a lot. Now I have to go and sit with this and think about it. It's like, deepfakes can be bad or good, but I love how you talked about how it depends on whose hands the tools are in, right? All these tools can be used for whatever. So I just hope that we will follow the better angels of our nature as we move ahead, and not create the sort of dystopian future that I think, in a lot of ways, we see if we continue down the road that we're on. So is there anything else that you want to say before we head out?
Dr. Ilke Demir (44:21):
No. And thank you for being an incredible host, for making it so easy to talk, and for all these very interesting questions. So thank you, really.
Thembi (44:30):
Oh my gosh, the pleasure is mine. It was such a joy to speak with you. Thank you, Dr. Demir. Until next time, this is Thembi. This is KeyBARD. Have a good one.