
AI Proving Ground Podcast
AI deployment and adoption is complex — this podcast makes it actionable. Join top experts, IT leaders and innovators as we explore AI’s toughest challenges, uncover real-world case studies, and reveal practical insights that drive AI ROI. From strategy to execution, we break down what works (and what doesn’t) in enterprise AI. New episodes every week.
Deepfakes for Good? How AI is Powering the Next Era of Personalization
Can deepfakes go from dangerous to delightful? In this episode of the AI Proving Ground Podcast, Adam Dumey and Chris Roberts explore one of the more surprising — and controversial — topics in generative AI: the use of synthetic media and deepfakes in personalized customer experiences. You'll learn: what deepfakes really are and why they're no longer just a security threat; how AI-powered personalization is reshaping the customer experience; and the ethical and technical challenges of using AI-generated content responsibly.
Learn more about this week's guests:
Adam Dumey is a tech executive with 20+ years of experience leading AI, Autonomous Retail, and Cloud initiatives across sectors. He advises boards and C-suites on digital transformation, driving major efficiency and revenue gains. At WWT, he leads Global Retail Sales, helping clients grow revenue and optimize operations by aligning strategy, tech, and partners like NVIDIA, Dell, CrowdStrike, and Cisco.
Adam's top pick: Navigating the Future: Three Emerging Trends in the QSR Industry
Chris Roberts is a technology expert at World Wide Technology with extensive experience across aerospace, AI, adversarial AI, deepfakes, cryptography, and deception technologies. He has founded or collaborated with multiple organizations in human research, data intelligence, and security. Today, he focuses on advancing risk management, maturity models, and industry-wide collaboration and communication.
Chris's top pick: Deepfake Deception: Can You Trust What You See and Hear?
The AI Proving Ground Podcast leverages the deep AI technical and business expertise from within World Wide Technology's one-of-a-kind AI Proving Ground, which provides unrivaled access to the world's leading AI technologies. This unique lab environment accelerates your ability to learn about, test, train and implement AI solutions.
Learn more about WWT's AI Proving Ground.
The AI Proving Ground is a composable lab environment that features the latest high-performance infrastructure and reference architectures from the world's leading AI companies, such as NVIDIA, Cisco, Dell, F5, AMD, Intel and others.
Developed within our Advanced Technology Center (ATC), this one-of-a-kind lab environment empowers IT teams to evaluate and test AI infrastructure, software and solutions for efficacy, scalability and flexibility — all under one roof. The AI Proving Ground provides visibility into data flows across the entire development pipeline, enabling more informed decision-making while safeguarding production environments.
Today on the AI Proving Ground Podcast: deepfakes, deception and user experience. You've heard the horror stories: synthetic voices stealing identities, videos that rewrite reality. But what if this same technology could be used not to trick us, but to delight us? On today's show, I'm joined by AI veteran and Global VP of Retail for WWT, Adam Dumey, as well as AI and deepfake expert Chris Roberts, to explore the double-edged sword of deepfakes in the age of personalization, from AI's early days, locked behind proprietary walls, to today's mass-market accessibility. Adam takes us through the evolution, and Chris, well, he's here to remind us just how real the risks still are. But here's the twist: what if seeing your own face in online ads, or even in a dressing room mirror, wasn't creepy but compelling? Could synthetic media actually become your most powerful marketing tool? Stick around, because what you hear today might just reshape how you think about identity, influence and the future of AI-driven commerce. Chris, our first repeat guest, congratulations. You and your beard made it back for round two.
Speaker 2:Yeah, I appreciate it. I got let back in the studio again, and this time obviously thanks to Adam. Much appreciated sir.
Speaker 1:Yeah, and Adam, first time — welcome to the show. Thank you, thank you. We're going to be talking about deepfakes and AI and retail today, and we'll get into the meat of the conversation here in a bit. But, Adam, I'm just curious, from your perspective: you've been doing AI for more than a decade, from what I can tell, back to the IBM Watson days in a prior career of yours. What have you seen from that point to today in how AI has shifted — not only in the technological advancements, but in how the industry and the public look to consume it?
Speaker 3:Yeah, it's interesting. So back when I started, everything in AI was proprietary — forget the software, even the hardware; it had to run on this exact hardware — and the cost of that was so high it limited it to just a certain number of adopters across the world, so the adoption wasn't fantastic. The great thing about the past two years in particular is that all those challenges have been mitigated. Also, AI when I started was really driven at an enterprise level, and now it's at a consumer level, so the accessibility is much, much greater, and that unlocks a lot of different use cases and creativity. The presence, availability and awareness of what AI can do every day just continues to increase and deepen.
Speaker 1:Yeah, absolutely. You talk about some of those challenges being mitigated. One of the challenges not mitigated — perhaps even increasing — is deepfakes. Chris, level-set with us before we get into the retail components here: what are deepfakes, by your definition, and where are they at right now in terms of the landscape?
Speaker 2:So I think you've got a couple of different areas to look at. If you look at the stuff that we see a lot more of, it's the video deepfakes. So it's me taking a video of Adam giving, maybe, a keynote speech somewhere and changing the context of it completely. So instead of him giving a keynote — like I've done with a couple of folks inside here — he resigns from WWT and goes and raises koalas in Tasmania, and it looks real, it sniffs real, it smells real and all this other good stuff. So that's one. The audio deepfakes are what we're seeing a lot of in the attacks against corporations, which is, again, I change my voice characteristics — I can use my own words, but my voice characteristics mask, say, maybe the CFO's — and I get you to transfer money. And then obviously we have text, we have social media and all the other stuff that goes with it.
Speaker 1:Yeah, so that's the state of deepfakes. Deepfakes can get that scary rap — we hear about all the bad things that happen. But, Adam, you see opportunity here, specifically as it relates to retail.
Speaker 3:Yeah. If you think of retail, the holy grail is personalization: understanding what Brian wants, when Brian wants it, and how to serve it up to him. And to do that you need a lot of data, right? So once you have that data, the question is what do you do with it? Back 20 years ago, it was, OK, here's Brian's zip code, here are the attributes associated with it. Now we're moving to a world where we're not too far off from a deepfake being part of a campaign. So you can imagine — what's your favorite airline, let's say?
Speaker 1:I'm a Southwest frequent flyer.
Speaker 3:Perfect. Imagine you got served a Southwest ad that had your face and likeness in the preferred seat on a Southwest plane, landing you at a destination you particularly enjoy, with the excursions and the activities. That creates a very compelling emotional connection that should drive conversion. And so, to what Chris mentioned earlier, there's that ability of taking a synthetic image of someone with high fidelity but using it more for good, to have them visualize an experience. We're having conversations about that with customers right now. We're not quite there — there's still a data foundation across the retail lens that isn't quite in place — but we're not very far away.
Speaker 1:Chris, what are you seeing? You know, "deepfake for good" might be a little bit of a 180 view for some. Are you seeing that starting to take place?
Speaker 2:Yeah, absolutely. I mean, it's the same thing Adam and I have talked about; it's the same thing walking into a store. If you think about it, nowadays, when you walk into a store, typically you're going in for a reason — whys, wherefores or logic — and you've had an influence online through media or something else. But again, taking the idea from Adam: I walk into a 5.11 store, I walk into an REI or somewhere like that, and I see an image of me. We were working on something for RSA and a couple of other conferences where you walk up to literally a dark mirror and you see an image of yourself. What we were going to do — and have some fun with it — was change it to be a male version, a female version, put some ethnicity in there, have some fun with it, but also show all of their social media. So how much of this data makes this person up? Because obviously we've put so much out there and, to Adam's point, so much of it's being used now to basically profile you.
Speaker 3:Yeah, and just think of the data presence. In retail there's a physical and a digital component. Chris hit on the digital; on the physical side, imagine you go into a mall. The second you park, the second you go in a store, the second you go in a hallway, the second you interact with a product or hover in an aisle, that data is being captured. Now you layer that on top of the digital footprint, which traces everything from the second you turn on your computer — the website, the sequence, the information — so now there's this really rich picture, and that data, with some of the technology we're seeing in the deepfake space, is being used to extend the reach of what AI and data can do.
Speaker 3:So I think it's fascinating. But it's also interesting as you think about new technology: everyone's fearful, right? If you go back to the car, it was called the devil wagon for a while. Think of the printing press and the issues with the Catholic Church. There's a concern here, and I think it originates from just a lack of control, right?
Speaker 2:Yeah, it's lack of knowledge, lack of control, but I think the other part of it comes in as well: to your point with the car and all the other stuff, it was tangible — that's right — whereas all of our data is so intangible.
Speaker 3:And at the core of a deepfake, it is you, right? It is Chris Roberts, and losing the sense of ownership of that is a dangerous idea. I worked at an intelligence agency earlier in my career, and I received a call four years ago from someone who told me, "Adam, take down that voicemail on your mobile phone." It was just 18 seconds, but that was enough to take it and create a synthetic version of my voice. And it got me thinking: what's mine, right? Back then, four years ago, I lost my voice. Now I could lose everything else. So, fundamentally, I think there's something intrinsic about deepfakes that concerns and scares people, and I get it.
Speaker 2:But I think, like any technology, there's another side. A thing that adds to that, when you think about it: this is why we can't get rid of passwords. You know, no matter how much we talk about passwordless and everything else, we can't get rid of them, because if we lose them, we can easily replace them — just change the thing again, put different two-factors in place and everything else.
Speaker 3:But if we lose our identity, it's gone. The fingerprint goes, it's gone. We can't easily and readily replicate these things and get new ones. So when that company was breached, everyone was upset, but to your point, it was just a swab that's someplace in California. Seeing your image and your likeness, combined with a high-fidelity voice characteristic — that is threatening.
Speaker 1:Yeah. Well, Chris, I think that probably speaks to the need for securing the entire solution, to make sure those important data points are safe and sound from a consumer's perspective, so it can earn their trust.
Speaker 2:Yes, but — and that's where the "yes, but" comes in. I think that's where part of the challenge is, because on one hand you want to secure that data, but on the other hand, if I'm that retailer, obviously I'm going to maximize my investment by using it. A perfect example: I put a LinkedIn post out a couple of days ago. I'd gone onto a website and it said, hi, sign here to share your information with 924 of our partners. No more, no less. It literally had the number — it was in little yellow — 924 partners.
Speaker 2:That is, if I had said yes to that — or even if I said no — they're still going to share that data with so many different people. So I think that's part of the problem: I don't mind sharing something if I know that everybody is going to treat it the same way I intended when I shared it, or if it's going to get used for a good reason. But — and this is where it gets really interesting with the retail sector — I don't mind somebody telling me about something that maybe I like. Like, you know, these hoodies — I like them, I enjoy them. But if I go from a size large hoodie to a size extra-large hoodie, I don't want that information going to my healthcare provider: hey, Chris has put on a couple of extra pounds, you maybe want to do something about that. So that's where I think you have to be really, really careful with that data.
Speaker 3:Yeah, and to your question: obviously the data has to be secure, but we're having conversations with retailers now asking, do you need that data? We do a lot of work at Worldwide in the QSR space, and we're having a conversation with one QSR asking, do you really need all of this data? Some of it is old, some is outdated, some of it's third party. What's the data strategy you should invoke to get more relevant first-party data, and just the right amount? Because, again, for personalization you want the outcome, and historically that's meant getting as much data as possible. But when you have that data, your attack surface increases: you now have to store that data, which has a cost, and you have to secure it. So we're having conversations to start pivoting a little bit on the basis of what you actually need for that personalization.
Speaker 1:Well, you're talking a lot about a physical environment here in terms of personalization, and earlier you mentioned malls and AI in the same sentence — that was impressive, pairing perhaps the newest, most innovative technology with the state of malls these days. So kudos there. But are these retail deepfakes only for a physical store environment, or do they extend, you know, omnichannel, across everywhere a retailer wants to be?
Speaker 3:So it's everything, everywhere now. I mean, deepfakes go back to 2018, when everyone's favorite supermodel, Cara Delevingne, was, to my knowledge, part of the first-ever deepfake campaign. A European retailer was looking to get penetration across certain remote regions, so they leveraged her image and likeness — back then it was a voice component, to Chris's point about the emergence of audio. That campaign generated 100 million impressions, 40% of which were viral and free, and a 54% increase in orders. So this thing has been around, and it's really more of a digital manifestation nowadays.
Speaker 3:Again, the prevalence of data means that wherever that deepfake shows up, it just has higher fidelity. And we've done deepfakes before — Chris has — and it takes 30 minutes and $12. It's just super cheap to do, and it doesn't require an expert. That accessibility we talked about earlier, combined with the data and all these different channels — and, candidly, people's impressions of, and trust in, certain venues being so low — means the consumption and acceptance of this is at a level that I think is dangerous.
Speaker 2:To your point on the acceptance, think about it: for the last several years we've trained people — regular, normal people — with our phones. We've trained them to add extra filters. We've trained them to change voices. You go back to the days when we had little GPS systems — my old GPS used to have Monty Python on it. We can do that with Alexa, we can do that with all the Googles and the voices and all the other stuff. We can have them speak different languages, different phrases; we can put different faces on. So we've trained people to do this, and now, on one hand it's used — I wouldn't say for good, but for betterment. But unfortunately we're also seeing a huge undertone of it being used for bad, and people aren't able to discern the good from the bad these days.
Speaker 1:Yeah. Well, what can retailers, or any organization for that matter, do to better distinguish the good from the bad and protect against it?
Speaker 2:There's an element of education, but I think it's only a small element. You can only tell certain people so many times: hey, guess what, for the last 20, 30, 40 years we've been telling you this stuff, and now you can't believe anything. That doesn't work. And that's, I think, where a lot of what we're trying to do inside WWT comes in: working with organizations to help them better understand the signals coming in, help them better process them, put the sentiment analysis behind them and a whole bunch of other things — to help them and the consumers understand, or even make it so the consumers don't have to know it's happening behind the scenes, basically protecting them.
Speaker 3:And going back to your question — your intro about my history in AI — there's going to be a long runway before deepfakes are at a scalable position. I started in AI in 2013; it's 2025 and enterprises are still implementing chatbots. So AI has come a long way, but from a scalable, global enterprise perspective, the pace doesn't match what the consumer believes. As we think about deepfakes, to me it's a progression, and the progression of deepfakes through a retail lens is about hyper-personalization. To get there, there are other elements of personalization that will drive more acceptance: seeing on your phone, at the right time, when you're ready, an offer that's compelling. Eventually, once that starts to take more of a prevalence, all of a sudden you're accepting of using your data in a way that's permissible, and eventually that's going to lead to more intrusive, more personalized manifestations.
Speaker 2:Another good example: I came out of the aviation side of things before I came here. We were looking at using AI modeling not just inside the safety of the avionics, but also for the passenger experience — everything from, as the passenger walks up to the airport, recognizing it's that passenger and personalizing that experience, to, as the passenger gets onto the plane, how do I make sure I'm greeting them effectively? How do I make sure, as the screen comes down, they've got their favorites on — they've pulled their favorites from Apple or any of the other systems out there — and say, hey, this is your environment, you're here for an extended period of time, we've got you covered. But then you can also use it for the healthcare side of things as well.
Speaker 2:Not everybody likes flying; not everybody's used to flying. Maybe somebody's not feeling well. So if you've got a camera that's able to sense temperature or moisture or liveliness, can that system then talk to somebody in the cabin crew and say, hey, passenger so-and-so in this area is looking a little bit too hot, go check on them?
Speaker 1:Yeah. Well, we've mentioned QSRs — quick-serve restaurants — and we've mentioned outfitters. Where else might this apply across the retail landscape? Could everybody benefit from doing this once it's implemented at scale? Can we even think broader than retail?
Speaker 3:Sure. To answer your question, the good thing about it: any segment. Within retail, though, we're having some retailers look to leverage deepfakes more as a new revenue-generation tool. So you can imagine going to a movie theater, for example, and instead of ordering popcorn from Brian, there's a digital avatar that takes the order. The benefit from a revenue-generation perspective is that the avatar could be a local NBA player, or someone starring in the movie you're about to watch, right? And that will drive sponsorship funds they don't have right now. As we look across the landscape of retail, it doesn't have to be at a person level — it could be a character, right? Think of all the cartoons and animated films. It could be a local presence. All of these provide more compelling and localized experiences, and again, it doesn't just cut across a QSR or a fitness chain or a movie theater; it cuts across the broader retail umbrella, and then financial services and healthcare in the same way.
Speaker 2:I mean, we were down south a couple of weeks ago talking to one of the big-box — what do you call it — home improvement retailers. We were talking to a bunch of the folks down there. Imagine walking into those places with your own home plans or a couple of pictures. You're already able to see some of this, but imagine actually putting an AI architecture in place that builds it out for you, that makes those suggestions for you. You augment that with some of the reality glasses, and all of a sudden you can go: hey, I need A, B, C and D; here's the list, here's everything I want and here's how it looks. That ability — that's huge.
Speaker 1:So you're talking about how that's all there. Where are we from a readiness standpoint? We've talked a lot about data. Are retailers, or any of the clients we engage with, ready to put these things into action? And what's the progression for getting to the point where they can offer these to their clients?
Speaker 3:They're still struggling with stovepiped systems — disparate, localized data — so aggregating that and creating the required data foundation is underway. And, like I mentioned earlier, personalization on the digital side is that first step, with the deepfake piece coming eventually. But I do also think the consumer isn't ready, for the reasons we mentioned earlier. There's a high level of distrust and unfamiliarity, and also just confusion as to why I'm giving you my data to use. And all it takes is one bad actor, one inadvertent misstep. So right now retailers are using deepfakes in very sporadic ways, but more for fun. Think of the Super Bowl, right? Gronk, Avocados From Mexico — did you guys see this?
Speaker 1:Yeah, yeah, yeah.
Speaker 3:So he was a deepfake, and it was really well done, very controlled. You'd call a number and Gronk would come up and give you a script about how to make guacamole from avocados. So that's how they're doing it now: more for fun, more for brands, instead of a true one-on-one engagement. We're quite a ways away from that level.
Speaker 1:What other technical aspects might an organization have to think about to start putting these in place? We've talked about data already, but how do they integrate these systems into their current IT stack? What about where they're running these workloads, whether it's on-prem or in the cloud?
Speaker 3:Yeah, and that's one of the things we do amazingly well here: that whole architectural assessment of what your current state is. Oftentimes our customers in retail don't know that. They don't know the legacy systems that are still active; they don't know the data stores that are still not interconnected. So from an assessment perspective, that is how we approach it, and usually there are some pretty interesting, sometimes awkward findings. But it gets them to a position of knowledge where we can start identifying: one, is this the appropriate architecture, whether it's on-prem, in the cloud or some hybrid? And then, getting back to my earlier comment: is this the right data assortment to achieve your objectives? Is this the optimal set?
Speaker 2:I think the other thing to add on to that — and we're definitely seeing this within some of the telcos and carriers — is that they're willing to meet the consumer where the consumer is. That's right. There have been some fantastic conversations around that, specifically around protection. So many of them already have their own application stores and everything else; what they're looking to do in many cases is elevate and enhance that. So we're looking a lot of the time at solutions for how we can twin capabilities onto those systems but also have cloud wherever necessary. So yeah, some very good conversations.
Speaker 3:This episode is supported by Akamai's GuardiCore. Akamai's GuardiCore offers advanced micro-segmentation solutions to protect critical assets from cyber threats. Secure your enterprise with Akamai's innovative security platform.
Speaker 1:What about ROI? Everything with AI seems to always go back to ROI. Adam, do we have an understanding yet of what type of ROI these types of solutions could lead to?
Speaker 3:So we talked about our favorite supermodel earlier — that ROI was amazing. Your favorite supermodel, my favorite supermodel — it happened. But with regard to deepfakes, it's too nascent; there's no ROI for it yet. The ultimate ROI, again, is going to be that one-on-one conversion, and to get there we still have some of those prerequisites to check.
Speaker 2:Yeah. What we've now seen is, if we start looking at bolting in protections earlier in the call sequence — even before it gets to the call handler — we're starting to see huge gains. So there's some fantastic stuff in there.
Speaker 3:And that's just the cost of time. Oh yeah — and then, if it actually gets to Chris and you fall for it, the cost grows exponentially. On the retail side, we're working with several retailers on this idea of threat intelligence, and the ROI there is kind of like insurance. Why do you buy insurance? What's the ROI every year? It's nothing — but when it hits, it hits. So as you think about threat intelligence: understanding what people are saying about your brand across social media and the dark web, and how quickly those messages are rising to the surface. Is it a bot? Is it a human? Is it text-oriented? Is it a video? Is it a deepfake? If it's a deepfake, how long does it take you to identify it before it's out there, and what damage is your brand suffering during that period? So we very much view this as an insurance-like model. But as a pure ROI from a retail perspective, like personalization — still not quite there.
Speaker 1:So, understanding that a lot of organizations want that pathway to ROI — is it just start small, build momentum, keep going, be iterative, and eventually you'll get to a state where you're ready to deploy this at scale?
Speaker 3:Yeah, I think it's all about setting the foundations and getting those right. Like we talked about, it's about building a strong data foundation to propel you to do other things, so that's the first piece. And again, the on-ramp to deepfakes is more the digital side — something that's not as tangible, something you can just look at, consume and see the value of — then moving from that into some targeted deepfakes and eventually a broader-scale approach. Because once the deepfake engine is up and running, it's not going to be cheap, right? So from a retailer perspective you really want to make sure you're implementing it at the right time for the right use case. And, like all new technology, that's going to be a bit of a learning curve, a bit of trial and error.
Speaker 2:Yeah, and the nice thing about it is we've been able to sit down with a number of clients and literally go through workshops. I just came back from one up in Canada — we had 80 people, I think, and within one organization 40 or 50 attended the workshop — and it was fantastic, it was collaborative. We did three or four hours, collaborating on where they are, where they're going, where they want to be and how they're going to get there. And we talked about everything: the data side of it, the identity side of it, the protection side of it, all sorts. That's the nice thing about it — the conversations in that space are just so collaborative.
Speaker 3:And to your point about this being tangible: deepfakes really give retailers, and folks in other industries, something aspirational that they get — they understand it, right? They see an image on the screen that's there, and they can appreciate what the end state is, versus something that's a bit more abstract. So from an on-ramp and a learning perspective, the benefit is that it provides a North Star that shines very bright.
Speaker 1:Yeah. Adam, help me understand this a little bit better. If it's hyper-personalization and there's a deepfake of my likeness — say I'm walking into an outfitter and, Chris, to your example, I'm going from a large hoodie to an extra-large hoodie — how do I see that on the deepfake and, you know, really trust that it's going to be the right fit? Is that a latency thing, or just a data-ingestion thing?
Speaker 3:Well, first, in your example, it would usually be conveyed through a mobile app. You'd have the conversation with some entity that's representative of you. It's capturing not just your size; it probably has other indicators — how often you've been to the gym, what's your step count, where are you eating — so it has some high fidelity behind what it's saying. And then, once you actually move there, it can potentially show your outfit, or it can ask you questions about the fit. It could advise you to put your arms out and take some measurements of how the cloth folds and hangs. So a lot of it is going to be this interactive element. It's not going to be a static impression of a thing; it's going to be a thing that knows you very, very well, based on all the data it's collected.
Speaker 1:And will that thing, you think, follow me around — from brand to brand to airline to…
Speaker 3:It should, conceptually. I mean, think of all the challenges of the omniverse, right? The big concern about a successful omniverse is doing exactly what you're saying — but what happened? There are 50 omniverses. So that interconnection is going to be a problem. Eventually this deepfake thing is going to have to have widespread acceptance on the consumer side, not just the retailer side.
Speaker 1:Chris, you're laughing a little evil-ishly.
Speaker 2:Oh, 100%, because that's where my brain goes. But I love the idea of that. Because, again, if you think about it, we all carry phones around with us. We all carry, basically, our fingerprints, our digital fingerprints, with us. So as I walk from store to store to store, my phone has the ability, as long as I give it permission, to talk to the system and go, hey, this is Chris, this is who Chris is. And whether that turns up on a board, whether I see myself in something, whatever it might be, the interesting part is how much of that am I willing to hand over, and all of that side of it. But yeah, obviously there's room for all sorts of interesting areas to go through.
Speaker 1:Yeah, I think we can all understand the value of hyper-personalization for a retailer, adam. Are we doing any of this type of work right now and if so, even if it's in nascent stages, what are the lessons learned?
Speaker 3:So we are doing personalization across multiple retailers, and number one, what we're learning, again, is the importance and criticality of that data foundation. Number two, we are increasingly having conversations about that assortment of data and whether it's appropriate or not. And the third piece is that retailers are still struggling with how to message the utility of this data, right? In order to achieve a certain outcome on your behalf, I'm going to need X, Y and Z. So it's getting the trust. And although I love your take, I suspect that consumers' reluctance to give data through the years has kind of...

Speaker 2:Oh, it's changed. It's absolutely changed.
Speaker 2:I mean, you think about it, the breach world is a perfect example of this one. You go back almost 20 years, when one of the first breaches occurred. It happened, and the consumers wanted to tar and feather every single person that was there. It was terrible. Fast forward to now: number one, I don't know what the latest breach was. I have no flipping clue. Breaches have been normalized; they happen every day now. So at this point in time, I think people have also gotten to the point where they know they have to hand things over. They unknowingly do it.
Speaker 2:We're trained to. And I'll be honest, even on my side of things, skeptic that I am, if I could walk into somewhere and go, okay, I actually like that on me, rather than having to go through the flipping hassle of gathering it up, getting into the changing room, faffing around in the changing room... If I can literally walk in and go, how's that look? Oh, you know what, I like that. I'm done, I'm sold. Take it away. I'm willing to hand data over for that.
Speaker 2:And again, it comes back to a risk probability thing. If I trust that retailer... and this is back to the normal thing: I value certain retailers over others. I know who's going to sell my data. Same thing with any of the apps we put on our phones. Some apps sell every single thing they possibly can. Some are like, hey, we're going to be very careful with the data. So I think the other part of this is retailers are going to have to be very, very careful how they message out how they're going to use that data, who they're going to share it with, and how they'll make their best efforts to protect it.
Speaker 3:And that last part about protection is kind of the fourth bucket we're learning: the level of measures put in place may not be as rigorous as required. And so, again, at Worldwide we have a very strong cyber practice, and we also have relationships not only across the industry but also within global intelligence organizations. And so it's been very obvious that over the past 18 months or so, the number of attacks on marquee US companies has shot through the roof. And it's not necessarily just to collect that data, though it certainly is; it's also to do brand damage. You take a marquee US brand and you take it down. You take a marquee AI product and you have another product supersede it. There is something to be said about that on a political stage. And so the conversation we're having with our customers is about that security side, and we have had multiple conversations over the past three weeks where we found very concerning gaps, extremely concerning gaps, on the retail side that would absolutely impact operations but also really damage a brand.

Speaker 1:Where are the gaps?
Speaker 2:It's the basics. I can't tell you specifics, but it's some of the basics. It's like anything: if you walk into an organization and you ask them where their assets are, be they physical assets or digital assets, most organizations have an idea where all their assets are, but very, very few of them, unfortunately, could put their hand on the table and go, we know where everything is. That's just simply data sprawl, system sprawl, and the fact that maybe organizations haven't actually implemented everything as well as they should have done. So we've found a few holes in a few areas and given a few people some things to think about.
Speaker 1:So you mentioned the basics. What else can organizations do, knowing that this is likely, and probably inevitable, to come down the line? What are they doing to make sure this is a solution that provides value but is also secure?
Speaker 2:I think the biggest part is, you know, the nice thing about working here is we have that team that can come in and go, hey, what do you want to build? How do you want to build it? Now, as you're building it, you need to trust that data. You need to go, hey, I'm going to train the model to look at Adam and go, hey, this is Adam and this is Chris. That data has to be as immutable as possible.
Speaker 2:So now we get to data handling standards. Now we get to data safety and security. Now we get to: how is it immutable? Who has access to it? So we start talking about identity and access management and control.
Speaker 2:And once you have those conversations, organizations start to understand it: unless they have good data governance, handling, management and sanitization, they're not going to get the results they really want to see. So again, to Adam's point earlier, a lot of it comes down to data, and then it's just best practices. It's also the considerations for who built the model and what model it is. We talked about this off camera, about all the questions people should ask. And again, that's the nice thing about what we do: we can go in and be the bad guys that ask all the awkward questions. Hey, you've got this learning model. How is it learning? Where's it learning? Where's it pulling from? How often is it learning? When it makes a mistake and it puts my hoodie on Adam, or vice versa, how is it going to know it made a mistake, and how do I retrain it?
Speaker 3:And we've already established my favorite model is Cara Delevingne. Well, my favorite boxer is Mike Tyson, and he's famous for the quote, everyone has a plan until they get punched in the face. So no plan is foolproof. So talk about the event where a customer does have that vulnerability and is attacked. What are you seeing? What kind of capabilities are we showing here that we have for that recovery piece?
Speaker 2:We've got some. I mean, there's some really cool stuff that we're building. It's a ton of fun. I do and I don't like the digital twin word. It's got some good connotations, but it's also got some challenging ones. But I'm going to use it for what it is: it is the digital equivalent of me. And if you think about it, as a human being, I like pressing buttons. I want to click on the next email, I want to click on the next button, I want to download the next thing. But if I had my twin that did it for me, and it fell into the pit or it got its backside handed to it or whatever else it might be, I'm okay with that, because it hasn't impacted me. So we're building out some fantastic stuff at the moment, some really, really cool tech, where it makes those decisions. So you take it from a very reactive situation and you start getting very proactive and predictive, and the nice thing about the AI world is you can build some really, really good predictive modeling.
Speaker 1:Yeah, it's interesting that you mentioned digital twin. Are we talking digital twin and deep fakes in the same kind of ballpark here? Should we come up with a new name for deep fake, so it's not as necessarily scary when it's deep fake for good?
Speaker 2:Oh, good luck on that one. Yeah, it's like hackers. I am a hacker, and yet I do the best I can for good, but we tend to get blamed for a lot of things. You know, it's a tough one. At some point, somebody somewhere is definitely going to come up with a better word for it, because it's an individual thing. I think this is where it gets interesting: a lot of the learning models are across a population or an area or an organization, but what we're trying to build, and what we're doing some really fun things with in a couple of the retail areas, is very much an entity that lives on our device, that represents us, and it pulls from all those different data sources and all that information, and it can make very good predictions as to basically what we're going to do next.
Speaker 3:And words do matter. If you look at the legislation that's popular, over half of the states have some legislation on deepfakes, and if you look at the wording, you're starting to see a shift: it used to be consistently "deepfake," but now it's "artificially generated images" or "synthetically generated." So there's already, I'll say, some softening of that. But to his point, someone's going to come up with a killer word at some point. I'm not that guy, and it's not now.
Speaker 1:Yeah, and I know we only have a few more minutes left. But you mentioned, you know, policy. What else can we expect from a policy and regulation standpoint, understanding that all of this is moving so quickly?
Speaker 3:Yeah, I can take a stab at it. From a policy perspective, it really falls into three buckets. One is election interference, and so really notifying folks that this image was generated. We saw that in 2024, across both parties, at a state and local level, using deepfakes for their purposes, so that's a challenge. The other one is with regard to revenge pornography and illicit images of children, and so that is the focus. With regard to consumer protections, it's not really there yet. It's likely covered under some other fraud language, but there are other measures being taken, right, from deepfake watermarking, which is kind of sort of iffy.
Speaker 2:I think there are 110 or 120 or so things going through legislative review at the moment, covering deepfakes in all sorts of different areas, so it'll be interesting to see what shakes out: some consumer-facing, some data-facing. Now, obviously, with a change in leadership, we'll see what actually goes through and what doesn't. I think, no matter what legislation says, the best we can probably do is not just educate the organizations, but definitely do what we can to educate the consumers and go, hey, here's what's coming down the line. And this is what I love about doing things like this: we get to put this out and go, hey, listen to this.
Speaker 3:And there's also an element of the actual technology providers, right? Listing out the responsibilities of those that house this information or distribute it, that there is responsibility there. And so, just yesterday, I believe, the first lady put together a deepfake bill celebration in Washington; it was passed in the Senate, and President Trump indicated his support. It is specifically around those three use cases: election interference, child pornography and revenge pornography. The implementers or housers have a requirement to pull it off their sites within 48 hours. And so, similar to what we saw a couple of years ago with regard to content moderation, we're starting to see a similar practice rear up. To Chris's point, there are many folks and entities that are going to have a role in containing this. Yeah, big time.
Speaker 1:Any lingering questions that you think organizations, whether retailers or anybody across the board in any industry, should be asking themselves now as we head into this future of deepfakes and using them for good?
Speaker 2:Ooh, man, that's a broad one. I think, for me, the first one is one of the ones we typically ask: why? Why do you want to use it? Are you using it because you're chasing everybody else, or what are you trying to gain from it? Because we've seen this too often in technology: everybody jumps on the technology, but it doesn't really have a good business case, which is why it tends to fall by the wayside. Again, I think that's where it's fun sitting down and talking with the retailers: we get to do the workshops. We get to sit down, we get to dig into not just the tech but the business. What's driving it? Where are you going to be? And I think it's nice because that drives the conversation to a success. It drives it to deliverables and metrics, so you can actually sit there and go, hey, we actually did make a difference.
Speaker 3:If you look at just capital allocation within an organization, take gen AI: they didn't grow their IT budget exponentially. They had to pull from other parts of the organization, so a massive reallocation and reprioritization had to happen. It's the same thing with deepfakes or any other technology. Hence the why: that money is likely going to come from somewhere else, and so you have to make the case, and it's not always necessary for every use case.
Speaker 2:I think the other one, thinking about this as well: this is where, again, IT and security are going to have to come out of their silos. When you think about it, so much of our technology is behind the scenes, but this is interactive with everybody else. So we have to go talk to the sales and marketing teams. We have to go talk to legal and compliance. We have to have those conversations with the business to understand where they're going with it, which is going to force a set of communication and collaboration functions that we're not the best at doing, right?
Speaker 3:You add the like you mentioned. There's a technology, there's a business, there's a marketing, there's a crisis control, there's a product. I mean all of these, particularly for the fake thing. All have to work together to get to a point and then react should adverse events happen.
Speaker 2:Yeah, great point.
Speaker 1:Well, lots of work ahead of us. To the two of you, we're running out of time here, so thank you so much for joining. Adam, thanks for your time, and Chris, I look forward to having you on a third time for the Hatchery, maybe sometime soon. Thanks again, we appreciate it. Thank you, thanks for having us.
Speaker 1:Okay, that's a wrap on this episode of the AI Proving Ground podcast. Of course, big thanks to Adam and Chris for a conversation that challenged how we think about synthetic personalization and AI's evolving role in our lives. Here are three key takeaways from today's discussion. First, AI isn't just accelerating, it's becoming ambient. From personalized ads to interactive store displays, we're entering a new era where artificial intelligence blends seamlessly into everyday life, shaping decisions in ways we may not even notice. Second, the democratization of AI has opened new doors. What was once enterprise-only is now consumer-grade, and that shift is unlocking creative and commercial potential across sectors like retail. And third, technology isn't inherently good or bad; it's about how we use it. Deepfakes for deception are dangerous, but deepfakes for personalization? Well, that might be the next frontier in customer experience, giving brands the power to create moments that feel deeply individual and emotionally resonant. Thanks again for tuning in. If you found today's conversation insightful, don't forget to subscribe, leave a review and share with someone exploring the intersection of AI and innovation.
Speaker 1:This episode of the AI Proving Ground podcast was co-produced by Naz Baker, Cara Kuhn, Mallory Schaffran, Stephanie Hammond and Marissa Reed. Our audio and video engineer is John Knobloch, and my name is Brian Felt. We'll see you next time.