What's Up with Tech?
Tech Transformation with Evan Kirstel: a podcast exploring the latest trends and innovations in the tech industry and how businesses can leverage them for growth, diving into the world of B2B and sharing strategies, trends, and insights from industry leaders.
With over three decades in telecom and IT, I've mastered the art of transforming social media into a dynamic platform for audience engagement, community building, and establishing thought leadership. My approach isn't about personal brand promotion but about delivering educational and informative content to cultivate a sustainable, long-term business presence. I am the leading content creator in areas like Enterprise AI, UCaaS, CPaaS, CCaaS, Cloud, Telecom, 5G and more!
Agents With IDs: Trust In The Age Of AI
Interested in being a guest? Email us at admin@evankirstel.com
Imagine asking your AI to book the flight, pick the hotel, and reserve the car—and it just does it. That convenience also opens a massive security gap when agents ask for your passwords, impersonate you online, and leave no clean line between what you intended and what was executed. We sit down with Peter Horadan, CEO of Vouched, to map a safer path where digital identity, consent, and reputation tame the chaos without killing the magic.
Peter explains how mobile driver’s licenses bring cryptographic proof to everyday transactions, replacing fragile one-time codes and making phishing far harder. We dig into why physical IDs are increasingly easy to fake, how digitally signed credentials flip certainty in your favor, and what that means for ecommerce chargebacks, password resets, and high-value purchases. Then we pivot to Model Context Protocol, a fast-growing standard that lets websites speak “agent” natively, moving beyond clumsy screen-scraping into clean, auditable API calls.
The heart of the conversation is a pragmatic blueprint: redirect humans to grant permissions just like OAuth, scope what agents can do, log consent, and keep it revocable with one click. We talk about building reputation systems for agents so good actors rise and bad bots get blocked. Healthcare emerges as a powerful use case—patient access becomes smoother, and remote workforce fraud gets harder—while privacy stays intact thanks to standards that avoid phone-home tracking to the state. If your site has a login button, the future is knocking: agent-friendly APIs, digital ID checkpoints for risky actions, and clear rails that let assistants work for you, not against you.
Subscribe for more smart takes on AI security, digital identity, and the standards shaping how assistants act on our behalf. Share this with a colleague who wrangles logins or fights fraud, and leave a review to help others find the show.
More at https://linktr.ee/EvanKirstel
SPEAKER_00:Hey everybody. Fascinating and important topic today as we talk about identity and trust in the era of AI and agentic AI with Vouched. Peter, how are you?
SPEAKER_01:I'm great. Thanks for having me today.
SPEAKER_00:Well, thanks for being here. Really timely and important topic. But first, maybe introduce yourself: what's the big idea at Vouched?
SPEAKER_01:Sure. So I'm Peter Horadan, CEO of Vouched. Vouched helps people identify themselves online. When you're doing important things like opening a bank account or seeing your doctor, you need to prove conclusively who you are, and we help solve that problem. It's really a fascinating time to be working in the world of identity, because there are two big changes coming that are going to upend how every business thinks about identity. The first is digital driver's licenses, which are more than just taking your physical license and putting it on your phone; they actually have a lot more functionality, which we'll talk about. The second is agentic AI, where the AI actually starts doing work on your behalf, logging in as you. This creates a whole new set of identity challenges. So it's a really exciting time to be working on these problems.
SPEAKER_00:Indeed. And we're talking about not just the identity of people, but the identity of agents, right? So why do AI agents even need identity or reputation? What's the actual problem we're solving there?
SPEAKER_01:Yeah. Well, it's been interesting to see just how fast AI has developed. When ChatGPT and tools like it came out only a few years ago, and it's hard to realize it's been that short a time, they were terrific at doing research or analysis for you, but they didn't actually take action for you. So, for example, you could say to the AI, I'd like you to plan a great vacation in Hawaii for me for this weekend. It would know your preferences, and it would find a great hotel for you, a great beach, what flight you should fly on. And it would just tell you that: okay, here's your vacation. But now it's up to you to go book that vacation, to call the airline, to call the rental car company, to call the hotel and everything else. And you have hours of work on your plate to make all the preparations for this vacation. Well, what's happening now with agentic AI is the AI says, would you like me to book this for you? And then it actually logs in on your behalf to the airline and books your plane ticket. It logs in on your behalf to the rental car company and books your rental car, and so on with the restaurants and everything else. And the way that works today, and it's very easy to try: if you use ChatGPT, when you're entering your prompt, you click the little plus sign and choose agent mode. Then, if you say please book this flight, ChatGPT will launch a browser window right there inside your browser window. It's kind of amazing to see. You're in your browser and there's another, smaller browser window inside of it, and ChatGPT is automating that browser window. It will navigate to the airline, find the login dialog, and then say, okay, I'm going to give you the reins for a minute. I need you to log in with your username and password, and then I can continue. So you type your username and password, and then usually there's some second factor, like the six digits that get sent to your phone. You go get those six digits off your phone, enter them, and the AI says, oh, great, looks like you're logged in, I'll take it from here. And you can watch as it manipulates the UI of the website to find the flight and book the flight. It's really pretty amazing to watch, and it's stunningly useful. It's an amazing development that, for the first time, everyone can have their own administrative assistant that can do a bunch of the drudge work of life. But the problem, answering the question you asked about why we care about agentic identity, is that we just cannot have a world where the AI, number one, gets our username and password. You're typing that username and password into a window that's controlled by the AI, so the AI can certainly see your username and password. Now, I have no doubt that some of the tools we've mentioned, like ChatGPT, are trying their best to do a good job. However, it is not a security best practice to type your username and password into any third-party tool that isn't the website it belongs to. But the bigger problem is if you think about the website, the airline in this case: they now have a world where they can't tell the difference between actions that were performed by a human and actions that were performed by an AI. And so, for example, the human comes back and says, that's a fraudulent ticket, I didn't buy that.
Well, they have no way of knowing now: did the human buy that or did the AI buy that? It gets much worse if you talk about financial transfers and so on. And think about systems that we use at work. Let's say you work in sales and you want to be a better salesperson. You might use this feature and say, okay, I'm going to give the AI access to my Salesforce automation account. It's going to log in, look at all my prospects, and tell me the next best move for each prospect. This is terrific. However, the problem is that now the AI has access to all of your corporation's private data, such as your prospect list and how well you're doing in sales this quarter. If you're a public company, someone could go trade on that. It can take competitive information and so on. And a lot of us really don't spend much time thinking about the privacy of our conversations with AI. You feel like it's a private conversation, but it really isn't. As a perfect example, right now there's an outstanding subpoena: the New York Times has sued OpenAI, and OpenAI is forced to retain all current chats, including chats you've deleted, for possible future production to the New York Times in that lawsuit. I think most people don't realize this. So even if you're not involved in a lawsuit, someone you don't expect could wind up seeing your chat because of some other lawsuit. And so there are all kinds of problems here. First, there's this simple security problem of your username and password getting out into the wild to third parties. Second, there's no line between what the agent does and what you do. And third, the agent can see all your private data. These are problems we need to figure out.
SPEAKER_00:Wow, that's a lot to unpack. How big, in your opinion, is the risk of fake or malicious agents impersonating people or companies? Are we just at the beginning of this journey? Any idea of the impact in dollars or otherwise?
SPEAKER_01:Well, you're right. It really is the early days. Most people don't even know this feature exists, and I think there's a lot of work to do to figure out how this should really work. But the risk is enormous. I tell CISOs that we talk to that you should absolutely ban this practice inside your company's firewall. You need to communicate to all employees that using any third-party agentic AI that was not provided by the company in your work, and giving your work username and password to any third-party AI, is a significant offense. Nobody should do that, and the company needs to take this very seriously. Because I think a lot of CISOs don't realize that right now, today, there are a bunch of AIs that have your employees' usernames and passwords to all of your corporate systems. I guarantee it. And what kind of a security risk is that for your company? We've spent decades teaching users never to give away your username and password to any third party, and all of a sudden users are doing it. So the risks are, number one, even a well-intentioned AI can make mistakes. We've all seen hallucinations and so on. The AI could go in and accidentally delete all of your data. So that's one problem: even a well-intentioned AI might make mistakes. The second thing, though, is that we're training all of our users to think, hey, it's perfectly fine to give your username and password to an AI and let it do stuff for you. And by the way, it's very seductive. These are incredibly powerful tools, it's really useful, and it's great to get your time back. But what happens when a fraudulent AI comes out? You might be using a good tool today, and then tomorrow you find another tool that looks even better, but that turns out to be a fraudulent tool that has suckered you into thinking it will be just as careful with your data as a good AI is. And then it goes and does fraudulent stuff. For example, if you're shopping at an e-commerce vendor, an e-tailer, and you use an agent to go into your account and buy stuff, there's nothing that stops that agent from going back tonight and placing a bunch of fraudulent orders on your account with your credit card. And you might not even catch that until the merchandise has been shipped and delivered. So fraud could be pretty significant. And for the customer, the merchant is going to say, well, it was your username and your password that were used to place this order, and by the way, the six digits sent to your cell phone. So what do you mean you didn't place the order? And the merchant might start having a lot of problems with customers at scale, where they're getting a ton of fraudulent orders with very little recourse, because the customer is going to do a chargeback, the credit card company is going to approve the chargeback, and the retailer is out that money. So the risks are really significant. And then there's still the privacy risk, where anything that agent sees goes into the logs of the AI company. Even if you're not in a lawsuit, even if the AI company does nothing wrong, some third party could sue them and those things have to be produced. So today, at least, there's absolutely no expectation of privacy with anything you're doing with any AI agent. And that's a place where regulation really hasn't caught up to AI yet.
I think if you talk to a human lawyer, there's a strong expectation of privacy. That lawyer is expected to protect confidential information, even if the lawyer has to go to jail to do so. We've all seen that on TV: I will not give up my client's name or private information. So there's a privilege there. Same thing with your doctor, your therapist. There are certain professions where there's a strong expectation of privacy backed up by the law. And that's not the case when you use a software agent like a lawyer or like a doctor, which a lot of us do. I think we've all asked legal questions, we've all asked medical questions. There's no confidentiality to those things at all. And I think most people assume there is, because we're used to this with humans, but there's not. So this privacy issue is a place where regulation really needs to catch up as well.
SPEAKER_00:Indeed. Enter Vouched. Congratulations on the fundraising, by the way. Tell us about the genesis of the company and the journey so far to this notion of simplifying identity verification.
SPEAKER_01:Thank you, and thank you for the compliment on the raise. It's a terrific time for us. Our company was founded by John Baird several years ago. He was working in the online jewelry business, and he kept coming across the problem that he was shipping very high-dollar merchandise, and it was very hard to identify people online and figure out how to prevent fraud. So John created Vouched to do identity verification with a physical ID, and we do that today; we're great at it. The process is very simple. The person is asked to turn on their webcam, just like we have now. You hold up your ID to the webcam, you show both sides, you move around a little bit so we can tell you're a living person. And then we do all kinds of fraud checks: we know exactly what that ID should look like, we know what it looks like if it's been tampered with versus not, we know all the state security features. So we can get a lot from the camera to ensure that the ID hasn't been tampered with. We also verify against third-party databases, we make sure that the face matches the ID, and then we produce a risk score that this person is or isn't who they say they are. Fast forward to today, and there are two really exciting new developments coming. We serve some of the largest financial institutions and healthcare institutions in the world, and it's a great service because it helps people get access to life's most essential services. But there are big changes coming. The first is a digital ID. A digital ID in the US, we call it a mobile driver's license, and seven states are already issuing these. I think most people aren't aware of this yet. You know, you might have seen a sign at TSA.
SPEAKER_00:Yeah, sadly. But yes, exciting.
SPEAKER_01:Yeah, same. My home state of Nevada, unfortunately, not yet. I would love it; we're working on it. But California is issuing them, New York State is issuing them, West Virginia, Louisiana. So several states are issuing them, and 7% of the U.S. population who have a driver's license have a mobile form of the driver's license. The driver's license lives on your phone. And what happens there is, first of all, the state issues the ID and signs it cryptographically. So it is not possible to create an ID that is signed by the state unless you're the state. The first thing is there's very strong proof that this ID came from the state. Today, unfortunately, getting a fake physical ID costs about a hundred bucks. It's relatively easy; I think most college students know how to do this. You order it online, the ID comes in the mail, and it's really good. The holograms match, the barcode on the back scans. These IDs are extremely good. I've even heard stories of people who've forgotten they have a fake ID next to their real ID, and they have taken the fake ID out and handed it to TSA, and because it has their real name and everything kind of lined up, they've actually been able to get on the plane. So the problem is physical IDs are getting easier and easier to fake. For example, when you're checking out in a physical retail store and you order a laptop and take it to the counter, they look at your credit card and say, can I see a physical ID? You give them a physical ID, but it would only be 50 or 100 bucks to have a fake one, and there would be no protection from that at the store. And actually, people are using this to do a lot of negative things, including transferring property with what's called a synthetic ID. I'm not going to explain how to do this, but there's been a lot of theft that happens because it's so easy to create a fake physical ID. Now, a digital ID signed by the state, you cannot fake it. You cannot create a fake one, and you cannot tamper with it either, because once it's signed by the state, you can't change it. And producing it is really easy. Like you and I are in a browser session right now, recording this; the browser could just say, would you like to provide your digital ID, whether you're shopping online or buying a plane ticket or whatever it is. You say sure, and your phone comes alive. Now you have protection that nothing gets provided without your permission. Your phone comes alive and says, do you want to provide this ID? You must provide a biometric, either your thumbprint or your face print, a living biometric, to prove you're you, because we as an industry don't want somebody to steal your phone and therefore have your ID. Once you use it, the ID is then sent back through the browser to the provider, and it is absolute proof that you are who the state says you are. And so this is a sea-change revolution, because today, even when you show your physical ID, it's still a probability that you are who you say you are. We can get pretty close, but it's still not a perfect science. Once you have these digital IDs, it becomes an absolute certainty. And by the way, it's very easy.
You just go on your phone, you say yes, you put your thumbprint on, and you're done. I think that's easier than digging six digits out of your cell phone or your email. And so our prediction is, number one, that all password changes in the future, for any service that matters, are going to ask you to show your digital ID. The number one source of digital fraud today is phishing, where you get a fake email that appears to be from the site and is trying to take your password; you go to that site, and they ask you to change your password. In the future, I think phishing will be completely eliminated, because we'll be using digital IDs instead of sending six digits to your cell phone or your email, which, you know, it is possible for fraudsters to intercept. The other thing is that e-commerce will be changed by this. So that laptop purchase: if you are online and buying a laptop, now they might say, hey, could I see your ID, please? You send your digital ID, and that can't be spoofed. It's absolute certainty who's placing this order, which is going to be a major benefit for merchants and a great way to reduce order-origination fraud down the road.
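To make the cryptography concrete: the reason a digital ID cannot be faked or tampered with is that the issuing state signs it, and any verifier can check that signature locally against the state's published public key. Real mobile driver's licenses follow the ISO 18013-5 mDL standard and are far more involved than this; the sketch below is a toy illustration only, and every field and key name in it is hypothetical.

```python
# Toy illustration of a state-signed digital credential (not the real mDL format).
# Requires the `cryptography` package; all names and fields here are hypothetical.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- Issuance (done once, by the state DMV) ---
state_key = Ed25519PrivateKey.generate()        # the state's private signing key
state_pub = state_key.public_key()              # published for any verifier to use

credential = {"name": "Pat Example", "dob": "1990-01-01", "state": "CA"}
payload = json.dumps(credential, sort_keys=True).encode()
signature = state_key.sign(payload)             # only the state can produce this

# --- Presentation (later, at a website, checkout, or password reset) ---
def verify(cred: dict, sig: bytes) -> bool:
    """Check a credential against the state's public key. Verification is purely
    local: no phone-home, so the state never learns the ID was used."""
    data = json.dumps(cred, sort_keys=True).encode()
    try:
        state_pub.verify(sig, data)
        return True
    except InvalidSignature:
        return False

print(verify(credential, signature))                             # True: genuine and untampered
print(verify({**credential, "dob": "2005-01-01"}, signature))    # False: tampering detected
```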
SPEAKER_00:Wow, what a great unlock. Very exciting, interesting stuff. Let's go behind the scenes just a little bit for the techies and IT folks listening. You're rolling out your own MCP server, Model Context Protocol. How does it extend existing implementations, and how is it used day to day? Run us through the scenarios.
SPEAKER_01:You bet. So MCP is really interesting. It's a protocol released by Anthropic recently, and it was almost immediately adopted by the entire industry. Let's say you have a website that does something; let's just say you're an online retailer, and your website's UI is optimized for human beings. It's meant to be pretty and very interactive. You might hover over a menu and it slowly animates. There's a lot of work that goes into that UI to make it great for humans, so they want to buy things on your store. But a software agent showing up to buy something doesn't really care about any of that. It just wants to know what products you have for sale, what they cost, and when it can get them. It's just the facts, please. So MCP is a protocol that allows the website to expose its capabilities in a sort of AI-to-AI manner. What you do is rethink your website and say, well, rather than have this human UI, I'm just going to expose a REST API for the services you can do on my website, such as get a catalog of my products, get a product detail, put that product in your cart, do a checkout, whatever. That REST API is then exposed through an MCP server. Then an AI agent can come in through the MCP server, talk AI language, and access any of those services. So you rethink the functionality of your website to make it available in a very efficient way for an AI. But then you have another problem, which is: wait a minute, how is that AI going to log in through my MCP server? Because it's shopping on behalf of Peter, let's say. So is Peter just going to give his username and password to this AI agent? No, that sounds like a horrible idea. So what do we do? That's what we solved at Vouched. We actually saw this problem coming; a year ago, last November, we started talking about the need for know-your-agent in the future, and we released our Know Your Agent platform in May. This received a terrific response. Many companies are implementing it, and standards bodies are considering adopting it as a standard. But here's what happens: every website that has a login button or a sign-in button is going to go through this little progression. They're going to realize, wait a minute, all of a sudden there's a bunch of non-human actors on my website. I'm not talking about bots or web crawlers. This is a fundamentally different thing: it's an intelligence showing up. It's not just scraping the data off your site; it's an intelligence showing up trying to do something, but it's not human. So now I have these non-human intelligences showing up on my website trying to do stuff, and they're doing it using HTML, the most inefficient way possible. So I need to create, number one, a more efficient protocol, and therefore I'm going to adopt MCP. And then once I've started using MCP, now I'm asking, well, how is it going to log in? So what we've done is take a cue from the banking world, because banking faced this exact same problem in the early 2000s. Coincidentally, I wrote some of the first online banking software all the way back in the mid-90s when the internet was brand new. These gray hairs were there. There was a time before the internet and then after the internet.
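As a rough illustration of the MCP idea just described, here is a minimal sketch of a retailer exposing its services as agent-callable tools, using the Python MCP SDK's FastMCP helper (assuming the `mcp` package). The store, catalog, and tool names are invented for the example; a real retailer would back these tools with its existing REST services and, as the banking story that follows explains, would still have to solve login and consent.

```python
# Minimal sketch of an MCP server exposing retailer capabilities as agent-callable tools.
# Assumes the official Python MCP SDK (pip install mcp); the catalog data is made up.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("example-store")

CATALOG = {
    "sku-123": {"name": "14-inch laptop", "price_usd": 999, "in_stock": True},
    "sku-456": {"name": "USB-C dock", "price_usd": 149, "in_stock": False},
}

@mcp.tool()
def list_products() -> list[dict]:
    """Return the product catalog: just the facts an agent needs."""
    return [{"sku": sku, **item} for sku, item in CATALOG.items()]

@mcp.tool()
def get_product(sku: str) -> dict:
    """Return details for a single product."""
    return CATALOG.get(sku, {"error": "unknown sku"})

@mcp.tool()
def add_to_cart(sku: str, quantity: int = 1) -> dict:
    """Put an item in the cart. A real server would require an authorized,
    scoped session here instead of acting for an anonymous caller."""
    return {"sku": sku, "quantity": quantity, "status": "added"}

if __name__ == "__main__":
    mcp.run()   # serves the tools over stdio by default
```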
All the banks got online and would show you your transactions, but you couldn't easily download them in those days. So companies like Yodlee came along and said, well, we will screen-scrape these bank websites and give you your transactions in a file. They were very popular; millions of users started using them to get their transactions in a file they could upload to their accounting system or whatever, and those services really started taking off. And then banks noticed: hey, wait a minute. All of a sudden, this Yodlee company and others like it have the usernames and passwords of millions of our users, and that username and password can be used to transfer funds and everything else. This is an unacceptable security risk. This is not okay with us. So the industry invented something called OAuth2, which is a protocol many of us have probably experienced. If you've ever used a budgeting tool or a personal finance tool, or connected an accounting system to a bank, you go through an experience where the tool says, wait a minute, now I need to connect to your bank. You're then diverted to the bank's own website. So you're in the budgeting tool, the tool says, I'm going to send you to the bank, and all of a sudden the URL at the top is your bank's. You need to log into the bank, or if you're already logged in, you go straight to a page. And you get to a page that says this budgeting tool is trying to get into your account and would like to do these things: see your balance, transfer balances, and pay bills. First of all, do you trust this tool? And do you want to allow it to do some or all or none of those things? And you choose: yes, I trust it, and it can see balances but not pay bills. So then you click okay, you close it, you go back to the budgeting tool, and the budgeting tool says, great, I'm connected, I'm now going to access your stuff. This is different from giving your username and password to a screen-scraping service in some fundamental ways. Number one, we did not expose the username and password to a third party, so we're not trusting some third party to keep your username and password safe. Number two, there is an auditable, logged moment when the human showed up and authorized this tool. So rather than an experience where whatever tool logs in with the user's password whenever it wants, we now have a specific moment where the human came to my website, logged in, proved they were themselves, and then authorized this tool. I can audit that, I can log that. They also gave a fine-grained set of permissions, so you can choose to allow some things and not other things. With an airline, you might say, you can buy a ticket using my frequent flyer miles, but you can't gift my frequent flyer miles away to someone else. You might want to say that, because right now, if you just give away your username and password, it can do both. So there's fine-grained control. And the third thing is that this is revocable. The human can come back to the bank, without going through the budgeting tool, and say, turn off the budgeting tool, I no longer trust it. Or the bank itself, if the budgeting tool is involved in fraud, let's say, can just say, nope, that budgeting tool is out, and cancel it.
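The banking flow described here is the standard OAuth 2.0 authorization-code pattern: redirect the human to the bank, let the bank collect scoped consent on its own pages, and hand the tool a revocable token instead of a password. A rough sketch of the two key steps, with made-up endpoints, client IDs, and scope names:

```python
# Rough sketch of the OAuth 2.0 authorization-code flow a budgeting tool might use.
# Endpoints, client_id, and scope names are illustrative, not any real bank's API.
from urllib.parse import urlencode
import requests

AUTHORIZE_URL = "https://bank.example.com/oauth/authorize"
TOKEN_URL = "https://bank.example.com/oauth/token"
REDIRECT_URI = "https://budget.example.com/callback"

# Step 1: send the human to the bank with the permissions being requested.
# The bank shows its own consent page ("see balances but not pay bills") and logs the grant.
params = {
    "response_type": "code",
    "client_id": "budgeting-tool-123",
    "redirect_uri": REDIRECT_URI,
    "scope": "read:balances read:transactions",   # note: no "pay:bills" scope requested
    "state": "random-anti-csrf-value",
}
print("Send the user to:", f"{AUTHORIZE_URL}?{urlencode(params)}")

# Step 2: after the user approves, the bank redirects back with a one-time code,
# which the tool exchanges for a token. The password never touches the tool, and
# the bank (or the user) can revoke the token later without a password change.
def exchange_code(code: str) -> dict:
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": REDIRECT_URI,
        "client_id": "budgeting-tool-123",
        "client_secret": "kept-on-the-tool-server",
    })
    return resp.json()   # e.g. {"access_token": "...", "scope": "read:balances ..."}
```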
So it's profoundly different from giving out your username and password. We need the same thing for agents. It works great for banks; a lot of us have been through it, and it's a trusted, proven way to do things. Now we need it for every website, and that's effectively what MCPI does. So now, when you are using an AI agent and you say, go buy that plane ticket for me, the agent says, great, I'm going to go to the airline. Oh, hey, looks like the airline has MCP. No problem, I'm just going to talk very efficiently to them. Then there's an MCPI protocol conversation going back and forth between the AI agent and the airline, where the airline is saying: you have to send the human to my website in order to authorize you. And first of all, what is your ID? You, as an agentic AI, need to have a unique identity that I can use to track your reputation. If you behave well, I'll let you in again. If you don't behave well, you're never coming back. There's going to have to be reputation for AI agents, because there are going to be a lot of good ones and a lot of horrible ones. We learned this from email. There's sort of nothing new here: when we invented email, anyone could send an email, and it turns out there are a lot of good email senders and a lot of bad email senders. Decades into having email, we're still dealing with spam as a major issue and a major fraud channel. So we have to have a way to tell the good AI from the bad AI, and authors of good AI need to be incented to have their AI behave well and keep a good reputation, with consequences if their AI gets a bad reputation. So the first part of the MCPI protocol is: what is the identity of your AI? Then we go look and say, do I even want to allow you in? Do you have a reputation? Is it good enough? If so, I'll proceed with you. Then we say, okay, you need to send your human to this URL, which is on my site, and the human can then authorize you. So the AI, in the AI window with the human, says, oh, you have to go to this URL to allow me to work with this airline. The human clicks the URL, up comes the airline, and you can verify that the URL is the airline's URL. It's just like that banking page. It says, this AI is saying that you want it to work in your account. Here are the twenty things an AI can do for you. Which of these, if any, do you want to allow? You click OK. It says, and you can come back later and change this or turn it off if you want to. Then we come back to the AI, and the AI says, okay, I have permission and I can go forward and do this. So that's what we've been building at Vouched, and it's deployed in a number of MCP servers. These are very interesting days, because really every developer is facing this question for the first time. This problem did not exist maybe three months ago, before agentic AIs came out. So we think there needs to be a completely new framework for how agentic identity works if AIs are going to work on behalf of humans.
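The MCPI specification itself is published by Vouched; the sketch below is not that spec, just an illustrative model of the steps described above, with invented names, thresholds, and data shapes: the agent identifies itself, the site checks its reputation, the human is sent to the site's own URL to grant fine-grained permissions, the grant is logged, and it stays revocable.

```python
# Illustrative sketch of a know-your-agent flow. This is NOT the published MCPI
# specification; every name, threshold, and data shape here is invented to show
# the shape of the flow: agent identity, reputation check, logged human consent
# with fine-grained scopes, and one-click revocation.
from dataclasses import dataclass, field
from uuid import uuid4

@dataclass
class ConsentGrant:
    agent_id: str
    user_id: str
    scopes: set[str]                    # e.g. {"book_ticket"} but not {"transfer_miles"}
    grant_id: str = field(default_factory=lambda: str(uuid4()))
    revoked: bool = False

GRANTS: dict[str, ConsentGrant] = {}                              # the site's own audit log
REPUTATION = {"agent-good-001": 0.97, "agent-shady-002": 0.12}    # hypothetical scores

def handle_agent_hello(agent_id: str) -> dict:
    """Steps 1-2: the agent states its identity; the site decides whether to talk at all."""
    if REPUTATION.get(agent_id, 0.0) < 0.8:
        return {"status": "rejected", "reason": "insufficient reputation"}
    # Step 3: the human must be sent to the site's own consent URL, never the agent's window.
    return {"status": "consent_required",
            "consent_url": f"https://airline.example.com/consent?agent={agent_id}"}

def record_consent(agent_id: str, user_id: str, approved_scopes: set[str]) -> ConsentGrant:
    """Step 4: the human approved specific scopes on the site; log an auditable grant."""
    grant = ConsentGrant(agent_id, user_id, approved_scopes)
    GRANTS[grant.grant_id] = grant
    return grant

def agent_may(grant_id: str, action: str) -> bool:
    """Step 5: every later agent call is checked against the grant, which stays revocable."""
    grant = GRANTS.get(grant_id)
    return grant is not None and not grant.revoked and action in grant.scopes

def revoke(grant_id: str) -> None:
    """One click from the human (or the site itself) shuts the agent out."""
    if grant_id in GRANTS:
        GRANTS[grant_id].revoked = True
```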
SPEAKER_00:Brilliant approach. And you've made this spec open to the public, I believe, or open source?
SPEAKER_01:We did. The MCPI specification is our contribution to the community. It's available freely, and anyone can use it. Several standards bodies are looking at adopting it as a standard, which we're very flattered by.
SPEAKER_00:Brilliant. I'm almost reluctant to ask you what's next, since you're already leaning so far into the future, but how do you see identity and reputation shaping up over the next year or two? What is your outlook, your roadmap, as it were?
SPEAKER_01:I think it's going to be fascinating to watch. I think these services are going to become very popular, because none of us likes doing the drudge work of life, and we are now on the cusp of AIs being able to do that for us. So I think it's going to be incredibly popular. I think that all websites today that have a login button are going to have to rethink their security practices, because they were all designed for the days when it was a human coming in and using my site. Now we have a world where it's an AI coming in, working on behalf of the human, and we have to reevaluate how I even expose my website. I've designed it for humans; what do I need to do to design it successfully for AIs? I'll give you a simple example. On a retail site, an AI probably wants to know the last 10 or 20 products this human looked at, so it can use those in its recommendations. A human usually doesn't want to see that; there are a few sites that show what you looked at last, but most e-commerce sites don't. So there's a difference in the information that the AI needs, and there's a difference in the way you want to think about security, as we've just spent our time talking about. And it's going to be fascinating to see where this goes, because I think we're going to see a whole new set of ecosystems come around, including reputation systems. If you're a consumer thinking about trusting your AI with your most private information, access to your bank account or budgeting or any of your legal information, whatever's private to you, you're asking: can I really trust this AI? It looks appealing, what it does looks really good, but are other people reporting that this AI acts in their best interest? Or has this AI been doing bad things? We've got to have a place where we can go to see the reputation of AIs. So this is all going to get built out really quickly, and it's going to be fascinating to watch it develop.
SPEAKER_00:And in every industry, it looks like. I'm on your website, and you have an integration coming soon with Epic MyChart. That could be a huge unlock for the healthcare industry, which is probably the most important place this technology could make a difference. Tell us more. When should that be available?
SPEAKER_01:Oh, we're working with our customers now on that. We're very excited about that integration with Epic, and we have many other integrations in the healthcare area; Vouched is the leader here. And interestingly, in the healthcare space, companies need to think not just about know-your-patient, but also know-your-employee. Because today, in remote work, it is very easy to create a false identity, come in and start working at a job, have access to very private information or do other bad things to that company, and then just disappear. When you hire a remote employee, a lot of companies' onboarding process might look like: hey, send me a photocopy of your ID. And there are sites on the internet where it costs only $2 to get a fake photocopied ID. It costs $50 to $100 to get a physical one, but the photocopied one is easier; it's just a picture. So you can get this fake photocopied ID with whatever picture and whatever information you want, and you can choose whatever state you want. People use that to create a synthetic identity where some of the information is real, but the face is the fraudster's face, so the face matches. You then show up, you start working at this job, you harvest a bunch of people's personal healthcare information, and then you completely disappear, and nobody has any idea who you really were. So this is a very concerning type of fraud, this employee fraud in a remote context. Any company that's relying on "send me a photocopy or scan of your ID" is very vulnerable to it. And that's part of why we're so excited about digital IDs. They're very easy to produce, and they are not forgeable; you cannot create a fake identity with them. Now, it's a lot of work to support these IDs. You have to support each state individually for the most part, and different wallet types and everything else. So Vouched built a platform to make that extremely easy, because we're very excited about the potential of digital IDs to reduce fraud and just make life easier for all of us. I predict that within 10 years, we will never again send six digits to your cell phone, because it's just going to be more secure and easier for everyone to say, show your digital ID, please. And by the way, a lot of people have a concern that every time you use your digital ID, there's a phone-home back to the government, so the government will be tracking you. That is not the case. The W3C and other standards bodies were very smart to build into the standard that these protocols all work without any phone-home. There's enough cryptographic proof that you are who you say you are in the ID itself that the ID can be used without a report back to the state. Just like when you use your physical ID: if you're at a ballgame and you want a beer and you show that you're over 21, the state doesn't get notified, obviously. The same thing is true with the digital ID. So a lot of people have a tracking concern, but I don't think it's a legitimate one, because these protocols all specify that the state will not be notified when you use your ID.
SPEAKER_00:Well, good to know. Light at the end of the tunnel. Congratulations on all the success, and onwards and upwards. We're rooting for you.
SPEAKER_01:Thank you. It's really exciting to work on. If we can help you or anyone listening, or if you'd like to work with us, we'd love to talk to you.
SPEAKER_00:Thanks so much, and thanks for joining. Thanks, everyone, for listening and watching. Be sure to check out our TV show at techimpact.tv, now on Bloomberg and Fox Business. Thanks, Peter. Thanks, everyone.
SPEAKER_01:Thank you.