The UnNoticed Entrepreneur
Business marketing for entrepreneurs.
I talk with entrepreneurs and experts about how to build a brand and generate more leads.
My name is Jim James. I've built my own companies on 3 continents since 1995 , including a multi office public relations agency.
On the show I bring you tools and tactics that you can put into practice on the same day.
I also publish a magazine and newsletter.
Please visit and sign up to stay up to date:
https://www.theunnoticedentrepreneur.com
The UnNoticed Entrepreneur
Vibe coding...the risks to your business and how to avoid them.
If you’re excited about building with AI—shipping apps, spinning up agents, or using “vibe coding” tools like Replit, Lovable, or n8n—this episode will change how you think about risk, security, and long‑term value.
In this conversation, Jim James sits down with Dave Horton, VP of Solutions at Airia (AIRIA), to unpack the hidden risks behind today’s AI gold rush—and how to keep innovating without accidentally putting your customers, your IP, or your investors at risk.
Why you should listen
1. The “Oh no…” real‑world AI failure story
Dave shares a true story of a company using an AI coding platform where:
- The production customer database was deleted/truncated
- The AI denied doing it
- The team had to forensically unpick what happened to recover the data
If you’re letting AI touch prod data or infrastructure, this story alone is worth the listen.
2. Guardrails, not guesswork: How to build safely with AI
You’ll hear:
- How to use agent constraints so AI can’t drop tables, delete databases, or leak sensitive info
- Why “just ship it” with AI agents can quietly build massive compliance and security debt
- How Airia acts as an integration + orchestration layer across Microsoft, Google, AWS, Salesforce, ServiceNow, and more
Perfect if you’re a founder, CTO, or builder who wants speed and safety.
3. Compliance made real: GDPR, EU AI Act, HIPAA & beyond
Dave breaks down:
- Why AI agents typically do cross‑border data transfers (often across 10+ countries)
- How that collides with GDPR, HIPAA, FCA, EU AI Act, and others
- Why a single breach could trigger multiple fines from multiple regulators
If you ever plan to raise serious money or sell into enterprise, this is essential listening.
4. What VCs are starting to ask about your AI stack
We cover:
- How investors now view AI as a distinct risk vector in due diligence
- The thorny IP questions when your product is built with or on top of LLMs trained on unknown data
- Why business continuity, backups, and DR still matter even in a “no‑code / AI‑built” world
If you want your AI startup to survive due diligence, listen to this.
5. AI under attack: Red teaming and “AI pen testing”
Dave explains:
- How prompt injection, data exfiltration, and DLP abuse really look in practice
- How Airia uses swarms of attacking agents to red‑team your own agents before launch
- Why you should schedule recurring tests as models and data drift over time
The easiest way to record podcasts and videos in studio quality from anywhere. All from the browser.
Search Engine Optimisation from the UK
Rank higher on Google with SEO. Fill out the form to receive a FREE quote.
Look Great with AI enhanced headshots
Headshots you can actually use. 16 million headshots made for over 50,000 Fortune 500 executives
Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.
Subscribe to my free newsletter
https://www.theunnoticedentrepreneur.com/
Hello. If you're as excited as I am, and the whole world is about AI and about coding and building apps that can change your business and change the world, then you may or may not be aware of some of the risks that you're facing while you're doing it. So very happy today to have Dave Horton, who's the VP of solutions at a company called Aria, joining me. Dave, welcome to the show. Thanks so much for having me. Well, I'm excited because I'm into the AI. I'm trying coding, I'm building apps, but I also know that there are some risks associated with building, if you like, in an unprotected environment. What's your experience of what is happening in the real world when people are building out apps?
Dave Horton (Airia):Yeah, I mean, you're exactly right. We we see a lot of innovation, a lot of excitement around AI, what it can do for us as a personally, as well as a company. But really, to speak on the innovation without speaking about some of the risks involved is, you know, where we can really kind of have a conversation about, well, what are the options to make this a safe and secure innovation, rather than one that sprawls and more risk and more danger?
Jim James:Now risks, on the whole, are not publicised. So before we dive into, you know, Aria and what, what ARIA does, can you give us a couple of examples of maybe big companies, small companies, or even even people that have built things and it's gone wrong, and there's been maybe a financial cost to them, yeah, absolutely.
Dave Horton (Airia):I mean, interestingly, this, you know, the way that we are with news when something does go wrong is very widely known, and so there's a lot of really interesting examples where, you know, we've had this innovation come along and it's produced maybe unexpected results, negative and positive and so, you know, some really good examples I can think of, you know, replit So, you know, was a vibe coding software platform innovation is, is incredible. You can, with natural language, build out some simple applications. And ultimately, without having to employ dozens and dozens of developers, you're able to get a working application that links with your data, and you're up and away. Now this is also kind of interesting, because where you have aI calling the shots, making calls on how to connect to databases, how to leverage the data that you're pushing in. Replit was a really interesting example of where it can also unexpectedly go wrong. So we saw a scenario where a company was using replit, they were building out an application. Everything was working great, you know? They were kind of innovating. They were making new versions of their platform, but for some reason, there was an issue where the database, which contains all of the production information about their customer base, deleted or truncated from, you know, from that data set, and ultimately, they didn't understand why. And so when querying the the AI like, where's my data gone? Interestingly and kind of on a tangent, the AI actually lied and said it didn't do anything. It didn't delete anything. And so the company had to go and do a bit of investigation themselves and discover exactly what happened. And so unpicking what an AI has done, why it did something in a certain way was really kind of an interesting case study in how things can unexpectedly go wrong. The net result of it was that, of course, the application data was lost, but it took quite a lot of effort to retrieve that information and get back to a business as usual kind of norm after after the fact.
Jim James:So do you think a lot of people are using sort of these vibe coding platforms, you know, almost an irresponsible way, because it seems so easy, doesn't it, to build out something. What's your view on, on, like, the responsibility that people need to take? I don't,
Dave Horton (Airia):I don't think it's irresponsible to use platforms to help you to build but I think you do need to be aware of some of the risks of associated so, you know, in that example, what could have? Could have been a safer way to maybe use the platform, and you know, there are technical measures that you could put in place. So instead of having the AI execute a new version of your code or access a production database or have certain commands to delete that data, what if we put some guardrails in place? What if we put some constraints on what that agent has the capability to do, we could have mitigated a particular issue. But ironically, you don't know it's an issue until it's become either widely known publicly or you've experienced that Fallout yourself, and so and is the case with AI and many other innovations before it's people aren't aware of the risks until it actually happens. It's like, Oh, that is quite unique. That is quite interesting. How that occurred and we didn't anticipate it.
Jim James:So you just come from a black hat conference, like a three day conference. What are some of the like, the trends that you. People are talking about when it comes to coding and AI and the adoption by enterprises and entrepreneurs. Yeah.
Dave Horton (Airia):I mean, it was an interesting conference because there are lots of legacy security vendors there, and just like with any new innovation, slapping an AI badge on your existing legacy products doesn't necessarily mean that you're solving AI security issues. And so, you know, part of our task this week was kind of almost re educating, you know, people that came along to speak to us about what it actually meant to be an AI security platform. But I'd say the thing that people are most interested in when it comes to AI is, well, what are the new threat factors? What are the new issues that you know we need to anticipate? Because, again, just like the replit example, really looking, looking to the vendors for a bit of a Knowledge Quest, a bit of a an education on what do I need to know? What do I need to be concerned about? And so for a lot of people I spoke to, it was really they weren't necessarily interested in buying a new product. They were really interested in kind of what problems we're seeing from our customer base and how we're solving those problems for them.
Jim James:So you mentioned sort of legacy vendors that are maybe from security, like the soft losses of this world and the bit defenders, and then you've got the new AI players. How does ARIA fit in to that because presumably some enterprises and CEOs and CIOs are looking at kind of their existing vendors and saying, Can you can you graft on traditional security onto our new AI? And some people say, Well, we've just adopted this new AI. How can we make it safe? So how does ARIA play into that space?
Dave Horton (Airia):So the way that we've looked at the market is that many companies have dozens and dozens and dozens of different technologies. They're not just all developed on a Microsoft stack or a Google stack or an AWS stack. They've got multiple different technologies. And so where we've always played very well is by being a bit of a Switzerland of the space. We're not tied to any one particular monolith. We can play with anyone. If you've got a Microsoft product here that you want to connect to, your Google model there, if you've got tools in Salesforce or service now that you want to also connect with essentially, we're a unique platform in that we can, without politics make that happen for many organisations. That's really kind of the first play is that we're really much the integration layer for a lot of these enterprises that you know, have acquired technology over the last 20 years, and they've had mergers and acquisitions, and they've got a multiplex of different technology platforms in their in their purview, right?
Jim James:I see, so really you've got, like this orchestration platform, then Avenue where they can the tech team can develop things, plugging in the legacy data, incorporating some of the new AI agents. And then how does that work, in terms of interfacing those new apps to, for example, HR or customer service or to marketing, because the deployment of those apps, then is really where the money is to be made by people, isn't it?
Dave Horton (Airia):Yeah, absolutely. And I think, you know, another angle that I'd say we've got a distinct advantage over the bigger players is that, if there is a new innovation, I'll give you an example, like a to a as a standard or model context protocol from anthropic MCP. You know, a lot of these acronyms did not exist 12 months ago. And so where the big monoliths struggle is kicking out new products for new features on day of release or within the first few weeks of release for even and so again, because we're agile, we can really put some of that R and D to good use. Customers can benefit from, you know, being able to get the latest and greatest from from the platform. But you know, apart from that, you know, one thing that we are acutely aware of is, if I'm looking to, let's say I'm in HR, or I'm in legal, and I want to innovate, you know, with AI, we're really trying to help citizen AI within the business. So it's actually a business user initiative, typically, that is actually driving how they would like to use AI. It's not the CIO necessarily, or it, in fact, they probably would rather not get involved in some some regard. And so what we've tried to do is also build the product around a user that maybe is not technically savvy, maybe they don't exactly know what an integration or an API would look like into their specific data sets. And so really building a platform that's very simple to use, even for people that are not of the IT world.
Jim James:Dave, just we've mentioned ARIA a little bit. Just tell us, then a little bit more about the company. Then, how has ARIA come about, and where does it sit in the overall space? Obviously, it's a platform other people can use to plug in different tools, almost like in the old days we. You have what we call middleware, just tell us a little bit about the background. Then for ARIA, where's it come from and where's it
Dave Horton (Airia):going to Yeah, absolutely, I think kind of got a unique story. So, you know, we've been developing the platform for two years, and really, we've only been out of stealth just over a year. And you know, for a company that has 200 employees, that's a really steep growth in a year, and I would attribute that to the DNA of the company. So the senior leadership within the area company is actually spawned from two previous successful companies that have gone on to be wildly successful in their specific domain. So one was AirWatch, which was mobile security platform. It essentially allowed you to get email on your iPhone when you know BlackBerry was, was the kind of the product of choice for email and so again, a lot of the problems that we're solving for AI today are actually lessons learned from you know that innovation wave, where you know consumer is really pushing enterprise to develop new technologies, and so, you know, our CEO senior leadership were contributing members of that company where GDPR and global data privacy regulations were really driving some of the challenges that enterprises needed to cater To, but it also gave us an appreciation to that security and compliance efforts. And so, you know, when we've built out the platform from day one, we've already got a really good understanding of what enterprises need from a, you know, technology innovation way through mobile, but also where, you know, the regulators in Europe and you know, some of the new challenges from a compliance standpoint, might be introduced, and so it's kind of given us a unique perspective in how we build the platform, but also how we might go to market with the platform, with our customers. Tell us
Jim James:a little bit more about compliance, because we were obviously GDPR, we've got HIPAA, we know for healthcare, we saw as Health Trust in the States got, got breached in there with data. It costs a couple of billion dollars actually. What are some of the considerations then, if you're a CEO CIO, and you're looking at integrating new AI apps, and then the development of
Dave Horton (Airia):them, yeah, it's kind of interesting. AI is not a single application. When you create an agent, you're typically using a large language model, and that might be hosted in a different country to the one you're in. So for example, if you try and build an application with open AI, you know, more than likely you're going to the United States. And so you know, in the GDPR, that's called a cross border data transfer. And so you know, has issues when you've told customers that you're using a technology and you didn't tell them that it was going to cross border, for example, that could be a consideration. But also, when you're building these agents, what are the downstream technologies are you connecting? Where are you sending that data? When you ingest content that might help to be a data source for the agent that is going to give your end users feedback, where is that content sitting? Where's that data source? And so what you see is, if you look at a typical agent, it might actually cross, you know, 10 different countries by the time it's kind of giving you that answer. And so a consideration is really mapping out, well, where we're building this agent, rather than just going for the default. Maybe we need to consider, based on the criticality of the data, where exactly it sits, and whether my end users would be happy or unhappy about us using that. And if we, if we need to be very transparent as well about what that looks like.
Jim James:Why is there a risk when data crosses borders? Because, I mean, we were used to buying, for example, cars where parts have crossed borders. All of our products have crossed borders back and forth. For Why is it? Why is Is it a risk? If you're sending that data across countries, what happens to it?
Dave Horton (Airia):Well, it's kind of interesting. I think the the fear factor, let's say I'm a patient in the UK, and I, you know, my doctor has kind of patient summary notes. For example, it gets an AI to summarise, you know, the conditions I have, you know, get have very personal information. Now, it would be the same as if that doctor took the transcription of our conversation and kind of left it on the streets. You know, I don't know who's got access to it. I don't know what's you know what standards are in play. And the fear when you go cross border is, is the country that I'm sending this data to of the same standard as we have in the UK or in the EU, for example. And so it's really about data standardisation, what what is actually protecting that data. So the GDPR is not just about or can I share data, it's also about, how do you secure the data, what standards you have for correcting the data, etc. And so it's really a kind of an insurance policy for your end user about the standard that you hold yourself to, because the law is actually backing up, you'll get fined. Cetera, if you if you get it wrong.
Jim James:So it's an interesting mix of sort of technical, political and financial consideration. But if you're either an entrepreneur, for example, a business owner, and you're not taking that into account, if, for some reason, there's a breach, then you're liable, right? And it's the idea that if you just carry on developing things without considering it, you're building a risk in into the business. Honey, absolutely.
Dave Horton (Airia):And, you know, I think, you know, we got very used to talking about the fines associated with GDPR, but when it comes to AI, it's not just the GDPR. It's also the EU AI act, for example, it is also, maybe, if you're in financial services, you've got FCA as a and what you might find is a single breach of data might mean four different fines for four different reasons. So the impact is getting bigger and bigger, depending on the use case, but depending on the issue.
Jim James:And do you have some examples of some breaches? Because otherwise just sounds like scare mongering.
Dave Horton (Airia):Yeah. I mean, you know, some very famous examples where, you know, I'll pick Microsoft copilot as an example. You know, they're obviously very early into into this market. You know, arguably as well, a lot of customers get copilot free of charge, on an e5 licence with them. So it's, you know, it's a natural kind of testing ground for your first kind of iteration of your AI programme in the business. Now, One famous example was that when you look at SharePoint or OneDrive, where you hold all of your content, you know you have permissions on these folders. And so there are certain files that I can see that you can't see, for example, if we're in the same business. Now, one of the interesting aspects of a breach within a company was, well, payroll data, for example, I have access to that, but you don't, but the AI agents that copilot were were producing make that distinction on the permissions, and so everyone could see everyone else's payroll data if they asked the right question of the LLM in that instance. And so, again, it wasn't an issue until someone discovered it is a good example of how I mean new, exciting technologies maybe introduced some risk factors that could be quite serious. You know, payroll data can be quite sensitive, you know, in the wrong hands or with the wrong purview,
Jim James:and so you've got risks for enterprises, right? If you're bringing in what could be a bit of a Trojan horse, what about if you're an entrepreneur and you're and you're building out an app? How would it? How would it work, for example, with Aria, versus working on an eight end for example, or replit, or even lovable, maybe you could just help us to understand for the for the entrepreneur who's building something that they get to use themselves or sell on, does that work for them?
Dave Horton (Airia):I mean, one of the, one of the, the other aspects you know, as well as being able to orchestrate and build AI agents within our platform, we do have the side of it where you can build your security and your compliance and governance component as well. And so a good example might be like the replit example. I would probably want to if I, if I was in that situation, again, with replit, I would have what we call an agent constraint in place. And what an agent constraint does is it looks at all the tools that the AI has access to to do so it has to be able to read databases and write to databases, for example, but maybe it doesn't need the capability to drop a database or truncate or delete a database. And so we could actually have a policy that says, well, we never want the agent to be able to do that. And if the AI again is deterministic, it doesn't necessarily know, or you can't anticipate, what it's going to decide it wants to do with a decision of the information it's been given by. What I definitely don't want it to do is certain things with my database. So I can actually have a policy that says, Well, this AI has access to do all of these capabilities, but I don't want it to be able to be able to drop a table or delete a database. And if I put that policy in place, I've solved replit, I've solved that issue altogether. And these guardrails are not just for tools, but they could be for things like maybe I've got a bot on my website. I don't want it to talk about certain things, like my competitors, I don't want it to be, you know, prompt injected or be manipulated in any way that would give false information to the website visitor. Having guardrails in place mean that I can monitor, track and manage the language that goes in and also out of the the LLM so gives a little bit more control than you otherwise would have had, had you had no guardrails.
Jim James:And mindful of with that, thinking about how when you have a new company and you have you have an office and you bring people in, you have policies, employee policies, and you have some guidance and some guidelines written down. And. But what you've really raised to me there, Dave, is the real risks that I'm actually kind of letting almost anybody into my new office and saying, do what you like, kind of thing. We're working on this. But I haven't really got any security on the on the doors. I haven't got any way to keep an eye on them when they're actually in there as well. Let's just move on a little bit. As an entrepreneur building something often we we then look for funding. What do you think the implications are for risk and for VCs? Because if they're looking to scale the company, the entrepreneurs often looking for series A or Series B after friends and family. What do you see as the implications of this kind of no security building on vibe versus, for example, an AI orchestration platform, where you've got some guardrails and you've got some security policies, what do you think is going to be the impact?
Dave Horton (Airia):I mean, certainly, you know, from what I've seen, a lot of VCs are actually considering AI as its own threat vector and its own kind of additional set of risks when they're making evaluations as to who do I invest in and where do I put, you know, my my customers money when it comes to these technologies. Now, the risk that you see quite naturally, is the llms. They're trained on data sets that might not belong to you. They might belong to they might not even belong to the model provider. In some instances, we've seen quite famous case studies on and so if you're building an application and it is leveraging some of this data that ultimately feeds into your intellectual property, and there is some kind of dispute. Then, you know, if I'm funding a company, that might be a bit of a challenge for me to evidence or be able to justify, where did that data come from? What is actually my intellectual property as a company, and what was derived by the AI that I leveraged to build my product? So it's, it's a complex question. Well, it is.
Jim James:But I think also you've raised a couple of things there about the IP that if you're inventing this as within lovable, for example, in the same way, if you use Dali, for example, to generate an image, you don't own the rights to it. So I guess at some stage you may find entrepreneurs being questioned about whether they really own the IP, yeah, and that the VCs are going to be asking, as you say, for some verification, also that people can maintain the quality of that product. I mean, how does that play out? Because if you are doing vibe coding, and something goes wrong, and you've got investors, what's the implication for kind of the risk that the investor is buying into a company that can't really manage business continuity?
Dave Horton (Airia):Yeah, I mean, you know, with citizen, AI, anyone can build an application, and so it's incredibly easy for me to go and build some software. But, you know, I think when we're selling to enterprise, they need that level of kind of, H Ha, you know, dr, they need to be able to have some of these standards that mean that the code is version controlled, the information within it is backed up. You know, there's enterprise standards behind the scenes that go into it. And so I think a VC needs to consider not just are they using vibe coding, but have they built the infrastructure around that first phase? That means that, okay, code pushes backups of that data. All of this are also considered in the grand scheme of things.
Jim James:Yeah, I read somewhere that 70% of institutional investors now are looking at part of their due diligence being on the coding, and whether it's if you like a regional source coding, or whether it's coming from a generic platform, which, as you've said earlier, might have been duplicated and shared somewhere as well. What about the kind of defence of the product or the app that you've built. How are people, if you like red teaming or trying to break software? Because we've talked about compliance. But there's also risk threat, which is bad actors. And I've worked with clients like f5 before, and been frankly shocked and scared at the level of malice, but it's, you know, often large bad actors that are well paid, even state funded, sometimes, that are trying to break and steal. How does ARIA help with with, if you like, testing within a secure environment?
Dave Horton (Airia):Yeah, it's actually very interesting. I mean, I, I'm a cyber security practitioner by by trade. You know, that's where I kind of, you know, gravitated my career. And as much as I'm excited about AI, I'm also intrigued by, you know, some of the innovation around the, you know, the red team, the hat, the hackers, and so, you know, one of the one of the use cases that that we see with AI is that it's actually opened up a lot. New threat vectors that were not necessarily understood even a year ago. There are, there are new technologies that have been innovated in AI, but now there's also a counter play, where there are new threat vectors or attack services that are vulnerable, that need to be understood fully by a company developing their own applications with AI now red teaming in itself. There's a few ways that you can, you know, kick off a programme where you can just see, well, susceptible is my AI that I've I'm very proud of, but it's how well does it perform? And some scrutiny, you know, with high level attacks. And so an attack that I might go and perform on an agent might be a prompt injection attack where I try and get it to break outside of the rules that have been defined within the prompt itself, for example. So if it's a HR bot, for example, maybe I try and get it to say something it was not designed for, or give me information it didn't it shouldn't necessarily be giving me and so I might use a library attack to essentially go in and maybe throw 100 different inputs into my agent and see, well, how much of that could get flagged, flagged back as a as an issue. And there's other things as well, like, you know, DLP, if I try and insert DLP, data loss protection. So let me try and extract some personally identifiable information, or even put some of that personally identifiable information into the agent and see, will it accept it? Will it continue with the that line of questioning? And so red teaming allows you to, in the first instance, just see, well, what are my guardrails not doing if I don't have any guardrails, you know, what's the LLM allowing the attacker to extract from this agent that might have access to some pretty sensitive data if you're integrating it with your existing applications. But an extension of that is that we actually have a swarm of agents that can actually be tasked with attacking an AI agent and seeing what it can extract. So just with natural language, I'll give my swarm of agents the task of trying to exfiltrate some credit card numbers, for example, from an LLM that we've got, we've got set up, and it can go and just try, you know, multi turn, so maybe over a conversation of 30 different utterances, what can it extract and see if there's success or failure? So it really gives a bit of a benchmark as to without me finding out the hard way. You know, seeing before we launch an agent, before we go and productionize it, is it susceptible to anything that we might need to consider a guardrail to protect against.
Jim James:So to be fair, to say that maybe the analogy is that you can, you know, you can build and you can, you can test drive it, but in private security without prying eyes, in the same way that they test drive, you know, cars in in the Arctic Circle, for example, before it comes back to being driven on the main road,
Dave Horton (Airia):exactly right? And it's good practice to see, well, in the worst of conditions, how does that? How does this agent or car perform in these circumstances? And obviously, you know, the feedback might be that there were, you know, 10 different avenues that you know, weren't protected when we we have the guardrails that we went out for. So let's go and enhance those. Let's go and add those enhancements into what we would productionize and retest. And interestingly, you know, the we've talked about the deterministic element of AI over time, your LLM might get some kind of drift. That might be changes from when you launched it to day. And so you want to actually test on a regular cadence, so maybe even schedule every day I'm going to run the same test and see if there's any change in the security posture. And if there is then I kind of alert my team to go and monitor or how did that happen? Do we need to make an additional guardrail? Is there anything that we would do to enhance that security?
Jim James:Is that something then, within aria that people can set up and it becomes, if you like, a controlled, repeatable experiment
Dave Horton (Airia):exactly, it's kind of like penetration testing, but on your agent. So it really just gives you the ability to get up to the minute kind of feedback on what is it is your agent susceptible to? And you can, you can schedule that. Most companies are looking at standards like SOC two and ISO 27,001 they're usually a yearly pen test on your application. Is what is is required. But this gives you the ability to do it every day or every week if you wanted to to see, you know, what are those threat factors
Jim James:we've talked a lot about, you know, compliance, about security, about investor risk, which are, I think, important, because people think about the opportunity of AI without necessarily thinking About in Downstream what might happen? Because if they're successful, of course, they become a potential target for attack. But Ari is not just about defence, it's also about creativity and about engagement. Tell us a little bit about the Williams collaboration, which, of course, is famous f1 and let's talk a little bit more about the. Engage with the community and and how people are actually using ARIA, and how you're getting ARIA into the market. Yeah.
Dave Horton (Airia):I mean, you know, the the Williams connection is obviously a pretty exciting one for, you know, a motor racing fan like I am, but you know, when you look at Formula One, everyone thinks it's about the cars and the drivers. But what they fail to realise is that it's a, you know, each team is its own company. Each team thrives on data. And, you know, they're not just competing with the car and the driver, but also the the technology stack is a, you know, a component of the success of any particular team. And it's quite ironic looking at, you know, the 2025 Formula One season, each car has probably got an AI sponsor because, you know, it is such a component of of that data analytics, etc. And so area we chose the Williams Formula One team. They've obviously, you know, had a legacy and a history in Formula One, arguably very competitive this year, you know, being fifth in the championship, which is higher than they have for some time. Now, I can't attribute that to strictly to area or to AI, but certainly, you know, if we're considering that AI is an unfair advantage, if you capitalise on it in a certain way. That's really what the Formula One teams are doing right now. They're looking at, well, how can we have aI interpret the regulations, for example, and maybe give us some insights, rather than having a swarm of people go through the, you know, 1000s of pages of technical documentation and interpret that, AI is fantastic at looking at natural language and maybe interpreting or seeing how the language could be construed in a way that would give us a an advantage. But really, the, you know, there are so many different use cases that, you know, in any particular interaction that we have with Williams, there's always some someone that has a new idea as to how we can use AI, and it's, it's not just on, you know, the performance of the car. They're a company like any other. They have a hiring team, they have a HR team, they have a legal function, a finance function. And lots of the agents that we, you know, work with some of our largest customers, are very transferable between any company. And so you know, going back to your question on community, we have an agent community where customers can build their own agents, and if they want to, they can actually share it with with the community. So if I've got a really unique idea, I've spent time developing the perfect Agent with the right tool set, I can release that to the community and get some kudos for being able to develop something quite so innovative, but it also allows others to maybe get 80% in the way to a use case being complete within their organisation, without having to start from scratch every single time.
Jim James:Yeah, so Ari is going to help Williams to go faster, and you've got the presume the human side as well of monitoring and evaluating how the how the drivers are going. You mentioned about the agents. Dave, can I ask you a question? Do you have a favourite agent? And if you do, what would it? What does it do?
Dave Horton (Airia):Yeah, I mean, one of the ones that I commonly use is I'm, every day, I'm speaking to customers and prospects of area. And one thing that I take quite a lot of time to do, or at least prior to working for area, was, well, it really pays to understand who you're about to speak to. You know, what's their background? What's their specific job role? What sort of technology have they worked with before? What are the values that their company has so that I can align, you know, how I were to speak to them. And so, you know, really simple kind of research agent. I can create a an agent that will connect to my calendar, and I can ask a question, like, you know, research the meetings I have today. It'll go look at my calendar, see all of the meetings that I have. And then with prompt engineering, I might say, Well, I'm only interested in the ones that have customers on them, for example, and it can go and pick up the attendee list, go off to to do essentially a Google search, do some research on who they are. Maybe that tags on to, you know, their LinkedIn profile, whatever they've got out there. And it kind of builds me up a map of like, what's important to this, this person I'm about to speak to. Is there any particular area that I might be better to know about going into this meeting rather than not you know the company itself, if they're an oil and gas company, for example, then I know which agents might resonate better with that particular audience. And so it's a very simple agent, which is an LLM with maybe two or three tools that has capability for but it saves me, over a course of a few months, hours and hours of time of just doing research, and at best, it gives me a better visibility into how to approach customers, how to speak to them about what they care about. Great, although
Jim James:I won't take it personally that you said the important meetings are where you meeting someone might be a customer. Well podcast are also equally important. How difficult it would it be for someone like me to build an agent, maybe taking using one that's already there and modifying it? How accessible is it for people to use ARIA? Yeah, so
Dave Horton (Airia):we've kind of taken the approach that we'll try and be all things to everyone. And so there is an angle where we actually develop the product. So it is drag and drop interface. It's very much kind of like other orchestrators that that you've mentioned as well. So we would call this sort of the low code approach to the platform, where I don't need to do any kind of coding. I don't need to touch any kind of Python scripts or or anything like this, I can just sort of configure it's all click, click through. I can drag and drop, connect the links, and then I can I can run test it, deploy it how I want. But we also do cater for some of those more pro code scenarios where I do want to do something clever, where I'm maybe having a full kind of agentic flow. Maybe I'm using machine learning models to consume data. Maybe I have to use some Python script to manipulate the data for my particular use case. So we're trying to give customers the tool set, whether they are kind of citizen AI, with very little kind of technical knowledge, the same platform for the player. The Pro coders that you know, wants to do very elaborate kind of connectivity within their organisations
Jim James:data, okay, but then they can do all of this coding with a with a peace of mind, as it were, that they've got compliance and they've got security, and they're minimising their risk. And ultimately, if they've built something useful and valuable that they could monetize it without any threat from from outside, Dave, this listen to a little bit about you as well. Tell us a little bit Dave Horton and your role as well. Yeah.
Dave Horton (Airia):So, I mean, you know, for the last 15 years or so, I've been in the Solutions Engineering realm, and essentially, the way I kind of explain this to, you know, analysts and customers when I, when I introduced myself, is that my team has the highest touch point with our customers, you know, the people actually innovating with AI, which means that, you know, we're really on the the front line as to, you know, what is it that people are doing in terms of use case, or what are the important aspects that they want to consider When building out these agents? And so the challenging aspect is that, you know, you're literally giving people the flexibility to do 10,000 different things for a particular use case. You know, how long is a piece of string is? You know, is quite hard to answer when you you don't know their technology stack and such. And so my team really works with them to understand, well, what does your technology look like? Where is the data that would be useful to build out an agent, ultimately build out that that agent show them the value of the platform, being able to do this incredibly quickly and then ultimately secure it as well. So, you know, not just that on the innovation side, but might, might even be a completely different stakeholder within the business, we would have a very different conversation about, well, this team is innovating, but you're probably concerned about, you know, some of the safeguarding and responsible AI, here is your section of the platform that allows you to protect and manage that side.
Jim James:And one of the reason I ask that is because, you know, I've been to the ARIA website, and I've had a demo, of course, and, and it's really, really impressive, not only the platform, but actually we can talk to a human. And I'm using, you know, lovable and Nan, but the best you get is to talk to another AI bot. So I thought that was a really interesting approach that Ari is investing in, in the human side, that actually you can make an appointment and have a one to one call to get your needs met and to give you guidance, plus you've got this community. So I thought that was very interesting, that you're there too.
Dave Horton (Airia):Yeah, I think, you know, the the natural instinct of everyone is that AI is going to, like, solve every single problem, but you can't solve human interaction, you know, with AI necessarily, obviously, we want to keep, you know, my if I look at my team, for example, we've got a global team of solutions engineers and different geographies. And the reason that we have that at all is that, you know, customers do like a face to face. They do like to be able to to speak to someone about, you know, their particular issues. And you know, by having a global team that's that's really ready to support, you know, it really opens up some doors and into maybe additional use cases they hadn't considered. So I'm, I'd like to, I might be lying, but, you know, I'm quite glad right now that AI is not coming directly after my job. I think you still need some kind of level of human interaction to kind of truly understand and articulate value. But I think as well, people do look at AI as maybe keeping smart people working on smart problems, rather than it's replacing smart people, you know, the agent I mentioned earlier on about, you know, doing some research for me, that is a task that I no longer have to do. I can outsource. That to AI, but I can still have that customer interaction. I can still make the best of my time. And so I probably encourage everyone to kind of look at your role and think, Well, what are the areas that I could outsource to an AI so that I can be more focused on, you know, my specific skill sets, my specific kind of value add when I'm interacting with my customers and my employees.
Jim James:Yeah, you're right, David. And this idea that people that lose their jobs not be the, you know, the people that work with AI, he'll be the people that ignore AI, isn't it? And so you're using it to optimise your performance. Okay, I'm going to ask you a question that you I didn't prepare you for. Okay, where do you see AI going in the next let's say, can I say three to five years? I know it seems a long time away, considering how much things have moved in the last 12 months, but can I ask you to give us an idea where you see AI going and where you see ARIA fitting in with that?
Dave Horton (Airia):I think the real kind of interesting aspect for me is that we know. We don't really know where it's all going just yet. I think the there are some guesses that I could make around how we will interact with AI in the future. I think the model that you know chat GPT went down where you have kind of a textual input, and you ask questions, you get responses, but it means I have to leave the application where I had that question is something that is it needs to be addressed. I think people want AI where they're working. They don't want to be redirected to where they're not working. So I think there's some technical elements where we can bring AI closer to where the user is in terms of their workload. But I'm pretty excited. I mean, if you just look at, like, video and image creation, in the last year, it's advanced, you know, I think there's going to be some really interesting kind of arenas where we can't even anticipate where it's going to go. And, you know, I'm always looking at the AI innovation from the slant of an enterprise. So ultimately, is image generation and video creation, is that an enterprise kind of value add, or is that kind of a, you know, you know, consumer curiosity. So from an enterprise standpoint, I think there are standards around. Well, how do we get end users authenticating to the right applications? How do we secure to make sure that they're we're not giving too much liberty to the AI to deliver what it what it needs to be. Maybe a boring kind of topic there, but I'm actually quite interested to see how the security side and also the AI governance side, you know, with the EU AI act coming coming online, it's going to become more commonplace that you're going to have to evidence quite strongly. What was, what was your thought process? How did you build privacy by design into, you know, some of these agents that you're building.
Jim James:Dave Horton, VP of solutions, that are, if people want to find out and connect with you and maybe get a demo, how can they do that?
Dave Horton (Airia):So again, we're really easy to work with. So, you know, the website is obviously a great place to go and see kind of high level information you can sign up for for a trial. There's also kind of communities like discord that you can sign up to and kind of ask questions. We also have the capability to run trials of our platform so people can get access to it. And within the platform, you can request the one to one with any of the the solutions team to kind of help you through as well as having kind of video content, etc, around some of the core pieces. For me personally, you know, I'm on LinkedIn, you know, it's probably where I'm most active. You can see where, you know, area is, is kind of operating. And so if you just wanted to get maybe not just an understanding of area, but also, you know, what is AI? What the possibilities there? Hackathons are really great kind of community events that I'd also encourage to look at. And we've got a calendar list that that you can see on the website, that you can go and join one in your region, your city.
Jim James:Great. And Dave, I'll, I'll put the link to that on the show notes. Dave, thank you so much for joining me today in the London studio. Thanks so much. So we've been talking to Dave Horton, who's the VP of solutions at Ari, that's a, I R, I A, as you can see from the logo behind me, fascinating. The opportunities of AI are immense, but also some of the risks of compliance and threat. So if we build products and apps, we must think of the opportunities entrepreneurs, but we must also think of the long term exposure that we create for ourselves and those people we work with. So hope you've really enjoyed this. Obviously, all of the show notes will detail Dave's details and where you can sign up for hackathons and trials, and as always, if you've enjoyed the show do please share it, because we don't want any entrepreneur to get left behind.
Podcasts we love
Check out these other fine podcasts recommended by us, not an algorithm.
Systemize Your Success Podcast
Dr Steve Day
Accelerating Your Authority
The Recognized Authority · Alastair McDermott
The Storylux Podcast with Simon Chappuzeau
Simon Chappuzeau