What's Up with Tech?
Tech Transformation with Evan Kirstel: A podcast exploring the latest trends and innovations in the tech industry and how businesses can leverage them for growth. We dive into the world of B2B, discuss strategies and trends, and share insights from industry leaders!
With over three decades in telecom and IT, I've mastered the art of transforming social media into a dynamic platform for audience engagement, community building, and establishing thought leadership. My approach isn't about personal brand promotion but about delivering educational and informative content to cultivate a sustainable, long-term business presence. I am the leading content creator in areas like Enterprise AI, UCaaS, CPaaS, CCaaS, Cloud, Telecom, 5G and more!
Building An Open Agent Mesh For Real Enterprise Workflows
Interested in being a guest? Email us at admin@evankirstel.com
What happens when agents can talk to agents—securely, asynchronously, and at scale? We sit down with Edward Funnekotter, Chief Architect and AI Officer at Solace, to unpack the architecture behind an open agent mesh and why event-driven design is the key to turning AI hype into dependable workflows. Instead of a single chatbot, imagine a network of specialized agents that advertise their skills, discover each other, and coordinate like a high-functioning team. They fetch data from Salesforce and support systems, compile a customer health report, rank the urgent accounts, and hand you the three actions that matter.
We get practical about where to start. Dynamic dashboarding and interactive analysis are low-lift, high-impact wins, and they translate smoothly into scheduled reports and triaged alerts. Using an event broker lets you subscribe to only the events you need, template them into agent requests, and run parallel tasks without disturbing existing integrations. The philosophy is simple and powerful: make LLMs do less. Reserve the model for judgment and planning, then let deterministic tools handle transformations, queries, and updates. You’ll cut latency and cost while improving reliability.
Security isn’t an afterthought. We dig into prompt injection, data leakage, and how to enforce tainted-context policies so confidential data never escapes to public tools—and external data never triggers risky internal actions. Lineage and audit trails make decisions explainable. We also explore the role of standards like A2A and MCP, why big-tech sponsorship accelerates adoption, and how open-source on-ramps paired with enterprise hardening create trust. Looking ahead, we see an AI org chart that mirrors the human one: layered roles, clear responsibilities, and shared context running on faster, cheaper infrastructure that prioritizes answer quality over raw speed.
Ready to build with confidence instead of cobbling in the shadows? Listen now, subscribe for more conversations at the edge of AI and architecture, and leave a review with the first agent workflow you’d deploy.
Interviews with Tech Leaders and insights on the latest emerging technology trends.
Listen on: Apple Podcasts Spotify
More at https://linktr.ee/EvanKirstel
Hey everybody, fascinating discussion today around what it means to have an open, interoperable agent ecosystem, with a true innovator in the field at Solace. Ed, how are you?
SPEAKER_01:Great, yeah, great to be here, Evan.
SPEAKER_00:Good to have you. You're really doing some amazing work. Before we get into that, maybe introduce yourself and your journey with Solace. And what's the big idea behind your agent-to-agent communication approach?
SPEAKER_01:Yeah, thanks. So my journey with Solace has been quite long, actually, almost all my career. I started way back 22 years ago at Solace, almost at the very beginning of the company. Through that time I've done low-level hardware work, FPGA work, management, but the last three years have been all AI. Our background in technology over the last couple of decades has all been around event-driven architecture, specifically around our pub/sub type of broker and all of the different innovations we've brought to it. And now we're looking at, in addition to carrying on that line, how we can leverage all of that technology to build a very open, easy-to-use, easy-to-extend agentic AI platform. That's really what I'm 100% focused on now.
SPEAKER_00:Wow, what an amazing journey. And your time is now, so congratulations on that. So, what does agent-to-agent communication really mean for those tuning into this space, and why is it so important?
SPEAKER_01:Yeah, well, I think it's the next step in generative AI. We know clearly from the last few years that a chatbot, a ChatGPT-type experience, is good. We can get a lot of interesting information from it, and it can do stuff for us. But as we move to the enterprise and really try to leverage what it has available in day-to-day activities, and especially as we get better models and more automation behind the scenes, we need to build platforms that are capable of doing a number of different types of tasks, but all as one kind of workflow. Ideally it can be very adaptive to what's going on, so it's not too prescriptive, it has some flexibility, but you need, of course, some guardrails around that. To build something like that, at least the approach we're taking is to make great agents unto themselves, but then have them able to talk to each other. There's been movement towards this: A2A from Google last spring provided a protocol, essentially, to allow these agents to talk to each other. But it's all point-to-point; it uses HTTPS as its communication transport. We had already, for a couple of years before that, been working on how to get these types of things to talk to each other asynchronously, back before agents really existed as a term or a well-defined technology. And that's what we've been building. With the arrival of A2A as a protocol, we jumped on board with it as far as the message patterns, the protocol itself, and how these messages are created, but did it in what we think is the right way, which is an event-driven architecture platform using Solace brokers.
And it allows these agents to really have the flexibility to do their work, but be aware of others within the ecosystem, call them asynchronously, carry on with other work, and then get results back, whether that's running very locally or possibly around the world. One of the huge benefits of doing it this way is that you can run an agent process somewhere, maybe close to your data in one cloud or another or on-prem, somewhere that's fairly secure and locked down. All it needs to do is reach out to where the broker is, so a single connection out, and then all communication goes over that. When it connects, it also publishes its agent card, which it does periodically, and that allows everything else to be aware that it's there, to know its skills, its capabilities, and all of that. The other agents have security rules and so on to say what each one is allowed to talk to. But it makes it very easy to create a hierarchy of agents, basically like a company org chart of agents with different roles and capabilities. Maybe you have a group over here that uses a number of different agents communicating with each other to fulfill a task, but then a higher-level one can make use of that group and of this other group over there, bring it all back together, managing all of the data to fulfill the top-level request. So it's really exciting. I think it really is a great way to break the problem into pieces, but build it on top of a robust enterprise-grade communication platform that we've been building for decades, and create a deployable system for your enterprise that can make use of these agents.
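The discovery pattern Ed describes, agents periodically publishing an "agent card" over the broker so peers can find them by skill, can be sketched in a few lines. This is a toy Python illustration: the in-memory `Broker` stands in for a real Solace event broker, and all class names, topic names, and skills are invented for the example, not Solace or A2A APIs.

```python
import json
import time
from collections import defaultdict

class Broker:
    """Minimal in-memory stand-in for a pub/sub event broker."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, payload):
        for cb in self.subscribers[topic]:
            cb(topic, payload)

class Agent:
    """Advertises its own agent card and keeps a registry of discovered peers."""
    def __init__(self, broker, name, skills):
        self.broker = broker
        self.name = name
        self.skills = skills
        self.peers = {}
        broker.subscribe("agents/discovery", self._on_card)

    def advertise(self):
        # Published periodically in a real system; once here for the sketch.
        card = {"name": self.name, "skills": self.skills, "ts": time.time()}
        self.broker.publish("agents/discovery", json.dumps(card))

    def _on_card(self, topic, payload):
        card = json.loads(payload)
        if card["name"] != self.name:
            self.peers[card["name"]] = card["skills"]

    def find_peer_with(self, skill):
        return next((n for n, s in self.peers.items() if skill in s), None)

broker = Broker()
orchestrator = Agent(broker, "orchestrator", ["planning"])
crm = Agent(broker, "crm-agent", ["salesforce-query"])
support = Agent(broker, "support-agent", ["ticket-lookup"])

for agent in (crm, support, orchestrator):
    agent.advertise()

print(orchestrator.find_peer_with("ticket-lookup"))  # support-agent
```

Each agent only ever opens the one outbound connection to the broker; discovery, skill lookup, and requests all ride over that same link, which is what makes the locked-down, on-prem deployment story work.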
SPEAKER_00:Fantastic. Well, that's impressive. There's a lot there to unpack. I think you call your platform Agent Mesh. Can you give us some ideas of the new connected autonomous systems and applications you think are low-hanging fruit, that will be developed short-term versus long-term? What kind of scenarios are you envisioning here?
SPEAKER_01:Yeah, so we look at it as a set of use cases that will grow over time, as we get more security in place, more confidence, and as the models get better. A lot of the early ones, I think, are what we call dynamic dashboarding, or interactive analysis of data. Say you want to create a report and you have a bunch of different areas you need to pull from. Perhaps it's a report on the health of one of your customers within the business. You might want to go to Salesforce and pull out data about opportunities and sales, but you also want to reach into the customer support system to pull in current issues they might have. And there might be other information in your own internal documents, whether it's SharePoint or Confluence or something like that, and you bring all of that together. If you've built a platform with different agents that have different responsibilities to fetch that type of data, you can talk to a higher-level agent to do that work for you: make the requests to these lower-level agents to retrieve it and then produce a report, which to start with is probably somewhat interactive. But then over time it might be time-based, so you've scheduled one to happen once a week or once a month, and not only create the report, but then perhaps have a higher-level agent rank them and say, I've built these reports, you can go look at them if you want, but really it's these three companies that you want to pay attention to.
And that's the beauty of one of these systems: you can do these underlying activities, but then have a higher-level agent bring forward the things that are important, even saving you the work of reading the reports it's generated for you. But you have them there when you need them, and if you're going to go visit that customer, clearly you'd want to pull one up. So that's where we see the starting point. It's a bit more interactive; it lets you check that the data is correct, and our system is very good at surfacing the lineage and the background of where the data has come from. That's number one. But as you go forward, we see far more usage of our conventional event-based platform, where you might have lots of different things going on in your business that are just normal Solace-type things, where every PO goes through the system, or every kind of event you can imagine. And now you're peeling off some of those events to do some sort of agentic flow with them. For instance, a customer was just put into Salesforce for a new opportunity, and now you have some new AI-related workflows you want to run, things that aren't just a programmatic kind of flow. For our existing customers, which are worldwide and quite successful, using that technology is very familiar, and now you can have this thing on the side that does extra analysis on select pieces of it. And the way an event-driven architecture works, especially with pub/sub, you add subscriptions to pull in certain events very selectively, and then you have a templating ability to say, for this event, turn it into this type of request that goes to this agent, and then it goes and does stuff.
It might respond with a new event to the system, or it might actually go and do some work in the background. That's more of a secondary thing, I think, from a use case point of view, because it's much more autonomous, and clearly you need to have done a lot of testing to make sure it will work very reliably, or that the risk is low if it doesn't. Later on, there are more and more autonomous things we can do through that mechanism. But it really is bringing together worldwide events, which are already going through Solace or perhaps other event brokers, and then initiating work in the system using those events.
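The subscribe-and-template step described above, peel off only the events you care about and turn each into an agent request, reduces to a small routing function. A minimal Python sketch: the topic names, template format, and agent names are all hypothetical, invented for illustration rather than taken from any Solace product.

```python
import fnmatch
import json

# Topic subscriptions: only these slices of the event stream are peeled off
# for agentic processing; everything else flows through untouched.
SUBSCRIPTIONS = ["crm/opportunity/*", "support/ticket/escalated"]

# Per-topic templates that turn a raw business event into an agent request.
TEMPLATES = {
    "crm/opportunity/created": (
        "analysis-agent",
        "Summarize risk and next steps for new opportunity "
        "{opportunity_id} at account {account}.",
    ),
    "support/ticket/escalated": (
        "triage-agent",
        "Assess escalated ticket {ticket_id} for account {account} "
        "and rank its urgency.",
    ),
}

def route(topic, event_json):
    """Return (agent, request) for subscribed topics, or None to ignore."""
    if not any(fnmatch.fnmatch(topic, sub) for sub in SUBSCRIPTIONS):
        return None  # not subscribed: existing integrations are undisturbed
    entry = TEMPLATES.get(topic)
    if entry is None:
        return None
    agent, template = entry
    return agent, template.format(**json.loads(event_json))

print(route("crm/opportunity/created",
            '{"opportunity_id": "OPP-17", "account": "Acme"}'))
```

Because routing is pure subscription matching plus templating, no model call happens until a matching event actually arrives, which keeps the existing event flows untouched.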
SPEAKER_00:Fascinating. So enterprises are starting to connect agents, and that will only snowball. Is there a kind of tipping point where traditional integration tools just can't keep up anymore?
SPEAKER_01:A lot of what I do, I think, is trying to leverage large language models, or AI models, to do what they do best, but still use traditional software-based tooling whenever we can. AI models will always be slower and more expensive than code that's already written, tested, and reliable. So you don't want to just replace it all. What you want to do is figure out how to, honestly, make the LLMs do the least, but handle the critical parts of the flow: make those decisions that are just too complicated to code into a big if/else structure. Then have them kick off a set of tools that will do the work in a reliable, quick, and cheap way. I've said this to a number of people: my job is to make the LLMs do as little as possible, because that's where you're going to have repeatable, good success, and it will be cheaper and faster, but you want to leverage them where you need them. It's sort of like, I don't want to say replacing people, but if you look at what a person does within a business, they're trying to do the stuff their brains are good at and then use tools for everything else. I think that's the same philosophy for building a system like this.
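The "make the LLM do as little as possible" philosophy can be shown in a tiny workflow: deterministic, tested functions do the fetching and transformation, and the model is reserved for the one fuzzy judgment. In this Python sketch every function name and the data are invented for illustration, and the LLM call is stubbed out.

```python
def fetch_open_tickets(account):
    """Deterministic tool: fast, cheap, unit-testable. Data is invented."""
    tickets = {"Acme": [3, 7], "Globex": []}
    return tickets.get(account, [])

def compute_health_score(open_tickets):
    """Deterministic transformation: no model call needed for arithmetic."""
    return max(0, 100 - 20 * len(open_tickets))

def llm_choose_action(health_score):
    """Stand-in for the one LLM call, a judgment too fuzzy to hard-code.
    In practice this would be a model request; here it is stubbed."""
    return "escalate" if health_score < 70 else "monitor"

def customer_health_workflow(account):
    tickets = fetch_open_tickets(account)   # no LLM: plain query
    score = compute_health_score(tickets)   # no LLM: plain transform
    action = llm_choose_action(score)       # LLM reserved for judgment
    return {"account": account, "score": score, "action": action}

print(customer_health_workflow("Acme"))
# {'account': 'Acme', 'score': 60, 'action': 'escalate'}
```

Only one step in the workflow would ever incur model latency and cost; everything else stays deterministic, which is exactly the reliability trade Ed describes.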
SPEAKER_00:Fantastic. Lots of concerns around data governance and security in these distributed AI systems. How do you put your developers and clients at ease here?
SPEAKER_01:Yeah, that's a huge subject unto itself, and it's obviously very important. There's a variety of things that can go wrong. There are the malicious ones, like prompt injection, where any data you pull in from out there, if it's going to come into your AI context and then go to a model, could have some type of jailbreaking request in it, something to break out of that and start doing things within your system. Clearly very bad. Then there are the inadvertent leaks of data, where you might go to some internal tool, but that agent also has the ability to go off to an external one. Who's making sure that data from your financial system doesn't get put into a Google search, or some other web form this thing has access to? So we're looking very carefully at a lot of that. Again, making the LLM do as little as possible also means trying to keep the result data from tools out of the LLM context: as much as possible, the LLM is told about the type of data, the description, the schema, all of that meta information, but not shown the raw data that came back. Then you can use more programmatic tools to do further work on it and only return the numerical results or whatever you need. Don't give it raw text, ever. We're doing a lot of work towards that; again, it's faster and more reliable if you can do it well. The other part is data lineage, and tracking a sort of tainted context: describing my tools with a classification of data security, to say this is classified data or this is public data at the source, and then putting a restriction on various tools to say whether they're allowed to consume it.
Once I have brought that data into my context, am I no longer allowed to call certain tools, because I've now tainted my context, if you want to call it that, with confidential data? So I can't use a public tool anymore. This is still maybe an area of research for us, but that is one of the ways we see it, on both sides: I've brought important data in, so I can't use those external tools; and conversely, I've brought in external data, so now I can't call the internal actions I might call otherwise. You can let the AI actually be aware of that and plan ways to avoid the problems, where you might say, I've done a web search, brought something in, and now I want to call my finance system or go do something in Salesforce. That's dangerous again because of prompt injection. So you can actually build a complete model of where data is allowed to be used, and prevent some of that leakage, but it's a tough problem. It's a very tough problem to solve.
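The two-sided tainted-context rule Ed outlines, confidential data blocks public tools, and external data blocks internal actions, can be modeled as a small policy check. This Python sketch is a toy: the classifications, tool names, and policy shape are invented for illustration, not a Solace mechanism.

```python
CONFIDENTIAL, EXTERNAL = "confidential", "external"

# Hypothetical tool registry: what each tool's results are classified as,
# and what kind of tool it is.
TOOLS = {
    "finance_query":     {"produces": CONFIDENTIAL, "kind": "internal"},
    "web_search":        {"produces": EXTERNAL,     "kind": "public"},
    "salesforce_update": {"produces": None,         "kind": "internal_action"},
}

class Context:
    """Tracks which data classifications have entered the agent's context."""
    def __init__(self):
        self.taints = set()

    def allowed(self, tool):
        kind = TOOLS[tool]["kind"]
        if kind == "public" and CONFIDENTIAL in self.taints:
            return False  # confidential data must not reach public tools
        if kind == "internal_action" and EXTERNAL in self.taints:
            return False  # external data must not trigger internal actions
        return True

    def call(self, tool):
        if not self.allowed(tool):
            raise PermissionError(f"{tool} blocked by tainted-context policy")
        produced = TOOLS[tool]["produces"]
        if produced:
            self.taints.add(produced)  # context is now tainted

ctx = Context()
ctx.call("finance_query")            # context now holds confidential data
print(ctx.allowed("web_search"))     # False: would leak it outward
```

Because the planner can query `allowed()` before acting, the model itself can be made aware of the policy and sequence its tool calls to avoid ever needing a blocked one.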
SPEAKER_00:I bet. Well, it looks like you're on a track to solving it. But we are in early days. I was just at a Gartner symposium, and there was an insurance company there that had 81 different AI projects and initiatives underway. Where do you think the industry is in terms of these deployments: science projects versus trials versus real-world deployments? You have a unique perspective. What do you think?
SPEAKER_01:I think we're still clearly in the trial, demo, see-what-sticks-to-the-wall phase, honestly, for agentic AI. There are the security problems that people, I think, are often deferring, or at least not worrying about yet, because they're still just trying it and playing with it. But what I see daily from my own experience is that the people who know how to get the most out of AI are greatly accelerating themselves. And there are people who have not yet figured it out, or the tools they use aren't that good, or they don't particularly like it, and that's fair enough as well. But I can see, for myself and for others around me, that the ones who get it get great acceleration. So clearly there's real gold here if you can mine it. And so we're building tools to allow everyone to find that gold, if you want to call it that. And then there are, of course, the people who are just cobbling stuff together and making it work for themselves. We do see success with that cobbled-together approach. I think even the studies saying so many of these projects fail have highlighted that, through the shadow AI infrastructure or whatever you want to call it, people are using AI successfully. They may not be using the tools the company is providing for them, but they're using it successfully. So I think there really is a reality that this will change how people work, and it will be very valuable. There is absolutely gold here that you will find, and it will be easier and easier to find as these tools get better. But we're not there yet; clearly there's a long way to go to really make it all work. Again, what we're trying to do is provide a platform that makes it very easy for people to access that in a secure and company-managed way, so they don't need to run off and use ChatGPT on the side. They're getting a great experience from the company's provided tools.
And the company has a team, along with us potentially, that is managing the security updates and making sure all of the different pieces have been approved and are using appropriate security, which is a nightmare right now. People are just slapping MCP servers into their local desktop tools, throwing in their tokens, and saying, look how well this works for me. But if the security team of the company looks at it, they go, oh my God, you've connected this internally secure area to web tools over there, and you just have an AI choosing what to use. That's not secure. So I think there will be a period of clamping down within these companies to say you must not do that. It's so new that people are getting away with it a bit, and we're rushing to provide a platform where they don't lose out on anything in terms of being able to do that type of work; it's just done in a more secure way.
SPEAKER_00:Interesting times. Clearly Google's donation of their A2A standard to the Linux Foundation seems like a good move in this direction. What is the role of big tech versus smaller innovators here, as well as open source versus closed or proprietary, the classic question? How do you see this evolving over time?
SPEAKER_01:Yeah, it's a fascinating area for sure. I think the big guys have the name. Someone else could have come up with A2A and it would not have had the impact it did coming from Google. Same thing with Anthropic, which did the MCP definition. Both of those could have come about through some smaller, grassroots open-source effort, but clearly you benefit from a big name devoting itself for a little bit to put something out. If they've done a good job, let's just adopt it and move forward. Some of these things do tend to die as soon as you start getting huge committees, with everyone trying to weigh in. We're in the messaging and eventing space, and AMQP is a great example of that: a sophisticated transport technology for going between brokers, but it got so complicated and took so long to standardize even the 1.0 version that it just hasn't really taken off. That could happen in this area as well, but if one company does it, gets a bit of buy-in, and it moves forward, I honestly think that's a better way. When it comes to frameworks, a lot of early agentic stuff came from CrewAI and LangChain and so on, which were all open source and let people download and use them very quickly. We as well, for our Solace Agent Mesh, have a very fully featured, fully open-source version that you can download and get up and running very quickly and do a lot with. We have an enterprise version, of course; we are a business, and the enterprise version has much more security functionality. But in terms of getting something up and running and doing interesting things locally, you can do it all with the downloaded version. And today, I think that's essential.
How do people try it and make sure it works if they can't just download it and use it? You definitely don't want to talk to a salesperson before you can try something. I don't, anyway.
SPEAKER_00:Yeah, great point. Well, if you were to look three to five years out, what does a much more mature enterprise agent ecosystem look like? What is your ideal vision here?
SPEAKER_01:Yeah, for me, I think it becomes a sort of mirror of a company or an organization. I talk about an AI org chart. I do think that as these things get more sophisticated, the agents in place have roles, and those roles will expand; they'll get more information and more tooling and all of that. That will enable really sophisticated things to be done from a single top-level request. You know, it's hard to predict AI, literally. This is like a six-month prediction, not a three-year prediction. Three years out, who knows where these models are going? All bets are off. Still, when I look forward and ask which direction we should skate to meet the puck, for me it's more about this: if we take today's transformer technology for large language models, we know it will become faster and cheaper and incrementally better, because that's just how technology works. Will there be a new architecture, or some kind of crazy organic chip that does something differently? Who knows; that probably will come eventually. But if we just follow the current path, I don't worry too much. I like to think through things like using different models, stacking them on top of each other, doing LLM-as-a-judge and all of that. It slows things down today quite a lot, but we know chips and everything just get faster over time. What costs you a certain amount and takes maybe 10 seconds today will take five seconds probably a year from now, and cost a quarter as much. Some of the things that limit you today purely because of latency and cost will not be a problem, I think, in a year or two. That's just the way technology works; I don't see it stopping.
So again, even if the models themselves don't get any better, all the efficiency around running them, from power to speed to latency, will, as it has for decades, just get better year by year. We can rely on that. And that means a request that might take 10 minutes today, because it's passing stuff around, calling this, calling that, and throwing away a result because it didn't pan out, will not be as expensive or slow in the future. So we shouldn't shy away from thinking about a system that does that. We should care mostly about the quality of the answer rather than the time and cost of it, because that will not be a problem.
SPEAKER_00:Well, an extraordinary opportunity, and thanks so much for all the insight and education here. It's been really eye-opening. Appreciate your time, and onwards and upwards, Ed.
SPEAKER_01:Oh, yeah. Thanks for the opportunity. It's always crazy.
SPEAKER_00:Yeah, thank you, and thanks everyone for listening, watching, sharing this episode. Be sure to check out our companion TV show on Bloomberg and Swatch Business at techimpact.tv. Thanks, Ed. Thanks, everyone.
SPEAKER_01:Yeah, thank you, Evan. Talk to you later.