
AI Proving Ground Podcast
AI deployment and adoption is complex — this podcast makes it actionable. Join top experts, IT leaders and innovators as we explore AI’s toughest challenges, uncover real-world case studies, and reveal practical insights that drive AI ROI. From strategy to execution, we break down what works (and what doesn’t) in enterprise AI. New episodes every week.
Model Context Protocol (MCP) and Agent-to-Agent (A2A): The Future of Enterprise AI
Agentic AI systems have the potential to work together, but not yet at scale. This episode breaks down two emerging standards — Model Context Protocol (MCP) and Agent-to-Agent (A2A) communication. Both promise modular AI integration, safer data flows, and agentic collaboration that actually scales. WWT engineer Sally Jankovic and AI advisor David Geddam explain why these protocols are becoming essential infrastructure for enterprise AI.
Support for this episode provided by: Glean
More about this week's guests:
Sally Jankovic earned her PhD in Applied Math from the University of Minnesota in 2023. Drawn to real-world problem-solving, she began her career as a data scientist at a healthcare startup, working on ML, NLP, and LLMs. Her hands-on experience led her to the platform team before joining WWT, where she was inspired by the management consulting group's collaborative culture and the chance to work across diverse tech stacks and projects.
Sally's top pick: Agent-2-Agent Protocol (A2A) - A Deep Dive
David Geddam is a Senior AI Advisor and Chief Solution Architect at World Wide Technology, with 25+ years of experience in AI, healthcare tech and enterprise solutions. He's held key roles at Kaiser Permanente, GE Healthcare, Philips, IBM Watson Imaging and multiple startups. David holds degrees in Biomedical Engineering and Healthcare Technology and specializes in AI, cloud, NLP, computer vision and real-time navigation. On the hobbies front, he plays competitive tennis at the USTA 4.0 level for teams in St. Louis.
David's top pick: Scaling Agentic AI: Impact of Model Context (MCP) and Agentic (A2A) protocols
The AI Proving Ground Podcast leverages the deep AI technical and business expertise from within World Wide Technology's one-of-a-kind AI Proving Ground, which provides unrivaled access to the world's leading AI technologies. This unique lab environment accelerates your ability to learn about, test, train and implement AI solutions.
Learn more about WWT's AI Proving Ground.
The AI Proving Ground is a composable lab environment that features the latest high-performance infrastructure and reference architectures from the world's leading AI companies, such as NVIDIA, Cisco, Dell, F5, AMD, Intel and others.
Developed within our Advanced Technology Center (ATC), this one-of-a-kind lab environment empowers IT teams to evaluate and test AI infrastructure, software and solutions for efficacy, scalability and flexibility — all under one roof. The AI Proving Ground provides visibility into data flows across the entire development pipeline, enabling more informed decision-making while safeguarding production environments.
From World Wide Technology, this is the AI Proving Ground podcast. Today: two important breakthroughs you may be unfamiliar with, model context protocol and agent-to-agent communication, and how they are quietly rewriting the rules of enterprise AI. On paper, they're just standards. One shapes the data an AI can see; the other lets teams of AIs work together and hand that data back and forth. But in practice, they decide whether your customer bot can tap into a real-time inventory system, or whether a factory robot can talk to a quality control camera without tripping over compliance rules. To understand why this matters, and what happens when hundreds of these servers and agents light up inside a single company, I sat down with Sally Jankovic, an MLOps engineer, and David Geddam, a senior AI advisor. Together, Sally and David explain why, as AI systems get smarter and start making decisions by themselves, we need new rule books like MCP and agent-to-agent, and what problems they actually solve, both today and in the future.
Speaker 1:This is the AI Proving Ground podcast from World Wide Technology: everything AI, all in one place. Let's get to it. Hey Sally, hey David, thanks for joining us today on the AI Proving Ground podcast. How are the two of you? Good? Excellent. Well, 2025 has been billed as the year of agentic AI, and two methodologies have started to emerge. We've been hearing a lot about them on previous episodes of the AI Proving Ground podcast, and those are model context protocol (MCP) and agent-to-agent protocol (A2A). Sally, real quick, start us off, in case some of our listeners out there aren't familiar with either of those terms. What is MCP, what is A2A, and why is it important to understand them in today's somewhat chaotic AI landscape?
Speaker 3:Right, okay. So they're both different forms of standardization, basically. The high-level way to think about it is: how do we standardize how agents, and by agent we mean an LLM that has access to other tools, including APIs and that type of thing, integrate into environments and interact with objects? MCP deals with the objects the agent is interacting with: those tool calls. It's a way of standardizing how your agent plugs into a specific tool in a way that is modular and reusable. It was developed by Anthropic and released in November 2024, I believe. Since then it's taken off. Essentially, the reason it's taken off is that once you write an MCP server, as they're called, that's an integration of a tool into your agent, and you can reuse it with other agents. So it's just a really nice modular way of plugging in tools, and it also means it's a lot easier to always know how your agent is going to take in a tool if it's always in MCP format.
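To make Sally's description concrete, here is a minimal sketch of an MCP server written with the official Python SDK's FastMCP helper. The server name, the tool and the inventory data are illustrative assumptions riffing on the episode's inventory example, not a real integration.

```python
# Minimal MCP server sketch (assumes: pip install "mcp[cli]").
# The inventory tool below is hypothetical, for illustration only.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("inventory")  # the server name an agent host will see

@mcp.tool()
def check_stock(sku: str) -> str:
    """Return the current stock level for a product SKU."""
    fake_inventory = {"WIDGET-1": 42, "WIDGET-2": 0}  # stand-in for a real ERP call
    return f"{sku}: {fake_inventory.get(sku, 0)} units in stock"

if __name__ == "__main__":
    mcp.run()  # serves over stdio, so any MCP-capable agent can attach
```

Once a server like this exists, the same file can be plugged into any MCP-aware agent, which is the modularity Sally is pointing at.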
Speaker 1:So... oh, go ahead. Yeah. And David, tell me about how they work together, if at all, and what it enables the end user or the end client to do.
Speaker 4:Yeah, so it's interesting. I was actually looking at the history of MCP, right? Before MCP, what people were doing, when they were chatting with LLMs like ChatGPT, was adding context into things: hey, you know, I had a conversation two weeks ago or two months ago about this, these are some of the other events that happened, by the way, can you make better sense of this? That's where the whole context aspect started to get designed. And another pre-MCP piece was the Language Server Protocol, behind the coding assistants you're hearing about. How do they complete a line of code? Because they're reading all the programming languages, they know how to construct it, and therefore they know what the user is trying to do with the code. And that's kind of where MCP came from; at least, that's the history.
Speaker 4:And so, going back to why MCP, and where I came in: how do MCP and A2A start scaling the transition to agentic AI and physical AI, right? How does this work? Because right now, everything has been about generative AI. So that's where the connection is. MCP is standardizing information, as Sally mentioned. But the second part of that equation is: how does the agent take advantage of that standardization? That's where the agents, if you design them properly, are able to rationalize, reason and come up with autonomous decision-making. And that's kind of where the agentic, or A2A, protocol starts to take over. So anyway, that's kind of my feedback.
Speaker 1:Yeah, Sally, how does A2A start to take advantage of that standardization, and what is the value of that?
Speaker 3:Yeah. I mean, David did a great job of setting it up. A2A is very similar in some ways to MCP: it's not something that hasn't been done before, it's just being done in a standardized way. Say someone has a bunch of different agents working on different things: one agent is helping customers, and another agent is on the back end, sort of like the Atom AI we have at WWT. For some reason you want them to talk to each other, or one has a task that it might pass off to the other. The A2A protocol basically standardizes how those two agents would interact with one another. You might have a third agent, your sort of head agent, that you could talk to and say, hey, get this data from that agent, or have those two agents pass data to one another. So again, many engineers listening to this will think, oh, I had a setup like that before. This is just the way to standardize it.
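As a rough illustration of the A2A side, each agent publishes an "agent card," a small JSON document (conventionally served at /.well-known/agent.json) that a head agent can read to decide where to route a task. The card below is a sketch in Python; the field values are invented, and the exact schema should be checked against the A2A spec.

```python
# Sketch of an A2A agent card; all values here are hypothetical.
agent_card = {
    "name": "customer-support-agent",
    "description": "Answers customer questions and escalates order issues.",
    "url": "https://agents.example.com/support",  # endpoint peers send tasks to
    "version": "1.0.0",
    "capabilities": {"streaming": True},
    "skills": [
        {
            "id": "order-status",
            "name": "Order status lookup",
            "description": "Given an order ID, report its shipping status.",
        }
    ],
}

# A head agent would fetch cards like this from each peer, match the task to a
# skill, and hand the task off over HTTP using A2A's task messages.
print(agent_card["skills"][0]["name"])
```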
Speaker 1:Yeah, appreciate the mention of Atom AI. Just for those out there who may not be familiar, Atom AI is WWT's AI chatbot on WWT.com. It's available for all of our internal WWT users as well as our external registered users. So as you're listening to this, you could go to WWT.com and type in "what is MCP" or "what is A2A" and see what you get. David, Sally mentioned that some of these methodologies, while in use for a while, really started coming onto the scene just recently, I think you said November 2024, at least for MCP. Does that mean it's just now getting integrated into enterprise settings, or how are enterprises adopting these technologies?
Speaker 4:I think that, because generative AI has become such a natural way in which people interact with this technology, this is a natural advancement of that. And how do you take that natural advancement and scale it, right? With the push from NVIDIA on the agentic and the physical side, it becomes very, very interesting. I see MCP becoming the brain of the agentic aspects.
Speaker 4:Right now we're looking at maybe task automation, right? Say it's a customer service operation that's trying to automate: somebody calling you and trying to schedule travel, or looking at booking your tickets, things of that nature. Can I automate that? Well, let's take it a step further and start to look at real-time systems that actually work in an assembly line, or robots. How could MCP work there, in a real-time sense? So there's this aspect of how MCP is designed, and the scaling. I mean, if you just look at the Meta and the OpenAI releases, they're talking about thousands of MCP servers running simultaneously to direct information into the engine so that tasks can be automated. That's significant. So I think it's a natural stage of scaling, and that's what I'm really interested in. That's why I started looking at MCP.
Speaker 1:Yeah, Sally, it looked like you wanted to jump in there for a second.
Speaker 3:Oh, I was just agreeing with David when he said it was sort of a massive thing. It's massive both from a results perspective and in terms of what that scaling looks like. I think that's a really important question to ask, because from an engineering perspective, as someone who has built some baby MCP servers, it has come a long way in the past eight months for sure; things are improving. There are still limits in terms of scale: a server is going to be limited by the strength of the LLM you're using, because it might only be able to call so many servers at once. So there are a lot of interesting limiting factors when you're thinking about getting to the point David is talking about. But I think that's where the work has been going, now that the main buzz of MCP has begun to pass.
Speaker 4:And Sally, I had a question for you from an engineering standpoint. Obviously I'm sitting at the use case, development and ROI end: how does it benefit a customer to bring in an MCP-type solution? When I started looking at how Meta and Google and Azure are looking to architect it, some of the architectures are a little bit different from others. One of the questions I have is about the memory management of this context: how much memory do they want to allocate, how fast do they want to have that in memory, from an access standpoint, from an inferencing standpoint? What do you think about how that would be designed, in terms of dynamic and static context management?
Speaker 3:Yeah. I mean, I think the answer is: it really depends, right? It's going to depend on your use case and on your goals. And when I say on your use case, part of it is going to depend on how many servers you are calling and how many tools each server has, because the more you have, probably the more memory you're going to want. And again, you are really limited by your LLM in certain cases. So I'd almost argue, and I haven't looked too deeply into those architectures, that this might be a place where I would want to use A2A. Instead of trying to scale vertically, seeing how many tools and servers I could pack in, I would almost say, well, why can't we have one agent with this specific task and another agent with that specific task, and link it out more horizontally? So you're spreading out the tasks you're trying to do.
Speaker 4:It's like a distributed kind of approach. Yeah, yeah.
Speaker 3:Yeah. As an engineer, when you're dealing with "I have to keep increasing memory over and over again," whether you do it the way I suggested or another way, the first question is: how can I make what's a vertical scaling problem become horizontal?
Speaker 4:Yeah, I think that totally makes sense.
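Here is a toy sketch of the horizontal pattern Sally is describing: a host agent that routes each task to a small, specialized agent instead of one agent carrying every tool. The specialists are plain functions standing in for LLM-backed agents, and the keyword routing is a deliberate simplification; a real host would ask an LLM (or read A2A agent cards) to choose.

```python
from typing import Callable

# Stand-ins for LLM-backed specialist agents; names and logic are hypothetical.
def sql_agent(task: str) -> str:
    return f"[sql-agent] ran a query for: {task}"

def ticket_agent(task: str) -> str:
    return f"[ticket-agent] opened a ticket for: {task}"

SPECIALISTS: dict[str, Callable[[str], str]] = {
    "query": sql_agent,
    "ticket": ticket_agent,
}

def host_agent(task: str) -> str:
    """Route by keyword; a real host agent would reason over agent cards."""
    for keyword, agent in SPECIALISTS.items():
        if keyword in task.lower():
            return agent(task)
    return f"[host-agent] no specialist found for: {task}"

print(host_agent("Run a query for last month's outages"))
print(host_agent("Open a ticket for the failed switch"))
```

Each specialist stays small, so no single agent's context or memory has to grow with the whole workload, which is the vertical-to-horizontal shift Sally describes.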
Speaker 1:I'm hearing you both mention servers often. So how much more compute is necessary here? How much more cost is this going to add if we're talking about dozens, hundreds or thousands of servers needed to execute this strategy?
Speaker 3:So, to backtrack and make sure I clearly define what a server is in an MCP context: it's the package that contains all the connections to the tools. You might have a server that connects to a Postgres database, which is just a SQL database, and it might have different tools for different queries, different calls you want to run. It might have a general query tool, a specific query tool, an add-rows tool, whatever it is, and those tools would all be packed into one server.
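A sketch of that Postgres server might look like the following, again using the Python SDK's FastMCP plus the psycopg driver. The connection string, table handling and tools are placeholders; a production server would validate inputs and restrict what SQL can run.

```python
import psycopg  # assumes: pip install "mcp[cli]" psycopg
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("postgres")
DSN = "postgresql://user:pass@localhost:5432/mydb"  # placeholder credentials

@mcp.tool()
def run_query(sql: str) -> list:
    """General query tool: run a SQL statement and return the rows."""
    with psycopg.connect(DSN) as conn:
        return conn.execute(sql).fetchall()

@mcp.tool()
def add_row(table: str, name: str) -> str:
    """Add-rows tool (a real server would whitelist table names)."""
    with psycopg.connect(DSN) as conn:
        conn.execute(f"INSERT INTO {table} (name) VALUES (%s)", (name,))
    return f"inserted into {table}"

if __name__ == "__main__":
    mcp.run()
```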
Speaker 3:And so when I'm thinking about compute, there's the compute the server is using, and then there's also the compute the LLM is using in order to talk to all of these things. So it's a tricky thing to talk about, right? Yes, if you're running a bunch of different processes on a bunch of different servers, you're using all of that compute. But also, the more complicated a network you have, if you have a bunch of agents talking to each other, those are tons of LLM calls you're generating. And that is a downside of one of the solutions I proposed. I don't know if you have anything to add to that, David.
Speaker 4:What I would add, just to give a perspective: if you look at Atom, getting back to Atom, right, Atom is now connected into Salesforce. It's connected into ServiceNow. It's connected into all the other information repositories at Worldwide: our platform, our web pages, all of that, right? So the point Sally is making is that those are servers in a virtual sense, but they really don't take the type of compute that you may be thinking about.
Speaker 4:Where the compute really comes in, in my viewpoint, would be the memory allocation to actually generate the context. In this case, the vector databases where you're storing the vector representations of this information so you can quickly do a semantic search, like the Facebook similarity search (FAISS), as an example, and some of the other vector searches, your RAG applications. That's where the compute goes. And then when you look at a Blackwell architecture, why does a Blackwell architecture start to benefit? Because of all the networking capabilities, from GPU to GPU, rack to rack, all of those things. That's where the compute really would speed up the response of the system. Hopefully I'm making sense there, Sally. So that's kind of what I'm thinking.
Speaker 3:I think you said that far more specifically and eloquently than I did. That's a really good breakdown of where the compute is really coming in.
Speaker 3:But just to hammer on that point from what David is saying: the more servers and various integrations you're adding, that's where the compute is scaling up. I wouldn't say it's linear, but yes, the more integrations you have, because of all the things David mentioned, the more of those resources you're going to need to support your application, and that's where the size does balloon.
Speaker 1:Yeah, so I do want to talk a little bit more about integration, and David, feel free to weigh in as well. Where do MCP and A2A fit alongside the existing ways enterprise IT teams support AI? What does this mean for some of the API gateways we've been using? Does it sit alongside them? Does it replace them? Are we talking about something different, or is it a disruption?
Speaker 3:Yeah, I wouldn't say it's a disruption. Something I would encourage all enterprises to think about is that there's going to be some initial lift, because you likely already have agents that know how to call tools, and integrating an MCP server is going to rip out some of that. But that initial lift is like a down payment, basically, because once you have your agents running so that they can talk to other agents or call MCP servers, it's much, much easier to integrate from then on. So there's a high lift at first, but then I think it goes much better from there. And on the gateway end, if your gateway is calling an agent, that integration shouldn't be touched at all. So yeah, I would say if you're interested in integrating it, you should be prepared for the time cost up front, and then it'll save you tons of time on the back end.
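For a sense of why the follow-on integrations get cheap, here is a sketch of the client side using the official Python SDK: the agent host launches a server, discovers its tools, and calls one, all without bespoke glue code. The server script name and tool arguments refer back to the hypothetical inventory example earlier.

```python
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Launch any MCP server as a subprocess; "server.py" is a placeholder.
    params = StdioServerParameters(command="python", args=["server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()  # discover tools, don't hardcode
            print([tool.name for tool in tools.tools])
            result = await session.call_tool("check_stock", {"sku": "WIDGET-1"})
            print(result.content)

asyncio.run(main())
```

Swapping in a different server changes only the launch parameters; the discovery-and-call pattern stays the same, which is the reusable part of the down payment.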
Speaker 1:And up front, what needs to be taken into consideration by some of our client organizations, or any organization for that matter, looking to explore these methodologies?
Speaker 3:Yeah, so there are a few things. One, like I said before, you're going to have actual code-based changes, and that's going to take time; you want to rigorously test anything that's going into production. Another thing to be thoughtful about is security. These are very new practices, and Anthropic, who again is behind MCP, has been coming up with better ways of running servers: for example, servers that don't store your metadata, things like that.
Speaker 3:But there also have been vulnerabilities in certain servers, like the Slack one, so you would want to evaluate those things and make sure the tools you're implementing are actually authenticated properly. That's going to be a bit of a bigger lift up front as well. So in regards to that production-level bar of "we can actually put this out in public and it won't leak our secrets," there's going to be a little bit of a lift there too. But from when I first started thinking about MCP to now, it's already gotten a lot better, so I can only see that improving.
Speaker 2:This episode is supported by Glean. Glean is the work AI platform that connects and understands all your enterprise data to generate trusted answers and automate work grounded in company knowledge. Put AI to work at work.
Speaker 4:I do want to come back to the interoperability question, right? I mean, the reality of all of this, and Sally can chime in, is that legacy systems are going to be there. It's not like, just because we brought in MCP, suddenly we lose all the legacy information. So the way I see MCP, it's actually an advantage that it's a great interoperability platform: you can connect legacy information as well as brand-new information repositories together, and that, if designed correctly, can benefit the organization tremendously. My background is 30 years in healthcare, designing picture archival systems, quality management systems, enterprise document systems, all of these things. So when I look at MCP, I'm like, oh my God, how great would it be to connect all of this information, because at the end of the day it's all one ecosystem where everything has some relation to everything else.
Speaker 4:Just look at a clinical IT example in a healthcare organization. A patient walks in, and for some reason they're having a problem, right? So you get a diagnostic, or an image that then goes into the picture archive. Then the physician may ask for a blood test. Then there's an electronic health record, and a visitation appointment for what's happening next. Then there's a drug that is provided to them, so that's a pharmacy.
Speaker 4:So when you look at all of this, it's all disparate information, but it's all connected: to the patient, to the physician, to the administration, to the insurance. Well, if I had MCP, and if I structured the interoperability correctly, I could query all of that fairly quickly, where today it's very siloed and takes time. So MCP and agent-to-agent can really help bring not only structure and standardization but also the quickness, the automation, to get the patient the right medicine at the right time at the end of the day. So that's how I look at it.
Speaker 3:Yeah, I think your vision is the correct one. Maybe we could even dig into that a little bit, David, and talk through what that would look like as an example. You're talking about tons of different systems you might need to query, so you can think of each database as its own server with its own integration. Then, organizing on a larger level, you might have different agents with access to certain databases, and connect them all with one host agent that can talk to each one of them.
Speaker 4:Absolutely. So if you look at an electronic health record system like Epic, for example, it's still pretty siloed, even though information is pretty standardized within it right now. The way I see Epic, for example, you can search information, whether it's an x-ray, a blood test, a lab, genetics, all of that, right? That's the true promise of MCP, and then AI starts to come in. What if the physician, just before seeing the patient, runs a query and a prompt saying, hey, what is going on with this patient and what's their history? And immediately it brings back a very summarized answer: hey, you know, 10 years ago they had an accident, they came in, they did a blood test, these are their current stats. By the way, they have a wearable monitor, and so far we don't see any problems.
Speaker 4:So here are some suggestions, based on large image models already trained with that data. You're giving context to the model, and then you get the answer. And, by the way, here's a prescription already, here are the next steps, here is the next appointment, two months down the line. Wow. You're really starting to talk about efficiency there.
Speaker 3:Yes, yeah, exactly; that sounds like what I was thinking. I was even thinking more granularly: for each one of those tests you might have a different database server. So I was really digging into the architecture. But I think that's a great example of where we want gen AI to take us.
Speaker 1:Yeah, I love the real-world application there. I'm curious: is that an actual real-world implementation, David, that you've been working on with clients? Or maybe walk me through a little bit of what you're seeing in terms of adoption, where it actually sits in reality in terms of making its way into the enterprise.
Speaker 4:So I am seeing that, but in pieces. It's not all there yet, because healthcare has always been behind, purposely. They're very risk-averse in how they bring in technology, whereas the likes of a Meta or a ChatGPT are much faster, just because it's a free-flowing interaction with customers. Healthcare, and somewhere like the finance sector, are a little more risk-averse: they want to make sure the integration and the technology don't make mistakes in making a decision, especially with the agentic aspects of it. But I am seeing that progression; I would say it just needs maybe a little more time. And we are working with a lot of customers who are looking in that direction. Part of what our group, the AI advisory group, does is: how do you take generative and pre-generative AI to agentic and physical? How do you make those steps?
Speaker 4:And a lot of that comes back to one of the fundamental things: how good is your data? That's where it starts, right? How good are the tokens you are measuring and processing, and how good is your inference going to be? If your data quality is poor, everything you build on top of it suffers. And that's why, when you look at Atom at Worldwide, they've done such a great job, because the data is really good. The approach is already good, so you can build things on top of it, just like a foundation. So that's kind of my feedback.
Speaker 1:Yeah, I like how you say you're starting to see it more in the enterprise setting. I'm curious about those early adopters, David. Where are some of them finding bits and pieces of momentum, and where are they faltering a little bit? What are some of the warning signs that others should be looking out for?
Speaker 4:I think the places where they're really looking to use AI, one example is drug discovery: J&J, AbbVie, some of these companies. This is where they're really pushing, especially new chemistry and biochemistry designs, protein structure designs, drug formulations. They go through thousands and thousands of formulations, and in the past that would take months or years to develop. Now, with AI, especially with how quickly you can generate new ideas and designs, that's a great acceleration that's happening. Is it a full-blown process yet? No, but I think it's starting to get there, at least from a healthcare standpoint. Other areas I would point to are supply chain; the finance area, like fraud detection; and even telecommunications, with network operations centers, where Worldwide has taken a great lead, based on what I'm seeing. So those are some of the areas where I've seen how AI can truly be beneficial. Maybe Sally can add: what do you think are some of the other areas?
Speaker 3:Oh, I was just going to say: from my perspective, I work in management consulting at WWT, and I would agree 100% with those, especially communications and really anyone who's dealing with networks: predicting outages, that kind of thing, summarizing that data quickly. That is somewhere I have certainly seen an uptick of interest and potential adoptions.
Speaker 1:So I'm curious, from an engineering perspective: are there any warning signs, hiccups or lessons learned that clients looking to adopt MCP or A2A should be aware of, or watch out for, as they go along that journey?
Speaker 3:Yeah. So honestly, there's an eight-minute YouTube video about the MCP Slack vulnerabilities, and I would go check that out and see how there can be vulnerabilities in a server, especially if you didn't write it. One of the advantages is that you can take a server anybody has written and not have to write the code that integrates the tool you want yourself. The disadvantage is that if there's a vulnerability in that tool, you could be exposing data, and it wasn't even malicious, right? So the very first thing people should think about is security, and making sure they have the resources to deal with that correctly. That's the number one thing I would say.
Speaker 4:Sally, I think that's a great point, and that's where, you know, I've been in recent discussions with NVIDIA about using their kind of enterprise architecture framework. They provide the guardrails, they provide the validation models, they're providing the scalability elements: how to structure the storage and the models and all of those things. So you do need a safe environment where you can run this, to prevent what Sally is mentioning. Because, again, how do you know that this model and this architecture and framework are secure? That's one of the critical things you have to make sure of, and that means you need somebody, a third party, to validate it, or some sort of registry, so that you know it's safe. So I think it's a great point Sally is making.
Speaker 1:Yeah, I love the fact that you guys are bringing security into the conversation here. And we can bring security experts on to talk about this if we'd like, but just from your perspective, what security considerations should we be talking about? Is it back to the basics, you know, access and visibility and things like that? Or what types of security solutions are needed to make sure, David, to your point, that these are running in a safe environment?
Speaker 4:Well, I think you need to have the general security of making sure your data transactions are encrypted, in some cases, because a lot of the information you're trying to get responses about could be IP-related. You need to have guardrails so that information doesn't get pushed out into an area of discussion you're not really supposed to talk about, or where the responses could lead to other hacking efforts, right? So there are several types of security: at the basic level, your internet security, and then your prompt security, or your LLM security. Are the models in a valid state? How were they trained? What are the responses? All of those things need to be addressed holistically, as a whole. But yeah, I think that's what I'm looking at when I look at a framework.
Speaker 3:Beyond the vulnerability I was talking about, just to add on to that, there's also the integration perspective of security. To go back to David's healthcare database example: say a doctor has certain amounts of access, or different doctors have access to different databases based on what they need, something like that. At the server level, again, this is what's connecting to each database, each tool in this example. You'd want to make sure the correct authentications are leveraged for each server, and there are an increasing number of ways to do this. But it's something to think about when you're building: not just the security of authenticating to the LLM itself, but how you can make sure this person is authorized to access this particular tool, this particular server. So that's another consideration when you're integrating many different things.
Speaker 4:Yeah, that's a great point, Sally.
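One hedged sketch of the per-server authorization Sally describes: the tool itself checks the caller's grants before touching sensitive data. The user and grant scheme here is invented for illustration; a real deployment would lean on OAuth or whatever auth hooks the MCP framework provides, rather than passing a username as a tool argument.

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("records")

# Hypothetical grant table: which datasets each caller may touch.
ROLE_GRANTS = {"dr-smith": {"labs"}, "dr-jones": {"labs", "imaging"}}

def authorized(user: str, dataset: str) -> bool:
    """True only if this caller was granted access to this dataset."""
    return dataset in ROLE_GRANTS.get(user, set())

@mcp.tool()
def read_labs(user: str, patient_id: str) -> str:
    """Lab-results tool that refuses callers without the 'labs' grant."""
    if not authorized(user, "labs"):
        return "access denied"
    return f"lab results for patient {patient_id}"  # stand-in for a real DB call

if __name__ == "__main__":
    mcp.run()
```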
Speaker 1:Yeah, and what brought that up, Sally, was your initial point about vulnerabilities: if you're not writing the server, you're not exactly sure what the vulnerability might be. That makes me think: are we going to have a whole kind of server economy here, where a lot of vendors or solution providers are writing their own servers? Is that going to be something that pops up, or is it already a thing?
Speaker 3:It's there, and there are a lot on the internet. I've used a lot of them too, and like I said, there have been changes, so that now they're not storing any of your metadata. So I would say that if a server is verified to follow all the Model Context Protocol rules, it's safer to use now than it was even a few months ago. So yes, there already is an economy. But at the same time, something else I've seen, and this is almost a segue: there might be a server to interact with a specific thing, but it doesn't have the tools I need. So I'm either going to need to use the tools it already has, or add to the server and customize it a little bit, something like that. So I think there's a lot of flexibility and variability around that.
Speaker 4:And Sally, one thing I wanted to get your perspective on: obviously, there are going to be some sort of standards that start coming up on the security side, right? But I think explainability, especially with Atom, how it builds citations to specific information and pushes that back, is a secure design to me. Because if I just put in a prompt and say, hey, what's the answer, and it spits out an answer without any explainability, I immediately know I need to apply more scrutiny. But if I had explainability saying, hey, here's my answer, these are the reasons I came up with it, and here's the citation to specific articles or the database or a customer ID or whatever it may be, I think that builds more secure elements into how I've designed the MCP, for that matter, right?
Speaker 3:Yeah, I think so. I think explainability is a really important part of it. And again, you were the one to bring up guardrails, which are super important too: how do I know that I'm not getting prompt injections from this server I found, those kinds of things? And maybe, honestly, a way to test servers before you plug them into prod solutions or prod environments is to silo them: put one in a Docker container or some other environment of your choice, make sure it can't touch anything, and test it out.
Speaker 1:Well, recognizing this is still relatively new: do most organizations have the people to manage this, or are they going to need to train or hire? Or is it something that, potentially, AI can do for itself?
Speaker 4:At least in my article, that was one of the big challenges, honestly. The architecture, the engines, the concepts and the frameworks are there, but you also need the appropriate engineering teams that can put this together. They need to understand: what does context management mean? What does a vector database mean? How do I apply servers, how do I structure the servers, how do I implement agent-to-agent distributed approaches? You need to have the right workforce that understands these concepts to be able to pull this off. So it is a challenge that companies will face as this starts to progress, in my viewpoint.
Speaker 3:Yeah, I mean, I think the solution to this is just to clone me several thousand times and hire me at every org. I will say, though, on a more personal note: as a machine learning engineer, I started as a data scientist, and I think that's a pretty common path. There are definitely many cases where a pure data scientist, someone who just really knows models and algorithms well, is super undervalued. But you are seeing more of a shift toward many data scientists getting better at the engineering side of things, and many roles looking for someone who can straddle that line. I think that transition will also help with what David's talking about. Again, it's not for everyone, but right now what I do in MLOps is very niche, and the less niche it becomes, the better prepared the workforce will be to deal with this kind of thing.
Speaker 1:Yeah, what about moving forward? What's to say there's not going to be another DeepSeek moment, or something similar, that would put this all behind us? Is this, MCP and agent-to-agent, all going to be relevant when the next big rollout comes around, or is this going to be a consistent part of our future and strategy?
Speaker 3:It's kind of hard to say. I think the reason it might stick around, and I'd be interested in David's perspective, is that it's not truly a model, right? It's not like DeepSeek, a model and a different way of making an LLM. It's just a protocol, just a standardization. So it would take someone coming up with a better standardization and then also generating the same kind of hype. One of the things making MCP and A2A successful so far is adoption, right? That's the soft part you have to get right to make a standard work, because what's the point of having a standard if no one's using it? So from my perspective, I think that's a huge hurdle. But I could certainly see things coming out, I don't know what, that take away the hype and make it more a question of how we integrate the new thing with systems that already have these protocols.
Speaker 4:Yeah, I would agree with you, Sally. I would say MCP will actually mature a lot. And we can see it, right: you had the basic internet, then you started to have websites, then we started moving to artificial intelligence, and then AlexNet came in 2012, and that made a big shift because of computer vision and how it saw things. Now we are at MCP, after large language models. So what I would say is that it's only going to mature and become more and more sophisticated and intelligent. But there will definitely be, in my viewpoint, a move in the direction of what they call physical AI, which we've been talking about.
Speaker 4:And then the question really becomes: how do we advance sensors, for example, taking raw information that you're seeing or hearing or visualizing, or that you're sensing, touch, smell, all of that type of raw physical data? How do you channel that to the likes of a large language model? How do you interpret that information? That will become key for AI to really succeed at that next level, and it remains to be seen. But that's where I see the big transition, and I just don't know how quickly it's going to happen. And then you can also look at the quantum computing aspect: how do you really scale that? That's going to be the interesting part. Once quantum compute gets sophisticated and gets implemented into AI, then what happens? That's to be seen.
Speaker 1:Lots to consider moving forward. I know we're running short on time, David and Sally. Thank you both so much for taking time out of your schedules today to walk us through what I know is top of mind for many of our listeners out there. So thank you for joining.
Speaker 4:Sure, absolutely.
Speaker 3:My pleasure. And I'd just like to give a really quick shout-out to our intern, Vasu Khanna, who taught me a lot about A2A and wrote a great article about it on WWT's website, so check it out.
Speaker 1:Absolutely. Okay, before we sign off, there are three things I hope stick with you from this conversation. First, the handshake matters more than the hype. Think of it this way: MCP puts every scrap of data and every tool your company owns into the same-shaped box. The A2A protocol is the courier service that moves those boxes between agents. When those two standards click, you don't rewrite integrations; you just pick a box, pick a courier and let the work move.
Speaker 1:Second, security is a day-zero decision. Sally's warning was blunt: an unvetted MCP server is basically a vending machine for your crown-jewel data. Encrypt every hop, sandbox every server and log every handoff. And third, talent is the real throttle. The frameworks are here; what most firms lack are the engineers who can wire them together. So start small, one use case, one KPI, and give your teams time to learn the new plumbing. That's it for the AI Proving Ground podcast this week. This episode was co-produced by Naz Baker, Mallory Schaffran and Cara Kuhn. Our audio and video engineer is John Knobloch, and my name is Brian Felt. Thanks for listening, and we'll see you next time.