Digital Transformation & AI for Humans

S1:Ep81 AI Evolution: Theory & Practice Behind Sustainable and Responsible Implementation

Prakash Senghani Season 1 Episode 81

In this episode, we are diving into the evolution of AI with my brilliant guest from Dubai, UAE - Prakash Senghani. Together, we’ll explore the journey from LLMs to Agentic Solutions to the Model Context Protocol, and take a closer look at the theory and practice behind sustainable and responsible implementation.

Prakash is a digital construction thought leader, startup founder, early-stage investor, board advisor, Co-Founder & CEO at Navatech (Abu Dhabi, UAE) and General Partner of Empede Capital. 

I’m honored to have Prakash as a part of the Diamond Executive Circle of the AI Game Changers Club - an elite tribe of visionary leaders redefining the rules and shaping the future of human–AI synergy.

🔑 Key topics covered:

  • From Tools to Agents: The turning point in AI evolution and leadership mindset for 2025–2030
  • Agentic AI Myths: How to integrate next-gen AI responsibly without losing control or clarity
  • Rise of MCP: How the Model Context Protocol redefines building, scaling, and teamwork
  • Implementation Traps: Common mistakes when moving from automation to autonomous AI agents
  • Learning from Failure: Real cases where agentic AI fell short and lessons for human-compatible design
  • Ethical Scaling: What leadership and governance are needed to responsibly expand MCP in fast markets
  • AI Operating Philosophy: Core principles for sustainable, values-driven adoption beyond policy
  • Future-Proof AI Strategy: Practical advice for meaningful, scalable, responsible implementation
  • Unlearning for the AI Era: Shifting outdated mindsets to thrive with intelligent, adaptive systems

🎧 Tune in now to discover how to lead, build, and innovate with greater alignment, humanity, and intent.

📩 Ready to grow as a leader or explore AI-human collaboration?
Join us at AI Game Changers Club – the global movement for visionary executives, entrepreneurs, and investors.

🔗 Connect with Prakash Senghani on LinkedIn: https://www.linkedin.com/in/prakash-senghani/

🌏 https://www.navatech.ai

Support the show


About the host, Emi Olausson Fourounjieva
With over 20 years in IT, digital transformation, business growth & leadership, Emi specializes in turning challenges into opportunities for business expansion and personal well-being.
Her contributions have shaped success stories for corporations and individuals alike, from driving digital growth, managing resources, and leading teams in large companies to empowering leaders to unlock their inner power and succeed in this era of transformation.

AI GAME CHANGERS CLUB: http://aigamechangers.io/
Apply to become a member:
http://aigamechangers.club/

📚 Get your AI Leadership Compass: Unlocking Business Growth & Innovation: https://www.amazon.com/dp/B0DNBJ92RP

📆 Book a free Strategy Call with Emi

🔗 Connect with Emi on LinkedIn
🌏 Learn more: https://digitaltransformation4humans.com/
📧 Subscribe to the newsletter on LinkedIn: Transformation for Leaders

SPEAKER_01:

Hello and welcome to Digital Transformation and AI for Humans with your host, Emi. In this podcast, we delve into how technology intersects with leadership, innovation, and most importantly, the human spirit. Each episode features visionary leaders who understand that at the heart of success is the human touch, nurturing a winning mindset, fostering emotional intelligence, and building resilient teams. In today's episode, we are diving into the evolution of artificial intelligence with my brilliant guest from Dubai, UAE - Prakash Senghani. Together, we'll explore the journey from LLMs to agentic solutions to the Model Context Protocol, and take a closer look at the theory and practice behind sustainable and responsible implementation. Prakash is a digital construction thought leader, startup founder, early-stage investor, board advisor, co-founder and CEO at Navatech, and general partner of Empede Capital. I'm honored to have Prakash as a part of the executive circle of the AI Game Changers Club, an elite tribe of visionary leaders redefining the rules and shaping the future of human-AI synergy. Welcome, Prakash. It's a great pleasure to have you here in this studio.

SPEAKER_00:

Thank you so much. And the pleasure is all mine.

SPEAKER_01:

Let's start the conversation and transform not just our technologies, but our ways of thinking and leading. If you are interested in connecting or collaborating, you can find more information in the description. And don't forget to subscribe for more powerful episodes. And if you are a leader, business owner or investor ready to adapt, thrive, and lead with clarity, purpose, and wisdom in the era of AI, I would love to invite you to learn more about AI Game Changers, a global elite hub for visionary trailblazers and change makers shaping the future. Prakash, I've been waiting for this conversation for such a long time. And today we are going to dive into what really matters. And I'm sure that our listeners and viewers will not only learn something new, but also get many insights. Let's start with your story. I would love to hear more about yourself, about your journey, and everything you would like to share with us.

SPEAKER_00:

Yeah, so my journey started as a civil engineer. I've got a master's in civil engineering, and I was born and brought up in the UK. Being part of the construction industry, I guess that's where I understood that the role of a construction professional is a lot to do with managing people, right? It's understanding what drives people; how people react and do things has a lot to do with what's happening in their lives. And in order to motivate them to succeed, you need to understand who they are as people. Although I graduated as a civil engineer, most of my career was spent doing digital transformation - looking at how the construction industry can adopt technology to make it safer, more productive, and improve quality. And even in that space, although digital transformation was the title of my role, most of that time - I would say 80% of it - was spent looking at people and how those people can adapt and utilize the technologies to get some of the improvements we were aiming for. I was really fortunate to also have been given the opportunity to travel around the world. I worked in India, I've done a little bit in Indonesia, and then for the last 12 years or so, I've been here in the Middle East, based out of Dubai. And having worked in these different places, you get to understand how people's perceptions of things differ, right? People's perception of risk is different. People's perception of technology, how to use it and how to adopt it, is different. And I think these unique elements of my career have really helped with what I'm doing now in running Navatech, where we've developed AI tools that put frontline workers first. We want to make sure that people who might ordinarily be passed by technologies like AI get access to this technology, understand it, and can interact with business systems using AI as the main interface.

SPEAKER_01:

Amazing. Thank you so much for sharing your story. And I agree, working with different cultures, seeing different places, and working with different structures helps you see the opportunities and find those common connectors, in order to enrich your product and create something truly amazing. In your view, what marks the true turning point in this evolution, and what mindset and skills should leaders adopt to stay ahead of it?

SPEAKER_00:

So I think you're right that the technology has rapidly moved on from some of the basic things that LLMs do, where they became really good at writing, right? A basic LLM, as I think most people know, is essentially a really good prediction engine. You ask it a question, it predicts what the first word of the response would be. Then, based on the question and the first word, it predicts the second word, and then the third, et cetera. And unfortunately, most people are still only using it to do that. I guess there's probably a lack of education about what it can do now that we've moved to agentic systems. So the true turning point was that we went from being able to create better emails, write better responses, or create content for our social media posts, to what we call agents that are capable of taking autonomous actions over time. They're able to have a memory, they can remember what was said previously, and they can act autonomously to go and fetch information from the internet and other sources. That evolution introduces persistence, planning, and purpose. This changes the way we use LLMs - not just as a passive tool that gives us things, but as a true collaborator. We can start working with it on meaningful things; the agent is going to do meaningful things for you and with you. And the more we let AI operate across multiple steps, carry out actions, and get feedback on how it can improve, the more autonomous we can allow these agents to become. So as we enter this new phase, where we need alignment, control, and obviously protocols, you have to apply leadership over it to make sure you understand what it's going to do. As soon as you start handing some of this autonomy, some of this agency, over to the agents, you need to really understand what the boundaries are and how they are going to act and react, so that you can predict what's going to come out and be sure of the responses and actions those agents are going to take.
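
To make the "persistence, planning, and purpose" idea more tangible, here is a minimal, hypothetical Python sketch of such an agent loop. The model call and the weather tool are invented stubs, not any real API; the point is only the shape described above - memory that carries across steps, a loop that can take several actions, and a stop once the goal is met.

```python
from dataclasses import dataclass, field

def fake_llm(prompt: str) -> str:
    """Stand-in for a real model call: first asks for a tool, then gives a final answer."""
    if "observed:" not in prompt:
        return "TOOL:get_weather"        # the model decides it needs external data
    return "FINAL: no rain is forecast, the pour can go ahead."

def get_weather(city: str = "Dubai") -> str:
    """Stub tool; a real agent would call a weather API here."""
    return f"Forecast for {city}: clear, 34C, no precipitation."

@dataclass
class Agent:
    goal: str
    memory: list[str] = field(default_factory=list)    # persistence: what has happened so far

    def run(self, max_steps: int = 5) -> str:
        for _ in range(max_steps):                     # planning: act over multiple steps
            prompt = self.goal + "\n" + "\n".join(self.memory)
            decision = fake_llm(prompt)
            if decision.startswith("TOOL:"):           # autonomy: fetch information itself
                self.memory.append("observed: " + get_weather())
            else:
                return decision                        # purpose: stop once the goal is answered
        return "Stopped: step budget exhausted."

print(Agent(goal="Decide whether tomorrow's concrete pour should go ahead.").run())
```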

SPEAKER_01:

I couldn't agree more. And just to dive a little bit deeper, I want to ask you about the biggest misconception around agentic AI systems, and how businesses can practically prepare for their integration without losing control, clarity, or accountability.

SPEAKER_00:

So, linked to what we were just talking about, I think the biggest myth is that agentic systems are just super-duper LLMs, or LLMs on steroids. They're absolutely not. Agents are usually goal-oriented. Agents are designed to do something specific and particular, and it's more than just giving you a text-based response or a generated image. They are systems that act as an interface, sometimes an interface to external tools. So you can connect them to things like APIs - and we'll talk about this in a second - but you can also start connecting them to other agents, and we're even starting to see agents being connected to robots, so to external physical environments, right? You're now starting to see this crossover, where AI and LLMs that used to be very computer-based, whether on your phone or your PC, are moving into the physical world for you to interact with. In terms of practical steps and how to prepare, as organizations you probably need to start assessing whether the organization is ready for agentic systems. Do you have the right culture in place, the right protocols in terms of safety and guidance, for your employees to be able to use this in a safe way? One thing I'm starting to see is organizations creating sandbox environments for safe testing and experimentation with some of these agents. If you're asking your agent to place orders on your behalf, you want to make sure it's not going to over-order things or repetitively place the same order, right? To bring it back to what we're doing in construction, people talk about giving an agent autonomy to order a concrete pour, and depending on the weather, it then cancels it and rebooks it for another day. Now imagine there was some glitch or some misunderstanding with the agent, and it ended up ordering that same delivery of concrete three times. Obviously there are real-world implications, both in terms of cost and environmental impact - that concrete would go to waste, and there's a whole bunch of other things. So you have to be really careful, and creating some of these safe spaces to do the testing and to build trust is really important. The other part is being able to have cutoffs and kill switches with agents, in case they do end up doing things wrong - putting safeguards in place in the expectation that the agent might go awry and do something you don't expect it to do. So these are some of the things I'm starting to see, and some of the safeguards I think we as organizations need to start looking at.
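
As a rough illustration of the sandbox-and-kill-switch safeguards described above, here is a small hypothetical sketch; the class, the limits, and the order items are invented for illustration and are not any particular product's API. The idea is simply that every order an agent proposes passes through a pre-action check that can block duplicates, cap spend, and be halted entirely by a human.

```python
from datetime import date

class OrderGuardrail:
    """Hypothetical pre-action check an agent must pass before placing a real order."""

    def __init__(self, max_orders_per_day: int = 1, max_cost: float = 10_000.0):
        self.max_orders_per_day = max_orders_per_day
        self.max_cost = max_cost
        self.log: list[tuple[date, str, float]] = []
        self.kill_switch = False                       # a human can flip this at any time

    def approve(self, item: str, cost: float) -> bool:
        if self.kill_switch:
            return False                               # hard stop, regardless of the request
        same_item_today = [o for o in self.log if o[0] == date.today() and o[1] == item]
        if len(same_item_today) >= self.max_orders_per_day:
            return False                               # blocks the "three concrete pours" glitch
        if cost > self.max_cost:
            return False                               # too expensive: leave it to a human
        self.log.append((date.today(), item, cost))
        return True

guard = OrderGuardrail()
print(guard.approve("concrete pour", 4_500.0))   # True  - first order of the day
print(guard.approve("concrete pour", 4_500.0))   # False - duplicate blocked
guard.kill_switch = True
print(guard.approve("scaffolding", 900.0))       # False - everything halted by the kill switch
```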

SPEAKER_01:

I like that you highlighted that the culture needs to be in place among all the other things and technologies. And of course, it is important to test it before you implement it in real life, because otherwise the implications might be so significant that it could even be dangerous. And too much concrete is, I believe, not the worst-case scenario of what might happen when you trust your agents blindly.

SPEAKER_00:

Yeah. But I think the culture - you're right, the culture is really important - also goes both ways, right? Most organizations, if they're going to be looking at deploying agents and agentic workflows, are probably going to be spending some amount of money and other resources to put this in place. So if you don't have a culture that's ready to adopt it in the first place, that's one element of it as well, right? You need the right culture, where people want to experiment, want to innovate, want to look at different things. And then you've also got to have the culture where people understand the limitations of what the agent can do and some of the things that could go wrong. So of all of the things, I think the culture is definitely one of the most important elements that you need to understand and work on getting right.

SPEAKER_01:

Absolutely, it goes both ways. And I love that you highlighted it and explained it a little further, because sometimes it tends to become just a buzzword, and leaders try to bypass the topic because it is seen as something one-sided. But it is deeper than that, and it has a huge impact on the outcomes. That's why it is important to understand that it's not that straightforward, and it is more powerful than we might like to think. But let's talk about the emerging MCP. How will it change how we build, scale, and collaborate with AI across industries?

SPEAKER_00:

So I think this is probably going to be one of the most fundamental changes we've seen in AI since it started coming into the public consciousness in, what, the back end of 2022. MCP is the Model Context Protocol, and sometimes you'll also hear about agent-to-agent, A2A, or multi-agent context protocols. It's essentially a set of standards that allows different agents to talk to each other, share context, and sometimes even share memory. You can imagine this becoming really, really powerful. Think about agents that are designed to do one task - and usually they're designed to do that one task really, really well - and that task might have multiple steps. Now, with MCP, you can get agents that talk to other agents. So they can do things in different systems, outside of the confines of where they've been set up, and they can pass tasks from one to another. Some people draw the analogy that MCPs are like the APIs for LLMs. I think that's a little bit short-sighted; it's not quite like that, and it's a lot more powerful than that. You can get multi-agent systems that can do multiple different things: agents that are really specialized in one thing, others specialized in something else, and then orchestration agents that sit over these multiple agents and work with the inputs and outcomes from each of them. That's not currently possible with straightforward LLMs, and it's very difficult to do with a single agent. So I think this move into a multi-agent environment is extremely powerful. It also allows us to use different models for different things. We've seen all these latest models coming out recently, like Grok and the latest GPT models, and as these models evolve, they're going to become better and better at certain things. Even within ChatGPT you've got this menu of models you can access, because some are good at reasoning, some are good at creating images, some are good at giving you summaries. Once you've got that, you can create agents that utilize these different models and then bring them together to carry out tasks that are much more complex and nuanced. The way I like to describe MCP is that it's very akin to building a human team. You have a manager, then the next level down, and then workers, and each of them has certain skills and specialisms, and the manager is there to coordinate and orchestrate between them to get an output. If I apply it to some of the things we're doing - we're actually experimenting with MCP in construction and in safety, where we have to create multiple documents for regulation, for communication, and so on - what we're trying to do is look at how we can create agents that are really good at specific tasks. So you might have an agent that's really, really good at understanding how to work with ladders, another that's really good at scaffolding, another that's really good at excavation, et cetera.
And then you'll have an orchestration agent that sits over the top and is really good at writing method statements or doing a risk assessment. So you can talk to one agent and say, write me a risk assessment for doing this piece of work. It will then go to the relevant expert agents, they will produce their content, the orchestration agent will look at each of them and ask questions of the others to make sure it's all right, and then finally produce an output that is contextualized, gives you the correct, validated information, and is of a high quality. That's one very simple example of how MCPs can be applied, and we're experimenting with some of these. There are lots of other things - you could use it as a planning agent, you could use it for vision interpretation, there's a whole bunch of possibilities. Without MCP, these agents would probably operate in silos, and sometimes it would be difficult to achieve a greater goal, to do something genuinely useful. With a single agent you're probably limited, or your prompts would get so complex that there would be edge cases where it would fall down. So using MCPs, I think, is going to be a great expansion of what LLMs can do.
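
MCP itself is a protocol specification rather than a library, so the snippet below does not implement it; it is only a minimal, hypothetical sketch of the orchestrator-over-specialists pattern described above, with invented agent names and a faked model call, to show how one coordinating agent might fan a request out to expert agents and merge their answers.

```python
from dataclasses import dataclass

@dataclass
class SpecialistAgent:
    """Stand-in for a specialist agent; 'answer' fakes the underlying model call."""
    speciality: str

    def answer(self, task: str) -> str:
        return f"[{self.speciality}] guidance for: {task}"

class Orchestrator:
    """Routes a request to the relevant specialists and assembles a single output."""

    def __init__(self, specialists: dict[str, SpecialistAgent]):
        self.specialists = specialists

    def risk_assessment(self, task: str) -> str:
        relevant = [a for key, a in self.specialists.items() if key in task.lower()]
        sections = [agent.answer(task) for agent in relevant]      # fan out to the experts
        return "RISK ASSESSMENT\n" + "\n".join(sections)           # merge into one document

team = Orchestrator({
    "ladder": SpecialistAgent("ladders"),
    "scaffold": SpecialistAgent("scaffolding"),
    "excavation": SpecialistAgent("excavation"),
})
print(team.risk_assessment("Install scaffold and ladder access for level 3"))
```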

SPEAKER_01:

Sounds fantastic. What an amazing world we're living in. Just a few years ago, we couldn't even dream about all this, and today we are discussing such topics, and they are also being applied in reality. By the way, you mentioned that you are applying this in your business. So I wonder if you have a success story around positive return on investment from applying these technologies in real business, in real life.

SPEAKER_00:

So we're still at the experimental stage with things like MCP, and I think there are a couple of risks with it as well. Because it's so new, some of the security protocols are not well established. Being able to ensure that the data flowing from one agent to another is controlled, and what it can do back in your own systems, is hard - you can potentially, inadvertently, expose your internal systems because of the way these MCPs interact, especially because some of these agents have a little bit of autonomy. So I guess at the moment, the ROI for us is looking at how we can enhance our product lines and also at running internal processes more efficiently. But currently I think it's a bit too early to point to any definitive ROI. For us, anyway, it's very much at the experimental stage.

SPEAKER_01:

I love this, and I totally understand and support the approach where it's better to be on the safe side and test it a little bit longer. And of course, the topic is quite new in our reality, in the way we're running business. So obviously, it will take a little bit more time, but once it is applied to real cases, it is going to be fantastic, and we are mitigating our risks along the journey as well. And to all our listeners and viewers, if this conversation sparks something for you, hit like, follow, or subscribe, and share it with one person you know would be inspired by this episode. Sharing is caring. Prakash, in your experience, what are the most common mistakes companies make when trying to implement next-gen AI, especially when jumping from traditional automation to more autonomous agents? We just mentioned that sometimes it is smart to slow down and not rush into the unknown, but at the same time, I would like to hear more about it.

SPEAKER_00:

So I think one of them is giving the agents too much trust too early. Organizations sometimes skip some of the things we talked about right at the beginning: understanding what the governance models are, understanding the culture in the organization, setting yourself up with sandbox environments, thoroughly testing what the agents can do and also what they shouldn't be able to do, and putting some of these guardrails in. So that's one thing organizations should definitely watch - don't jump too quickly into trusting agents in a rush to deploy them. The way I like to describe it is this: if you've got an intern coming into the organization, someone with relatively little experience who's just come out of university, you wouldn't have them making the most key decisions in your business, right? Particularly ones that could impact revenue, safety, or even relationships with your clients. You'd have them shadow somebody else, train them, and then slowly start giving them responsibility over time. Once you've established that trust, then you let them do more and more and give them more responsibility. I think we should be looking at agents in very much the same way, where we essentially treat them as if they were interns and drip-feed them levels of responsibility over time, so that you can build that trust yourself, but also start understanding where the agents are going to be useful and where they're not, and where they could potentially create harm and introduce unwanted or unrecognized risk into your organization. Some of the other mistakes are things like having no context management layer. To be able to understand and improve agents, you need a layer of context management so you can check what the agents are producing. If it is wrong, you've got a feedback mechanism to feed that back in, and the agents learn from those corrections. A lot of organizations treat agents as they do LLMs, like a one-shot: you give it something, you get something back, and then you blindly take and trust that. More and more people are starting to understand that just as LLMs hallucinate, agents can also hallucinate and misinterpret things. So make sure you've got protocols in place to identify those cases, and more importantly, to reinforce the agents and the underlying LLMs so that those things don't keep happening. In terms of recommendations, similar to what we were talking about, you start supervised, keeping the human in the loop at every single stage; you don't go to full autonomy straight away. It's similar to what we're seeing with driverless cars: you start with a driver in the seat, then the driver moves into the passenger seat while making sure they still have control over the vehicle if they need it, and eventually we'll get to - I think it's level five autonomy - where the cars drive themselves. In exactly the same way, we should be looking at agents within our organizations, where we start with the human in the driving seat, watching what the agent can do.
Then you're probably sitting next to it, almost like a co-pilot - not to be mistaken with Microsoft's product - letting it work alongside you, and then you start giving it autonomy over time and letting it do things by itself. The other thing I've been reading about is creating agent identities and role protocols. Going back to my previous analogy about treating it as if it were an employee, starting as an intern: as it develops better skills, or you develop more trust in it, you almost give it a promotion, right? So you understand the role it has to play and give it protocols it can work within. And as soon as it goes outside of those guidelines or protocols, there are mechanisms to stop it, or it has to ask for permission, to make sure it doesn't go beyond those guidelines.
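
A rough, hypothetical sketch of that "intern to trusted employee" role protocol might look like the following; the autonomy levels, the action names, and the approval hook are all invented for illustration. The point is that permission is decided by the agent's current role, risky actions escalate to a human, and a "promotion" is just raising the level once trust has been earned.

```python
from enum import Enum

class Autonomy(Enum):
    INTERN = 1      # every action needs explicit human approval
    SUPERVISED = 2  # low-risk actions allowed, high-risk actions escalated
    TRUSTED = 3     # acts alone within its role, still logged

HIGH_RISK = {"place_order", "send_to_client", "delete_record"}

def authorize(action: str, level: Autonomy, human_approves=lambda a: False) -> bool:
    """Hypothetical role protocol: decide whether the agent may act, escalate, or stop."""
    if level is Autonomy.INTERN:
        return human_approves(action)                  # human in the loop for everything
    if level is Autonomy.SUPERVISED and action in HIGH_RISK:
        return human_approves(action)                  # escalate only the risky actions
    return True                                        # within the agreed boundaries

# "Promotion" is just raising the level once trust has been earned.
print(authorize("draft_report", Autonomy.SUPERVISED))   # True, low risk
print(authorize("place_order", Autonomy.SUPERVISED))    # False, escalated and declined
print(authorize("place_order", Autonomy.TRUSTED))       # True, promoted agent
```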

SPEAKER_01:

It reminded me of another episode, with Dr. Victor Monga from California, where we were discussing cybersecurity and the zero-trust journey. It is exactly the same logic: you really need to start with something simple, something that can't cause too much damage, and then move on from there, developing together and co-creating along your roadmap. But I wonder, what's one project or scenario you've seen where agentic AI didn't work as expected, and what lessons can we take from that to design more sustainable, human-compatible systems?

SPEAKER_00:

So that earlier example about ordering concrete - giving an agent autonomy to make decisions about when to order concrete - came from organizations I've seen starting to experiment with this. One of the use cases was that the agent was designed to look at the weather, and if the weather went beyond the minimum requirements - if the temperature was too high, or precipitation was expected that day - it would cancel the order and then rebook it for when the weather was likely to be better. And what we saw was that a level of discretion wasn't there. If the right conditions were not met at the exact time the concrete was supposed to arrive, it wasn't contextual - it didn't have enough context to say, okay, in an hour's time it's going to be fine, so let the delivery come and we can still do the pour, right? So there are these nuances, and again, this is learning; it's a very simple example of how we build it up over time. And luckily this was a sandbox environment, so it was able to learn - or rather, the humans were able to teach it some of these edge cases and nuances - so that it can start making decisions like a human would. Human decision-making isn't finite; it's based on a lot of nuance and heuristics, and trying to teach agents all of that, which we as humans have picked up over a lifetime, is going to take a long, long time. The downside is that by doing that, we also start embedding our own biases into the agents, right? They'll start inheriting some of the biases that exist because of our own experiences. So I think there's a really fine balance to be struck between teaching AI agents how to navigate the real world and not enforcing our own biases onto them. We've seen it in LLMs as well, where the responses come back and, because they're based off things from the internet, they start getting misogynistic or racist or whatever it may be. And with agents, we've got a risk that the behaviors we exhibit, or have been used to acting out, are now going to start becoming embedded into agents. So there's a fine line, I think, and I don't know what the answer is on how we do that, but we just have to be really conscious that we don't end up going down that route.
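
The lesson from that failure can be shown in a few lines. Below is a hypothetical sketch (the forecast data and function names are invented) contrasting the agent's point-in-time check with a window-based check that captures the human "wait an hour and still pour" judgment.

```python
# Hypothetical hourly rain forecast around the booked delivery slot (hour offset -> rain expected?)
forecast = {0: True, 1: False, 2: False, 3: False}

def naive_check(hour: int = 0) -> bool:
    """What the over-literal agent effectively did: only look at the exact delivery hour."""
    return not forecast[hour]

def contextual_check(tolerance_hours: int = 2) -> bool:
    """A more human-like rule: is there a workable slot anywhere within an acceptable window?"""
    return any(not rain for hour, rain in forecast.items() if hour <= tolerance_hours)

print(naive_check())          # False -> the agent cancels (and later re-orders) the pour
print(contextual_check())     # True  -> a human would wait an hour and still pour
```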

SPEAKER_01:

This is so interesting. But what do you think is the most dangerous bias from a long-term perspective?

SPEAKER_00:

Dangerous? That's a good question. I guess if we transfer some of our insecurities into the AI agents - where we limit what the possibilities are because of our own fears, our own personal insecurities - then those are not necessarily dangerous, but they'll end up limiting what's possible from the AI agents, right? So again, going back to balance, I think there's a real balance to be struck in making sure these AI agents are useful and productive and doing things that can really help us, while at the same time not limiting what they can do just because we can't think of it, because we can't imagine what the possibilities are.

SPEAKER_01:

You're right. This aspect is definitely going to define whether we're going to make it or break it in the new stage of development. But coming back to the leadership topic, what kind of leadership is needed to ethically scale MCP, especially in regions moving fast like the UAE? And what should leaders stop doing? What should they lean into?

SPEAKER_00:

So I think this is true for all technologies, not just for AI, though it's maybe exacerbated with AI agents and MCPs: just chasing the novelty factor. There's a real risk that leaders will say, another company or our competitors are doing this, everyone's looking at this, we need to do something with agentic AI, we need to do something with MCPs, just because everyone else is talking about it. Understanding your business, and understanding where some of these tools could be applied and utilized, is really, really important - as is understanding, like we said, your organization's capacity and culture to accept, absorb, and utilize these tools. AI could quickly turn into an expensive mistake if you're not careful and really purposeful about what and how you deploy. And I always compare AI to the digital transformation that's been happening for the last few decades, where there'd be some new shiny technology or digital tool, and people would want to press ahead and deploy it just so they could tell their leadership or their stakeholders that they're doing something in this space. The hype around AI is starting to create the same risk, where leaders jump in with this fear of missing out and want to do something for the sake of doing something. And I think that's really, really dangerous, because in those scenarios you'll usually skip some of the things we talked about - setting up the framework, setting up procedures, creating sandboxes - purely from the desire to get something done and start showing results. The other thing, as we were saying before, is to start treating agents as if they're junior members of your team - literally like an intern or a graduate - who will need hand-holding, integrating, and an understanding of your business, how you work, and what you do. All of that with oversight, and then, continuing the analogy, doing things like performance reviews: reviewing the outputs, making sure they're contextualized. Those types of things are going to be required to control some of the risk and also get the most benefit out of these MCP systems. What we should probably be leaning into is experimentation. We should be encouraging people to experiment, but in sandbox environments, taking it one step at a time and reviewing what the outputs are, and doing security and safety reviews in the same way we do today. We've got cybersecurity protocols, and we should have protocols around agentic review and agent safety - just as we have cybersecurity, you'd have a kind of agentic security that's constantly being reviewed, so you understand what's happening. And in the same way that, when we were deploying things onto servers, we ended up creating development operations, DevOps, and now we've got MLOps, machine learning operations, most likely we're going to see the equivalent of something like agentic ops, or AgentOps. That's looking at the whole picture: where the infrastructure is, where the data is going, what is being stored, and how those agents are interacting with other agents.
I think there's probably going to be a whole category of skill sets under this bracket of AgentOps, which will be about monitoring, creating the protocols, and assessing and constantly adjusting all these different elements - the protocols, the infrastructure, the accessibility, and all of those things around where and how agents operate.

SPEAKER_01:

I really appreciate that you took a look into the future because of course everybody who is working with these topics is wondering what's coming next, how is it going to develop, and what are the other opinions in the world around the same subjects? And of course, it is valuable when you are sharing your vision and also warning about something we should keep in mind before we step into that future. Prakash, do you believe we need a new kind of AI operating philosophy to guide this shift? Something beyond policies and protocols? And if so, what core principles should it include?

SPEAKER_00:

Absolutely - I think governance alone isn't enough. I talked a little bit about agentic ops, or AgentOps, creating a new set of skill sets that understand what agents can and can't do and what the interaction is between them and existing business systems and people. It's a whole philosophy, an understanding that this is different. This is very different from the types of infrastructure, software, and tools we've ever used before, and so it almost automatically has to create a different set of skills and different approaches within organizations. One of the things we should do is look at context over computation. One of the core tenets is to prioritize relevance over the biggest model or the fastest model - making sure that the outputs, from LLMs, from agents, and, when you put it all together, from MCP, are relevant and contextualized to what you do and what you need as an organization. It's really easy to just use these tools and get tons and tons of output, but if it doesn't really help you do anything, it can potentially even hinder you and make your processes slower, because you're now having to evaluate or do something with a whole bunch of data that you never really needed or never really had access to. So there are risks that it can actually slow you down and make you less efficient, which is the opposite, the antithesis, of what we're supposed to be doing. The other element is a philosophy of explainability: the models are able to explain what they're doing and how they got to the answer. This is a really important area in the evolution of LLMs and agents - that they are designed to continuously explain how they arrived at the answer, so it doesn't become a black box. I think it could be really dangerous if we let LLMs and agents keep moving on, especially as we move towards more and more autonomy, without knowing how they're arriving at the answers, or at least without them being able to explain when queried. And alongside that, interruptibility: being able to interrupt an agent, almost like having a kill switch, to stop it from doing something as soon as you realize it's starting to go in the wrong direction, so you can prevent potentially catastrophic outcomes. Linked to that is agent-to-agent transparency. You've got explainability within one agent, but when you get one agent talking to another agent, you need transparency of those interactions too. In the early days - I think it was Google, with DeepMind, one of the early AI technology companies - they had a method of translating between languages, from one language to another and then to a third. And what the AI did was create its own language to translate between language two and language three, which the scientists didn't understand. That's obviously very dangerous, because you had two systems communicating with each other in a way that was completely beyond the scientists' understanding, and therefore you had no idea of how it was doing it or in what way.
Those are some early examples of how AI systems will learn to start doing things without human intervention, and that could potentially lead to negative outcomes. One other philosophy I think we should be looking towards is getting consistency out of the agents. One of the things we've learned from human interactions is that we build relationships and bonds because we know what to expect from another human being. A friendship, or any other type of relationship, is built on trust, and that trust is usually there because you know what you're going to get - there's a dependable outcome to interacting with these people. Similarly, I think we're going to have to look at how we get behavioral consistency. Imagine if you go to an agent and it gives you a quality output one day, but on another day it's really poor, and nothing's changed, nothing's different - just because of the way the models work, on probabilities, you're getting inconsistent answers. You're less likely to rely on it, less likely to trust it. So that's another area, another philosophy, we're going to have to start looking at, and there's a bunch of work being done to try and ensure that agents don't behave differently every time you interact with them, and don't behave differently depending on who's interacting with them, as long as it's within a certain framework.
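
To ground interruptibility and transparency in something concrete, here is a minimal, hypothetical sketch: a kill switch the loop checks before every action, and an audit trail that records each step together with its stated reason. The names and the simulated stop are invented; a real system would persist the log and expose the stop control to an operator.

```python
import threading

stop_event = threading.Event()          # "interruptibility": a human can set this at any time
audit_log: list[str] = []               # transparency: every step and its stated reason

def explainable_step(step: int) -> str:
    """Stand-in for one agent action; the string carries the action *and* the reasoning."""
    return f"step {step}: re-checked supplier slots BECAUSE the forecast changed"

def run_agent(max_steps: int = 10) -> None:
    for step in range(max_steps):
        if stop_event.is_set():                        # kill switch checked before every action
            audit_log.append(f"interrupted before step {step}")
            return
        audit_log.append(explainable_step(step))
        if step == 2:                                  # simulate a human pressing stop mid-run
            stop_event.set()

run_agent()
print("\n".join(audit_log))             # the trace is what makes the behaviour reviewable
```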

SPEAKER_01:

I just imagined how dangerous it might be if the agents started depending on their mood, like humans sometimes do, and offered different quality and varied outputs depending on their bad-hair-day problems.

SPEAKER_00:

I feel like in this conversation there's quite a lot of negativity, right? And I don't want to come across as negative, but I think there are a lot of unknowns in this space. So I guess the message I'm trying to get out is: be aware of what this technology can do and its great potential, but also be wary of some of the potential downsides and pitfalls. I think those things have to go hand in hand.

SPEAKER_01:

Exactly. And I totally share your opinion that it might sound a little bit negative, but at the same time, this is a serious game, and we need to be real. And in order to be real, we have to look at the positive outcomes and at the dangers. Of course, everybody prefers talking about the positive outcomes and the results we might reach, growing our business and enabling humans as well, but in order to get there in a good, sustainable way, we have to highlight the negative sides, the potential limitations and dangers, so that we don't have to go through them in reality but can reduce that risk by discussing them, by talking about them. And that's exactly what we are doing. So this is a great help to everybody who is entering the space of agentic solutions, so that they can avoid those pitfalls and go straight into the winning game. Prakash, what is your one piece of advice for leaders and business owners preparing to implement next-generation AI in a way that is meaningful, scalable, responsible, and future-proof?

SPEAKER_00:

I guess if it's just one piece of advice: don't just ask, can this agent or agentic system work? Ask, can this agent work with my teams and my people? Can it work with my systems? Can it work at scale? And can it work with transparency - is there explainability in it? Make sure that when you're putting the agentic system in place, you're doing it for a reason that is understood, and that you're not just expecting it to do something by itself - you're expecting it to do something with the people in your organization and with the systems and protocols you have in place, that it's able to be scaled across your organization, that you understand how it works, and that there's that level of transparency. So sometimes you've got to think hard, beyond the LLM. I hear this a lot from people: they use LLMs as if they were Google, as if it's a search engine and they're typing queries in there. As soon as they start using it for more than that - to help them refine their thoughts and define what they're about to do - it opens up a whole new world. So all of these things are really important to remember when you're looking at moving to the next level, from LLMs into agents and then MCPs: things like context sharing, role delegation, understanding the role of the different agents and what the inputs and outputs are likely to be from each of them, some of the ethical boundaries we talked about, and then always, always making sure that there's a human fallback. No matter how powerful some of these agentic systems get, I think we should never be in a situation where they're 100% independent of human intervention. There should always be some level of human in the loop - even if that is just to make sure there's a safeguard or a way of shutting it off - at all times, to ensure we don't end up with potentially negative or disastrous circumstances.

SPEAKER_01:

100%. I couldn't agree more, and that's truly great advice. And now my last question for today - even though I would love to continue this conversation and hear so much more from you, I trust we can come back to this topic in the future. So, what is one thing we all need to unlearn in the AI era?

SPEAKER_00:

Oh, I think there are so many things. One we just talked about: not treating LLMs like they're search engines. We're so used to Googling things and expecting an answer back, so unlearning that is probably one of the lowest-hanging fruit. But if we go a little bit deeper, I'd say unlearning some of the command-and-control mentality. Historically, our relationship with digital tools and computers has been: we input something, we get an output, and we know what the output is expected to be, within actually quite a narrow band of expectation. What we're seeing now in the AI era is agentic systems where we're no longer commanding a piece of software; we're working with something that we're co-creating with, and it's using probability and probabilistic outcomes. So getting out of that command-and-control mentality and into a co-authoring mentality - this thing, your AI agent, is going to be there to do things with you, not for you. That's a really difficult mental shift to make. Even now, I think a lot of people are still using LLMs to do things for them, and the ones that get the best outcomes are the ones that work with the LLM, do multiple iterations and refinements, and then create whatever output they're expecting. I think that's really, really important and probably one of the most difficult things to do, because we've been so used to doing it the old way and have grown up with it. The younger generation, who are going to be exposed to this technology right from the beginning, won't know any different; this is going to be the way they interact with technology, so they're going to grasp it much, much quicker than the likes of you and me.

SPEAKER_01:

So true. Thank you so much, Prakash, for being here in the studio today, for sharing your experience, your vision, your wisdom. I truly appreciate you.

SPEAKER_00:

Thank you so much. Thank you for giving me the opportunity to do this. It's been my absolute pleasure.

SPEAKER_01:

Thank you. Thank you for joining us on Digital Transformation and AI for Humans. I am Emi, and it was enriching to share this time with you. Remember, the core of any transformation lies in our human nature, how we think, feel, and connect with others. It is about enhancing our emotional intelligence, embracing the winning mindset, and leading with empathy and insight. Subscribe and stay tuned for more episodes where we uncover the latest trends in digital business and explore the human side of technology and leadership. If this conversation resonated with you, and you are a visionary leader, business owner, or investor ready to shape what is next, consider joining the AI Game Changers Club. You will find more information in the description. Until next time, keep nurturing your mind, fostering your connections, and leading with heart.