AI or Not

E052 - AI or Not - Hina Gandhi and Pamela Isom

Season 2 Episode 52

Welcome to "AI or Not," the podcast where we explore the intersection of digital transformation and real-world wisdom, hosted by the accomplished Pamela Isom. With over 25 years of experience guiding leaders in corporate, public, and private sectors, Pamela, the CEO and Founder of IsAdvice & Consulting LLC, is a veteran in successfully navigating the complex realms of artificial intelligence, innovation, cyber issues, governance, data management, and ethical decision-making.

Want to work faster without losing the craft? We sit down with engineering leader Hina Gandhi to unpack the real trade‑offs of coding with AI: where LLMs shine, where they fail loudly, and how to keep human judgment in control. Hina walks us through a hands‑on reinforcement learning project that tunes Apache Spark configurations—showing how agents learn from rewards to optimize performance on skewed, large datasets. That practical story sets the stage for a clear explanation of how LLMs actually work, why precision in prompting matters, and what separates a smart engineer from a lazy one when the model starts suggesting code.

The conversation moves into the changing role of the developer: less brute‑force typing, more reviewer‑in‑chief. We cover the productivity surge—days collapsed into hours—alongside the hidden cost of overreliance, including diminished deep thinking and the temptation to accept plausible‑sounding answers. Governance threads through every segment: fact‑checking against official docs, data freshness, security boundaries, and the need for human approval before agents touch production. Hina shares a striking cautionary tale of an AI agent that ignored instructions and corrupted a live database, underscoring why least privilege and explicit safeguards are non‑negotiable.

We also explore multi‑agent systems and role‑based agents in modern IDEs—ask, plan, debug, implement—that coordinate like a small team. Used step by step, they help preserve architecture and code quality even as sprint velocity rises. Then we dive into Model Context Protocol (MCP), a practical way to give models secure, auditable access to documents and repos so they can summarize, draft designs, and review PRs with real context. The throughline is simple and powerful: augmented intelligence. Let AI handle grunt work and accelerate exploration, while you direct, verify, and own outcomes.

If this conversation helps you sharpen your approach to AI and software quality, follow the show, share it with a teammate, and leave a quick review so others can find it.

[00:00] Pamela Isom: This podcast is for informational purposes only.

[00:27] Personal views and opinions expressed by our podcast guests are their own and not legal advice.

[00:35] Nor health, tax, or other professional advice, nor official statements by their organizations.

[00:42] Guest views may not be those of the host.

[00:51] Hello and welcome to AI or Not the podcast where business leaders from around the globe share wisdom and insights that are needed right now to address issues and guide success in your artificial intelligence and your digital transformation journey.

[01:08] I am Pamela Isom. I am the podcast host for today.

[01:13] Our guest is Hina Gandhi. Hina,

[01:18] Welcome to AI or Not.

[01:22] Hina Gandhi: Thank you. Thank you for having me and very excited to share my insights today with your audience.

[01:29] Pamela Isom: I would like you to get started by giving me a little bit more information on your background,

[01:36] your career journey and your trajectory. Like the forward thinking, the forward leaning aspect.

[01:43] Hina Gandhi: Sure,

[01:44] yeah. So I am a software engineering technical leader with nearly a decade of experience in the field.

[01:51] So I started working as an intern with a startup, and then I gradually got a full-time opportunity there. I became a senior engineer within four years of joining as an intern, and now I am a technical leader working in big tech. Over the years I have witnessed multiple technology shifts, from cloud computing and serverless to microservices,

[02:18] DevOps and now AI. I strongly believe that staying at the forefront of emerging technologies is essential to remain relevant,

[02:27] curious and effective. Recently I have been actively self learning how to build AI agents and integrate AI tools into my day to day work to boost productivity and get things done more efficiently.

[02:43] Pamela Isom: How's that going?

[02:45] Hina Gandhi: I think that's going good. I feel like, you know, AI can make us smarter and it can make us lazier. So that's what, you know, we are going to talk about today.

[02:55] Pamela Isom: Yeah. So before I go there though, so. So tell me about your experience with building the agents. Is it easy to do? Is it complicated? Is it like. Tell me a little bit more about that.

[03:06] Hina Gandhi: Yeah, sure. So I thought of you know,

[03:09] learning how to build the like reinforcement learning agent. So I have experience with big data systems. So the problem I saw when I worked on big Data, especially like Apache Spark,

[03:22] that you have to tune the configurations, and those configurations may work for one specific workload and not for another workload. So it's more like, you know, you have to play with the configuration a bit and it's time consuming.

[03:38] So I thought you know, why not I build an RL agent around it. So reinforcement learning agent generally learns from the environment by doing that, you know, trial and error.

[03:48] Like, you know, if it takes an action and it gets back a good, better reward, it thinks, okay, this is the happy path that I can go with.

[03:57] But if it gets a negative reward, that means the RL agent will think, okay, this is not the good path that I should be taking moving forward.

[04:06] So yeah, that's, you know, why I thought maybe I should play a bit and, you know, build that RL agent and see, you know, how it can make at least my life easier when I am, you know, building my project, like a project for self-learning purposes.

[04:21] So what I did, I wrote code which is like an intelligent wrapper on Apache Spark. So it was more like, okay, I have this certain configuration, and will it be working?

[04:32] You know, it was looking.

[04:34] So what it was doing: there is a huge data set, right? It has various rows and various columns. So what it will do is it will first learn about the data set, whether it has skewed data, how many rows it has, and based on learning about the data set it will know,

[04:51] okay, this is specific partitioning scheme that I can go with.

[04:56] And this agent is not pre-trained like typical machine learning algorithms. It is like it learns itself gradually as it does all the experiments on various data sets.

[05:09] So that's how it learns and adapts as per the rewards it's getting. So it was a great learning experience, and I am submitting a research paper around it on one of the portals and hoping that it will get published soon.
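
The trial-and-error loop Hina describes can be sketched as a tiny reward-driven tuner. Everything in this sketch is a made-up illustration: the candidate `spark.sql.shuffle.partitions` values, the simulated runtime, and the epsilon-greedy strategy are assumptions standing in for her full RL agent and a real Spark job.

```python
import random

random.seed(0)  # for reproducibility of this demo

# Candidate values for spark.sql.shuffle.partitions (illustrative only).
ACTIONS = [50, 100, 200, 400, 800]

def simulated_runtime(partitions):
    """Stand-in for actually running the Spark job; 200 is 'best' here."""
    return abs(partitions - 200) / 100 + 1.0

class ConfigTuner:
    """Epsilon-greedy tuner: mostly exploit the best-known config, sometimes explore."""

    def __init__(self, epsilon=0.2):
        self.epsilon = epsilon
        self.value = {a: 0.0 for a in ACTIONS}  # estimated reward per action
        self.count = {a: 0 for a in ACTIONS}

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)                 # explore a random config
        return max(ACTIONS, key=lambda a: self.value[a])  # exploit the best so far

    def update(self, action, reward):
        # Incremental mean of the rewards observed for this action.
        self.count[action] += 1
        self.value[action] += (reward - self.value[action]) / self.count[action]

tuner = ConfigTuner()
for _ in range(500):
    action = tuner.choose()
    reward = -simulated_runtime(action)  # faster job => higher reward
    tuner.update(action, reward)

best = max(ACTIONS, key=lambda a: tuner.value[a])
print(best)  # 200 under this simulated runtime
```

A positive (less negative) reward reinforces a configuration, a poor one steers the agent away, which is exactly the happy-path/bad-path behavior described above.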

[05:25] Pamela Isom: Oh, I see. Well that's exciting.

[05:27] Hina Gandhi: So and even playing with it was so exciting.

[05:30] Pamela Isom: Yeah.

[05:31] Hina Gandhi: Well that's good.

[05:31] Pamela Isom: Congratulations.

[05:33] Hina Gandhi: Thank you.

[05:34] Pamela Isom: So you started to talk about how AI, particularly LLMs are making software engineers smarter or lazier. What's your perspective? Is it both or is it one or the other?

[05:45] Hina Gandhi: Yeah. So let's take a step back and demystify LLMs for the audience.

[05:51] So LLM is a neural network that is designed to understand,

[05:55] generate and respond to human like text.

[05:58] So these all GPTs, llama, Gemini, they are all LLMs and they've been called like you know, large language model because they have been trained on huge data sets and they have like, you know, hundreds of billions of parameters to optimize the output.

[06:15] So that's the like LLM. So if you think in non technical terms,

[06:19] the LLM is not looking,

[06:21] you know, into the Internet, and it's not looking into databases like traditional systems. It's using math and probability to come up with the next word or, you know, the next output that it wants to produce.

[06:35] But if you think in terms of, like, you know, technical things, it's more like you give a prompt to the LLM, right? And what it does, it breaks that prompt into pieces of words, which are called tokens.

[06:47] And then these tokens are converted into embeddings, and those embeddings will go through the various layers of the neural network to, you know, enhance the output that we, you know, actually want to see.

[06:58] Like enhance the output, plus, you know, build the context that we want to see in the, like, you know, final output. So that's what the LLM is. And right now I think everyone is using LLMs in their day to day life, either, you know, just to read anything new or even rephrasing certain sentences. Or if we think in software engineering, we are all using AI tools daily.
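
The prompt → tokens → embeddings → next-word pipeline described above can be shown with a deliberately tiny toy model. The six-word vocabulary, the random embedding vectors, and the single output layer are illustrative assumptions; real LLMs use learned subword tokenizers and billions of parameters across many transformer layers.

```python
import math
import random

random.seed(0)  # fixed random "weights" for the demo

vocab = ["share", "the", "listing", "on", "social", "media"]
token_id = {word: i for i, word in enumerate(vocab)}

DIM = 8
# One small embedding vector per token, and one output weight vector per token.
embed = {i: [random.gauss(0, 1) for _ in range(DIM)] for i in range(len(vocab))}
w_out = {i: [random.gauss(0, 1) for _ in range(DIM)] for i in range(len(vocab))}

def next_word_probs(prompt):
    tokens = [token_id[w] for w in prompt.split()]        # "tokenize" the prompt
    vectors = [embed[t] for t in tokens]                  # embedding lookup
    context = [sum(v[d] for v in vectors) / len(vectors)  # crude context vector
               for d in range(DIM)]
    logits = [sum(c * w for c, w in zip(context, w_out[i]))
              for i in range(len(vocab))]
    total = sum(math.exp(x) for x in logits)              # softmax over the vocab
    return {vocab[i]: math.exp(logits[i]) / total for i in range(len(vocab))}

probs = next_word_probs("share the listing")
assert abs(sum(probs.values()) - 1.0) < 1e-9  # a valid probability distribution
```

The point of the toy is the data flow, not the quality of the prediction: the "model" is just math and probability over vectors, with no database or Internet lookup involved.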

[07:23] So if I am a lazy engineer,

[07:25] what I will be doing with LLM, with this AI tool, is, like, I will be fully dependent on AI for all the code that I write.

[07:34] I will not fully understand what AI is generating for me. I will not learn new technologies or stay on top of the things that I used to do before the LLM era.

[07:43] But if I am a smarter engineer, what I will be doing: I will know what precise prompt I need to give. I'll be learning new technologies. It'll be more like an orchestrator who would be guiding, like, instructing the LLM what to do next, not the other way around.

[08:00] That LLM is telling me shall I do this, shall I do that? It's more, it won't be that way.

[08:05] And let me give you an example from my day to day life. Let's say I come to the office, I've been given a task that create a new feature in the existing product.

[08:15] And the feature is simple like if I'm working for E commerce website it is a simple like need to share the listing on social media.

[08:24] So if I am a lazy engineer I'll just give this prompt: okay, I need to share the listing on social media. That will be my simple prompt. And what AI will do, AI will generate like 20-30 files of code to build this feature on the e-commerce website and there would be some duplicate code.

[08:41] It won't, you know, it will not have the context of your code repository and it may not know that some of the things are already existing and it can only like just add few lines of code to bring this feature into reality.

[08:54] And, you know, the thing with the AI is that, you know, sometimes it produces those 20 files and at the end it may not work. So as a software engineer you would be holding your head and, like, banging your head on the wall.

[09:08] Why is it not working? Because that code is not written by you, it's been written by AI. And now you have to debug those many files. I mean, starting to write the code yourself can take less time than debugging the code that is written by AI.

[09:21] But if you are a smarter engineer, what you will do, you will know ins and outs of the code repository where you need to add the specific code, what code you need to add, what design pattern you need to use.

[09:33] So you will give the precise prompt. That prompt will be like you need to add this code to this file with more context so that AI is more productive rather than increasing your workload.

[09:45] So that's what smarter Engineer would do.

[09:49] So yeah, I think. And also, if you give a precise prompt and you mention what you need done, AI will also produce cleaner and, like, more secure code, which is essential to ship the code to production.

[10:04] Pamela Isom: Okay, so basically back to the original question. Are LLMs making software engineers smarter or lazier? You're saying that it's all up to the engineer.

[10:21] Hina Gandhi: Yep, that's right.

[10:22] Pamela Isom: They can take the smart approach or they can take the lazy approach.

[10:27] Hina Gandhi: That's right. That's my, like, you know, take on this,

[10:31] that it's on the engineer, like, what they want the outcome to be, whether they want to become the lazier one or whether they want to become a smarter one.

[10:41] Pamela Isom: So then if that's the case, then how is AI shaping software engineering and productivity?

[10:47] Hina Gandhi: Yeah, so I definitely believe that AI is increasing the productivity.

[10:52] Like earlier, before LLM era or before AI tools,

[10:58] a few tasks used to take, like, days to finish. Now the same tasks are taking, like, a few hours.

[11:04] And I generally give some boring, redundant tasks to AI, and I just make sure that the code, or whatever task AI has finished, is, you know, correct.

[11:15] And it's like it has produced the same output that I wanted or I would have produced before. Like before all these AI tools. So I also like to brainstorm the new ideas with AI.

[11:28] So I think AI has become my companion now, with whom I can, like, brainstorm things,

[11:35] even ask stupid questions that I would be embarrassed to ask any colleague or anyone. And so definitely it is helpful in day to day task. But to this there are certain trade offs too.

[11:47] Like earlier even the things used to take hours or days to finish. Let's say if I want to come up with a new algorithm, right, I would take few hours to think how would I approach the problem.

[12:01] I would look into various documentations and would see how other engineers approached it.

[12:06] And yeah, it definitely would not have taken me a day to finish. It would have taken me days to finish.

[12:12] But at the end there was a sense of pride and accomplishment. I would be at the end of the day when I finished that task or I would be proud of myself that yeah, today I came up with this better algorithm.

[12:25] I did this performance optimization that reduced the time from one hour to few minutes. That brought a certain pride to me. But with AI, I feel like slowly I'm leaning towards jump to AI.

[12:41] Like we jump to AI to see what AI will answer, like, you know, what it thinks. Instead of using my brain, I feel like I'm also like trending towards becoming that lazy engineer who would come to AI for everything.

[12:53] AI, what do you think? Do you think this is the right solution?

[12:56] Or AI, do you think should I go with this approach?

[12:59] So that thing has,

[13:08] I'll be honest with you, I'm not liking that I'm trending towards that side.

[13:08] And I will tell you that there was one case study from MIT too where what they did,

[13:14] they took like 54 people and divided them into certain groups. There were, like, three groups. One group had to use their brain to write essays.

[13:24] Second group, they can use online materials, do Internet search to write the same essay. And the third group was they have to use LLM for writing the same essay.

[13:34] So they did this experiment three times. They have to write three different essays using the materials they were provided respectively.

[13:42] And then for the fourth essay they were told, let's switch it. Like the people who were using their brain, they now have to use LLM and the people who were using LLM, they have to use their brain.

[13:53] And it was observed that people who made a switch from LLM to brain only,

[13:58] their neural activity was lower compared to people who switched from brain to LLM. So I can totally relate to it that yeah, I'm also becoming dependent on AI to get answers quickly or to increase my productivity.

[14:14] But at the end, you know, I think as an engineer we should make sure that even if we are using these tools, we should be learning new technologies or seeing what the best practices are, so that,

[14:28] you know, we are not blindly trusting AI, but we also know what practices are going on in the market right now.

[14:38] Pamela Isom: Yeah,

[14:39] I understand.

[14:40] I was talking to someone recently and we were discussing that very subject because you can become dependent on the tools and forget about the stewardship and due diligence that you need to do.

[14:59] So we always want to keep that at the forefront and remind ourselves that those are the things that we need to do. Because if you don't,

[15:09] you'll fall into the trap of just accepting what the information says,

[15:14] and then before you know it, you're disseminating misinformation.

[15:18] Right? So. Because eventually that will happen because the tools are not 100% reliable and just the way that they learn. So the way that they learn it requires oversight. It just requires that type of oversight and due diligence.

[15:33] I'm in favor of using the tools. I'm in favor of seeing AI as even more than a tool. Right. I'm very much in favor of that,

[15:41] of overseeing and governing your craft.

[15:46] So I hear you.

[15:48] Hina Gandhi: Yeah, that's right. Like, we should still be building the foundation, like, our concepts should be right.

[15:55] And we should still be doing the due diligence of learning things instead of, you know, just accepting whatever we are getting from the, from these AI tools.

[16:07] Pamela Isom: And the conversation I was having too was we were talking about how in the way that the tools are emerging and we were saying, like the brain, they're trying to make AI mimic the brain, which is almost impossible to do,

[16:24] but there's more and more research in that space. So it's not something that we can just ignore because we know that that research is going on.

[16:32] We don't know how soon that is.

[16:35] We know it's not tomorrow,

[16:37] but it's not in our best interest to just say, well, we'll cross that bridge when it gets here. It's better to be in the know of what's going on and keeping in the know of what's going on.

[16:49] And the way that you're talking about using the tools is a way of keeping yourself in the know of what's going on. Same way with AGI and advanced AI concepts.

[17:00] Yes, it's conceptual. It seems like it's out there, but it is in our best interest to not just sit back and say, well, that's something someone else is doing.

[17:08] It's in our best interest to kind of know what's going on. Because while we may not feel like it's literally tomorrow,

[17:16] it's coming, it's happening fast. Right? It's coming sooner than we think.

[17:20] And so it's best to pay attention so that you have shaped the outcomes.

[17:24] So I'm glad to see that you use AI. Right. As an assistant to help augment your work.

[17:42] And I'm also glad that you have guardrails that cause you to pause if you see that you are not taking some steps that need to be taken.

[17:42] Because your human judgment should never be set aside.

[17:46] Hina Gandhi: Yeah, that's right. And even these models are trained on data, right? And that data may not be up to date. Let's say a new model comes in and that model has been trained on data up until, like, July 2025.

[18:00] So that LLM may not have information from August until like December or maybe January of next year.

[18:07] So yeah, so I think it's very important to fact check everything with official docs that whatever you are saying or whatever you are reading that's accurate up to date.

[18:19] Pamela Isom: Now some would say that the fact that you have to do all that defeats the purpose. Right. Because now you just got to spend all this time double checking. You might as well have just done it yourself.

[18:27] But it's not quite that clear cut.

[18:29] Hina Gandhi: Yeah, that's right. That's right.

[18:32] And so that's the life of engineer right now. Software engineer too.

[18:36] Earlier I was more like a programmer, like, writing my code and writing tests to check that code, running that code locally and then shipping it. Now I'm more like a reviewer, right?

[18:47] Like I'm just reviewing my code most of the time that code is like written by AI and I have to like be sure that you know, whatever it has written is correct and you know, and accurate and you know, it's safe, secure and you know it's, it's the same code that I had written in the past.

[19:04] So that's, you know, what the life of an engineer has become right now.

[19:09] Yeah. Like you know, talking in English with the AI agent and making sure that you know, whatever AI is writing is correct.

[19:15] Pamela Isom: Yeah, yeah. Roles have evolved from individual contributor to a reviewer. Governance and oversight. Yeah, no, I, I hear you. Yeah. And always said that that's going to happen. Right. Because you can't just let it run free.

[19:30] This is good. So. All right. So then I do have a question about agents.

[19:35] So you have mentioned something about multi agent AI systems.

[19:39] I want to help the listeners understand what is a multi agent AI system.

[19:44] And can you give me some real world examples?

[19:47] Hina Gandhi: Sure. So multi agent system is where multiple AI agents will be working together to achieve a common goal.

[19:56] So let's take an example of a vehicle multi-agent system.

[20:00] So think like there will be one agent to perceive the environment.

[20:04] The second agent would be like to map your vehicle location to the map, like the Google map. And the third agent would be okay, planning agent who would decide do I need to take right, do I need to take left, Do I need to like move forward or backward?

[20:20] And then there will be like the final action agent that will take the action that planner agent decided.

[20:27] So after perceiving all you know, environment around it, the vehicle will then you know, move right or you know, if it sees the object in front of it, the action would be like to stop.

[20:38] So there will be like multiple agent who would,

[20:41] would have like different responsibilities but they have like common goal, like you know, what action to take next.

[20:47] So that's what the multi agent system is. And I think that is, that can be the future of AI where multiple agents talking to each other and there is one main master agent that's coordinating among like you know, coordinating different agents.

[21:04] And each agent telling to the master agent, okay, this is the action I'm going to take and you know, master agent telling the other agent, okay, they say you need to take this next action.

[21:14] So I think that will be the coordination between like among the different agents that can happen in future.
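
The vehicle example above can be hand-rolled as a few cooperating functions: specialist agents with different responsibilities, coordinated by a master loop toward the common goal. The agent names and the rule-based logic are illustrative assumptions, not a real autonomous-driving stack.

```python
# Hypothetical sketch of the vehicle-style multi-agent pipeline:
# perception -> planning -> action, coordinated by a "master" agent.

def perception_agent(sensor_reading):
    """Perceive the environment and summarize it for the other agents."""
    return {"obstacle_ahead": sensor_reading["distance_m"] < 5.0}

def planner_agent(world):
    """Decide the next maneuver from the perceived world state."""
    return "stop" if world["obstacle_ahead"] else "move_forward"

def action_agent(plan):
    """Execute (here: just report) the planned maneuver."""
    return f"executing: {plan}"

def master_agent(sensor_reading):
    """Coordinate the specialist agents toward the common goal."""
    world = perception_agent(sensor_reading)
    plan = planner_agent(world)
    return action_agent(plan)

print(master_agent({"distance_m": 3.2}))   # executing: stop
print(master_agent({"distance_m": 40.0}))  # executing: move_forward
```

Each function plays the role of one agent with a narrow responsibility; the master function is the coordinator Hina describes, relaying each agent's output to the next.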

[21:21] And I think in software engineering too, I would say, in day to day life we may be using multi-agent systems, because right now we have an IDE.

[21:32] IDE is where we write the code.

[21:34] And there we write the code, we compile it, we run it and we see the outcome in our local environment.

[21:41] So that's very local to us.

[21:43] And I generally use Cursor. This is a very popular AI IDE,

[21:50] it has, like, an embedded AI agent.

[21:53] So earlier, let's say, there was just one box where we used to write code. Now you can think there are, like, two windows. One is to write code, and another window, next to the code editor, is where you have an agent open.

[22:06] So in that AI agent I can see like different options.

[22:11] So there are different roles for those agents. One would be, like, ask mode, where you will just ask questions and it will do due diligence to get the right output for you.

[22:22] Then the second would be, like, plan mode, where you tell the requirements to the agent and it will come up with a very good plan: okay,

[22:31] first step you need to do this. Second step is, like, implement it. Third step is this and that. And then there is debug mode too. I suppose, like, there is a debug agent too.

[22:42] So the debug agent will, you know, debug your code. It will look into, you know, what solutions you can apply to, you know, fix this problem.

[22:49] And then there is agent mode. So I think agent mode is to implement it. So what it does is it will just start writing the code, you know, after all the planning it has done, it will, you know, start.

[23:00] It will ask you that, you know, are you agreeing with the plan? And then if you say yes, then you know, it will start implementing the whole, you know, plan.

[23:08] The plan we both discussed, like, I discussed it with the agent, and it will start implementing it. So I can see that there are role-based agents right now that you can leverage in software engineering.

[23:20] And I think that's pretty helpful too, because if I just go with agent mode, even if I don't want it to write code, it will start implementing code, and oh my God, there will be, like, so many file changes and I will be, like, overwhelmed with those changes.

[23:34] But if I go step by step, like I go with plan mode and then I go with the implementation mode, that's a better approach to solving or writing any new feature for my product.
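
The plan-then-approve-then-implement flow described here can be sketched as a gate that keeps the human in the loop before any code is touched. The mode names and the fake plan/implement steps are assumptions for demonstration, not Cursor's actual interface or API.

```python
# Illustrative sketch: a planning mode produces a reviewable plan, and an
# agent (implementation) mode runs only after explicit human approval.

def plan_mode(requirement):
    """Turn a requirement into an ordered plan the human can review."""
    return [
        f"step 1: locate the code touched by '{requirement}'",
        "step 2: implement the change",
        "step 3: add tests",
    ]

def agent_mode(plan, approved):
    """Implement the plan only after explicit human approval."""
    if not approved:
        return "waiting for human approval"
    return [f"done: {step}" for step in plan]

plan = plan_mode("share listing on social media")
print(agent_mode(plan, approved=False))  # waiting for human approval
result = agent_mode(plan, approved=True)
```

The approval flag is the whole point of the sketch: the agent never starts rewriting files until the human has agreed to the plan, which mirrors the step-by-step discipline described above.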

[23:47] Pamela Isom: Okay, so I wonder the implications of what you just described with like agile and the whole agile approach and all of that. I guess that's probably another conversation, but I'd be curious to see how,

[24:00] considering Agile development, how that is evolving.

[24:04] Right? So are we having shorter sprints?

[24:08] Are product increments coming sooner?

[24:10] I'm curious as to how that is impacted with the acceleration of AI.

[24:20] It just occurred to me as you were talking and I'm thinking, I'm literally thinking that, okay, so the sprints must be getting faster, but are they getting more reliable? Right, so that's a question that I kind of want to check into.

[24:32] But tell me your thoughts there.

[24:34] Hina Gandhi: I agree. Like, you know,

[24:36] the sprints are becoming shorter now you are able to achieve,

[24:41] you're able to do multitasking. You are, you know,

[24:44] the task that used to take like, you know, a few days, three days, now they are finishing in one day, maybe depends, you know,

[24:53] but with that you have to take care that you are using AI tools, right, to generate code and also to review code.

[25:01] But you need to take care that the code you are generating is not garbage. Like sometimes AI is generating garbage code. And to be honest, if I produce the garbage code, the next developer who would be seeing that code will be like, will be confused why I have written it,

[25:19] what's the purpose of writing it.

[25:21] So even though you are, like,

[25:24] able to increase the productivity, still you have to be sure that you are producing accurate results.

[25:31] Like, even if I ship the code and I don't know what that code is doing, and let's say some incident happens in the future and I am the owner of that code, people will reach out to me, they will ask me why this issue is happening, and if I don't understand why this is happening,

[25:50] I won't be able to fix it, right? And incidents are where you have to take action quickly. Like, if an incident is happening you need to resolve it within an hour, because you are disrupting your service to the consumers.

[26:01] It's very important to make sure that whatever AI is writing, it's accurate and you understand it fully.

[26:08] You should not just be going after productivity or finishing your task.

[26:13] You should be going for productivity with accuracy.

[26:17] Pamela Isom: That's a good point. That's good.

[26:19] We make that mistake with agile too. We get so caught up in hitting the sprints and getting the points.

[26:25] And I remember having these conversations and so this would be no different. Like stay focused on quality and quality assurance. So I, I hear you. That makes a lot of sense.

[26:35] So, Hina, where are we headed?

[26:38] Hina Gandhi: So I think we are heading more towards augmented intelligence and human AI collaboration.

[26:45] I don't think AI will automate everything as of today. I don't know about the future.

[26:51] The AI race is so fast that I don't know how it will change in the next five years.

[26:57] So I can, if I look like today,

[27:00] I don't see like it will remove humans from the loop. Like we still need humans to verify everything.

[27:07] So let me share one case with you, one case study I was reading online. So there is a company, Replit; they give you AI agents which can write code for you. And it was being used by one company, and the owner was giving instructions: do not produce fake data in the production database.

[27:26] But despite giving these instructions, AI was still producing fake data in database.

[27:31] And the blunder that it made was it deleted all the records from the database.

[27:38] So that database had actual customer records and actual information about the customers that the company had gathered over past months.

[27:47] So and you know, AI deleted that data which was very important for the company.

[27:52] So and then when AI was told that you deleted the data, AI apologized, you know like a human being that oh sorry, it was a misjudgment that I made and I am very sorry about it.

[28:03] And that was very serious problem that AI created for that company.

[28:08] And then you know, AI was told, you know, that you can roll back to the previous database version which has like all the data and it said no,

[28:17] because I don't have like, you know, I cannot do that. This was the final version that I deleted.

[28:22] But when the engineers, like the software developers, came, they were able to revert to the old database which had all the records. So they were able to do it.

[28:33] But AI, like at that point had given up that, no, I cannot do anything at this point.

[28:38] So this is one case where the production access was given to the AI and AI made like a big mistake of deleting database, producing fake data, not, you know,

[28:48] addressing the commands that the owner was giving.

[28:51] So I definitely believe, like, as of today, we cannot fully depend on AI. We still need humans who can verify all the outputs that AI is generating.

[29:02] And I think for now, yeah, we can consider AI as more like our companion to brainstorm ideas,

[29:10] to know certain things like, but cannot use it for full automation.

[29:18] Pamela Isom: So that would be more like augmented intelligence.

[29:22] Hina Gandhi: Yep. And maybe human AI collaboration, where human is giving command and AI is like following that command and humans make sure that the command that AI is like running is accurate.

[29:34] Like they fully understand what AI is, you know, trying to do.

[29:38] Pamela Isom: Okay, so earlier you had mentioned that we'd talk about MCP.

[29:45] So I do want to talk about that. I want to know more about how MCP is influencing the way developers integrate AI.

[29:57] Hina Gandhi: So MCP is Model Context Protocol. So it's about providing context to the model.

[30:04] So MCP has a client and it has a server.

[30:09] And so, LLMs are trained on certain data, right? That may not be up to date, but with MCP you can provide, like, up to date data to the LLMs, which they can use to enhance their output.

[30:26] So let's take an example.

[30:33] I have a document store where I have, you know, a lot of documents,

[30:33] right? So what I can do do is I can, you know, build NMC server in front of that document store.

[30:40] So then I can give the LLM access to read documents from that store via the MCP server.

[30:49] So when I ask the LLM a question, like, can you read that document and tell me about a certain ABC thing?

[30:56] The LLM will know it can get the data via a GET API, like "get document data."

[31:02] Then the MCP server will call that API endpoint on behalf of the LLM and return all the information from that specific document. The LLM uses it to provide more context to you, and it can summarize everything you wanted to know about that document.
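The document-read flow Hina describes can be sketched as a toy tool-dispatch loop: the model never reads the store directly; it asks a server to run a named tool, and the result comes back as context. This is an illustrative stand-in in plain Python, not the real MCP wire protocol or SDK, and the tool name `get_document`, the document ID, and the store contents are all made up for the example.

```python
# Stand-in for the document store sitting behind the MCP server.
DOCUMENT_STORE = {
    "design-doc-42": "Spark tuning notes: enable adaptive query execution for skewed joins.",
}

# Tools the server advertises to the client/LLM (read-only here).
TOOLS = {
    "get_document": lambda doc_id: DOCUMENT_STORE.get(doc_id, "document not found"),
}

def handle_tool_call(name: str, arguments: dict) -> str:
    """The server runs the named tool on the model's behalf and returns the result."""
    if name not in TOOLS:
        raise ValueError(f"unknown tool: {name}")
    return TOOLS[name](**arguments)

# The LLM decides it needs the document, so the client sends a tool call;
# the returned text becomes extra context for the model to summarize.
result = handle_tool_call("get_document", {"doc_id": "design-doc-42"})
print(result)
```

In the real protocol the client and server exchange structured messages rather than direct function calls, but the division of labor is the same: the model picks the tool, the server executes it.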

[31:20] So MCP is about providing more context to your LLM. In our day-to-day life,

[31:27] I have an MCP server for Atlassian. So there, what I do is just tell the LLM, can you read this document for me? The LLM knows that it needs to

[31:41] make a GET endpoint call to fetch the data for that document, and it gives me all the information that I needed. Then, let's say I'm writing a design or architecture document.

[31:53] By then, I have brainstormed most of the ideas with the LLM, so the LLM knows where I want to go with the project.

[31:59] Then I would say, can you create a document based on the conversations we had? And I would provide a more precise prompt: you need to add an abstract, you need to add an architecture diagram, you need to add some more things.

[32:11] And then it knows what action to take. It will make the call to that MCP server using a POST endpoint, write all the brainstorming ideas, all the parts that I have discussed with it, into that document, and publish it.

[32:28] So at the end, I'm not going to the website, I'm not clicking "create document," I'm not spending an hour or two writing that document and then publishing it.

[32:38] The LLM is doing it all for me and making my life easier. I am not a big fan of writing big, big docs, and I'm very happy that I can leverage the LLM here to write all of them.

[32:52] And similarly,

[32:54] whenever a pull request comes in,

[32:57] let's say somebody wants their code reviewed on GitHub, I can leverage MCP here too. With MCP and an LLM, I can ask the LLM to read a PR, a pull request, and ask it to address certain comments or write comments on my behalf.

[33:14] So a few of these things are automated, and they are making my day-to-day life better.

[33:20] So that's what MCP is: a context provider, plus you can also take certain actions via the MCP server.
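That split between supplying context and taking actions can be sketched the same way: one read-only tool (GET-like) and one side-effecting tool (POST-like) behind the same dispatch. Again a hypothetical plain-Python sketch, not the actual MCP SDK; `create_document` stands in for something like the page-creation endpoint an Atlassian MCP server would expose, and every name here is invented for illustration.

```python
# In-memory stand-in for the wiki or document store behind the server.
DOCUMENT_STORE = {}

def get_document(doc_id: str) -> str:
    """Context tool: read-only, analogous to a GET endpoint."""
    return DOCUMENT_STORE.get(doc_id, "document not found")

def create_document(doc_id: str, body: str) -> str:
    """Action tool: has side effects, analogous to a POST endpoint."""
    DOCUMENT_STORE[doc_id] = body
    return f"published {doc_id}"

TOOLS = {"get_document": get_document, "create_document": create_document}

# The model drafts the design from the brainstorming session, then the
# client routes a create_document tool call through the server...
draft = "Abstract: RL-based Spark tuning.\nArchitecture: agent plus reward loop."
print(TOOLS["create_document"]("spark-design", draft))
# ...and a later read pulls the published document back in as context.
print(TOOLS["get_document"]("spark-design"))
```

Because action tools mutate real systems, this is exactly where the earlier point about human approval and least privilege applies: a production deployment would gate `create_document`-style tools behind explicit authorization, not hand them out by default.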

[33:30] Pamela Isom: All right, so that's good. That was a good explanation.

[33:33] So, we're at the end of the call, so I'm down to my last question, unless there's anything else that you wanted to share. Is there anything else?

[33:42] Hina Gandhi: No, not now.

[33:44] Pamela Isom: Usually what I do at this point is ask my guests if they can share words of wisdom and/or a call to action.

[33:53] Hina Gandhi: Okay,

[33:54] so, word of wisdom, as we discussed throughout:

[33:59] keep practicing what you are doing. Don't be fully dependent on AI; leverage it as your peer, someone you can discuss your ideas with, brainstorm certain things with, or summarize and talk things through with.

[34:14] Don't be fully dependent on it, or you'll start losing your concepts, start losing all the learnings you had from your past experiences.

[34:23] So that's my advice to everyone around me, because as these AI technologies are evolving more and more,

[34:32] I feel like we might become more dependent on them, and we may start losing the learnings that we have gained from past experiences.

[34:43] Pamela Isom: Well, I certainly appreciate that. What I found interesting with what you said today is that you're speaking from an engineering and coding perspective.

[34:50] So most of the people, most of my audience, they're not in the development space,

[34:55] but we are saying the same things, right? So I'm talking to them, I'm advising my listeners on some best practices, how to watch out for blind spots, what are some example blind spots, et cetera.

[35:08] And one of them being that it's very, very easy to get caught up in letting the AI do the work for you.

[35:19] And so that is not good. Say you're an executive of a company.

[35:24] The AI is off making decisions for you.

[35:26] And you are assuming that that is correct, because you're assuming that the data is correct. You're assuming that the data is current, you're assuming that the data is accurate.

[35:35] And so you were saying all of that, but from an engineering perspective. And so that was fascinating to hear and good for the listeners to understand that this is applicable to all professions.

[35:47] Right? For all users of AI, whether or not you're writing software and using it to develop software for you. I love the fact that you talked about how

[35:58] you become a reviewer.

[36:00] Something that we've been saying for a while now is your role won't go away. It may just elevate you, right? You'll become the lead.

[36:09] You'll become the lead because someone's got to oversee this.

[36:12] So your role becomes elevated, and then there's an added level of complexity to it, because you'll have multiple agents, multiple agents that you're going to have to manage and govern in addition to humans.

[36:27] So you just became that much more valuable, because you know how to do this. And so I try to tell my colleagues, and those that are fretting a little bit about their jobs and their careers: here's how you can cultivate and prepare yourself for what's next, because it's here.

[36:45] So watch the escalation and the elevation in roles and responsibilities. Now, it's not black and white, I understand that; it's not that clear-cut. But ultimately, that's what's happening.

[36:54] And that's what you said.

[36:55] That is exactly what you said, is you became a reviewer. So I really sincerely appreciate the conversation today. It was very good.

[37:04] Hina Gandhi: Thank you, Pamela. I enjoyed our conversation, and it was very nice talking to you.

[37:10] And yeah, I think AI will bring a bright future for all of us, and we will just be the tech leads of everything.

[37:18] Pamela Isom: Am I right? Well, thank you very much for being here.