Tech Unboxed
Stay ahead of the curve with BBD's Tech Unboxed, the podcast that unpacks the latest trends, innovations, and transformative digital solutions driving tomorrow's world.
Is your AI hallucinating? From RAG to vibe coding, and everything in between
If you’ve felt the pressure to “go agentic” or bolt RAG onto everything, this conversation is a deep breath and a better plan. We dig into the real decision points behind modern AI: when a clean prompt solves the problem, when retrieval is worth the effort, and when an agentic system adds cost without adding value. Along the way, we call out how vibe coding accelerates learning but can sabotage maintainability when teams don’t understand the code they ship.
We get practical about data. More isn’t better—better is better. You’ll hear how RAG actually raises the bar for data hygiene, why outdated or messy documents produce confident wrong answers, and how to build retrieval steps that respect source structure and change cadence. From noisy transcripts to multilingual contexts, we map the preprocessing and governance moves that prevent hallucinations and keep answers grounded.
Then we unpack agentic AI as a network of specialists: models and tools with clear roles, routed by a coordinator that chooses the right path, including non-LLM components for math or structured queries. It’s powerful, but not a default. We weigh costs, reliability, and the risk of overengineering when classic ML, search, or a database would do. The through-line is human judgment: engineers stay in the driver’s seat, setting constraints, validating reasoning, and designing systems that can be supported over time.
If you care about building AI that lasts—clean prompts over cargo-cult pipelines, data quality over dashboards, agents where they fit—this one’s for you. Subscribe, share with a teammate who needs a sanity check, and leave a review with your biggest AI misconception so we can tackle it next.
Hello, welcome to the next episode of BBD's Tech Unboxed, where we chat about some of the hottest topics in technology at the moment. Today on our couch, well, on our chairs, we have the incredible brains of Hala and Rosal. Welcome. Thank you for joining us.
SPEAKER_00:Thank you. Happy to be here.
SPEAKER_01:Hey. So today we're going to be chatting about AI, RAG, agentic AI, and all of those exciting things. Can you first tell us what qualifies you both to be talking about this?
SPEAKER_00:Honestly, I feel like there are so many people qualified to talk about AI these days. But I have worked on a couple of LLM projects, building out chatbots for internal teams as well as for banks, which I'm not allowed to name. Nameless banks. So mostly my experience is with LLMs and, in general, data processing.
SPEAKER_01:Awesome. And Rosal, yourself?
SPEAKER_02:Similar to you. I've worked on quite a few POCs in the generative AI space. We also both attend a lot of conferences together, so we're both very interested in the space. I think that's the most important thing.
SPEAKER_01:Can you tell me what are some of the biggest misconceptions you're hearing today about AI, RAG, and agentic AI, specifically in data-rich environments?
SPEAKER_02:I think there's been a lot of hype. It started with generative AI, and now agentic AI has taken the scene. A big misconception is that we need agentic AI to solve all our problems. A lot of the time, most of the issues you have are quite simple in the generative AI space. You just need RAG or a simple LLM prompt. You don't need these very intense, multi-step agentic systems that are also very expensive.
SPEAKER_00:Yeah, I agree with that. I think a lot of discussion in the tech field is about solving for a solution rather than solving for a problem, if that makes sense. People want to use agentic AI because it's a buzzword. I know we talk about buzzwords all the time in tech, but agentic AI is a huge one right now. And like Rosal said, it's become this very processing-heavy thing that people reach for. Even RAG is sometimes unnecessary. You can go for the lower level.
SPEAKER_02:Just a prompt. Yeah, just a prompt. Prompt engineering.
SPEAKER_00:Yeah, you don't always need to be using an LLM, even. But it's fine. The hype will pass.
SPEAKER_01:Will calm down, maybe. For yourselves as software engineers, how is the role of software engineering changing as GenAI and smarter IDEs become more commonplace in the everyday toolbox?
SPEAKER_00:You're finding a lot of vibe coding. I know we talk about vibe coding a lot, but you can see that people are struggling to understand their own code anymore, because it's just being generated for them. With IDEs, sometimes through LLMs like Gemini or ChatGPT, people will feed in a problem and have it solved for them. And the code that is generated is so specific to how you described that problem that once you need to tweak it or make it more general, firstly, the person who needs to tweak it doesn't know what's happening in the code, so they might mess it up there. But otherwise, it might immediately fall over as soon as one of the parameters you entered shifts slightly due to business requirements.
SPEAKER_01:How does that impact continuity of systems going forward, especially for like enterprise organizations?
SPEAKER_00:Well, what it's going to mean, and I think we all agree, is that the developers who are supposed to be maintaining these systems are going to really struggle. It's going to become slower, right? Right now it's the fastest paradigm: shove some prompts into an LLM, hope for a good answer back, and apply that code. But eventually it becomes slower, because now you have to understand code that you didn't understand to begin with, and then you have to add more code to it. And if you vibe coded the first solution, you're probably going to vibe code on and on and on. It's going to become a mess of junk, basically.
SPEAKER_02:I think I would actually counter you on that. You know I love to counter you. Although vibe coding can be quite dangerous, it's also a tool for software developers. It's allowed us to learn a lot faster and to push into areas of tech that we haven't used before. So if done correctly, it will actually speed up your development and allow you to be a lot more autonomous, which is where I think the software industry is going now. We're moving away from very siloed roles where people are either a front-end developer or a back-end developer. Now we're having to become a lot more autonomous: you have to maintain the entire system, build the entire system, handle business requirements. You're expected to be more of an engineer or an architect rather than fill just one role, which is exciting.
SPEAKER_00:No, I 100% agree. I think it's always easy to slip into the dangers of AI and how it's going to ruin your life. It's interesting we're having this conversation now, because we've had it before. Two months ago, I was asked whether I use generative AI when I code, and at that time I didn't. But then I needed to join a new tech stack: I needed to learn new languages and a new framework. And I found myself explaining the problem and asking, what type of solution does this framework lend itself to for this problem? And I will say, it's much faster than when I used to go watch 10,000 tutorials and Stack Overflow questions. And I feel like I understand it much better, because there's someone there explaining a solution to the problem as it's being generated, if that makes sense.
SPEAKER_02:You're also able to ask the question immediately, and that removes the fear of going to your techie or going searching. You just ask a generative AI agent: this is my issue, how do you solve it?
SPEAKER_00:Yeah, and we did mention at the time that you would have to be more adaptable. You need to be more focused on how to solve problems, on the most efficient way to solve a problem, not on which language or framework you're going to be solving it in.
SPEAKER_01:But I suppose the most efficient, but also making sure that it is supportable and reliable and will stand, you withstand whatever pressures that system's going to be putting on it. And just jumping back to the to the risks that you mentioned earlier, do you think that there are different risks at for you as an engineer at different stages of your career when using these tools? Rosal, do you want to take this one?
SPEAKER_02:Definitely. Like we said, it's a tool; you need to know how to use it well. If you're not able to recognize what your role is and what the role of generative AI is, it will become a problem, because then you're just going to rely on the generated code without understanding it. And I see that a lot with graduates now. I'll ask them a question, why did you do this, knowing it's completely wrong, and they say, I don't know, I used AI to help me. That is not how we expect people to use AI. You need to use it, but you need to understand where and why you're using it. And if you don't, use it to understand: ask why you're doing this and why you're not doing that. Then, on your own, do your own research and decide what the best approach is.
SPEAKER_01:So you kind of need to know enough to vet it, or if not, make sure that you're asking the right questions.
SPEAKER_02:I don't think you should be using it to do all your research, your design, your implementation. Do your research on your own and use this as a check. Or as a buddy. It should be your buddy. Rubber duck, level two.
SPEAKER_01:You know, I I 100% agree. No, yeah. Hannah, jumping onto you mentioned data once or twice already. Why do so many organizations think more data is better? And what's a better approach that they should be taking or could be taking?
SPEAKER_00:You know what, there is kind of this narrative, and this is from the conferences we've attended as well, that if you have data sitting somewhere, why is it sitting there and how can you be using it? And I think that's a good thought process to have. But what you also need to be asking is: should we be using our data to train a model? Is that really what we need our data for? And how do we need to structure our data? That's becoming so important in this space. One of the biggest things when we develop a POC, or a fully fledged chatbot, is looking at your data first and seeing how it needs to be structured for your specific model to understand it. Do you need to retrain your model on your data, or does the structure of your data need to change? There's this big misconception that any type of data can just be thrown into an LLM, and the LLM will be able to reason about and understand that data. That's just not how it works. And the big message I want to land here is that data analysts are important. You need people to look at your data and understand what's happening there before you decide you can feed it into a bot, before you decide the application for the data. It's not really good enough to just know what data you're collecting, because a lot of the time it's unstructured data, and a lot of the time it's bad data. It's not correct, it's nonsense, gobbledygook, especially in the case of audio transcripts or anything like that. How good is your transcription software, especially in the African context? How much processing are you going to have to do on that data beforehand? And if you're going to ask me next, should you be processing your data? Almost 100% yes, depending on the data. Sometimes it's really nice, like a SQL table where you're just collecting names and fields, which is lovely.
SPEAKER_02:But almost never.
SPEAKER_00:I've never had a never had someone come to the come to me with that problem. So the the if the question is, is garbage in, garbage out, it's that's still true, right? LLMs haven't solved the problem of uh bad data.
SPEAKER_01:How does RAG change this conversation around data? Or does it change this conversation around data?
SPEAKER_02:Definitely not, not at all. You still need really good data for RAG. Garbage in, garbage out. Maybe we should give some introduction first. RAG is retrieval-augmented generation, and it's a very popular business use case where you augment an LLM with extra documentation that you feed it. The perks are that your LLM is now grounded, you won't get hallucinations as much, and you're able to personalize the answers you get from your LLM with very specific data. Regarding the data: the process of retrieval-augmented generation really requires the data to be good. If it's not good and the model can't find the relevant answer in your data, or you haven't prompted it correctly and put the right steps in place, it will either hallucinate with no source, find a source but still hallucinate, or say "I don't know". Either way, you get an irrelevant answer. The ways in which you have to tune and structure your data depend very much on the use case, but data quality still plays a critical role.
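The retrieve-then-generate flow described above can be sketched very roughly like this. A toy keyword scorer stands in for a real embedding model and vector store, and the documents and wording are illustrative only:

```python
def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by naive keyword overlap with the query.

    A real RAG system would embed both sides and query a vector index;
    the overlap score is just a stand-in to show the shape of the step.
    """
    q_terms = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q_terms & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_grounded_prompt(query: str, docs: list[str]) -> str:
    """Augment the user's question with retrieved context so the model
    answers from the supplied sources instead of free-associating."""
    context = "\n".join(retrieve(query, docs))
    return (f"Answer using ONLY the context below.\n"
            f"Context:\n{context}\nQuestion: {query}")

docs = [
    "Leave requests are approved by your line manager within two days.",
    "The cafeteria opens at 08:00 on weekdays.",
]
prompt = build_grounded_prompt("who approves leave requests?", docs)
```

Note that the "ONLY" instruction and the quality of what `retrieve` returns are exactly where the garbage-in, garbage-out point bites: if the stored document is stale or wrong, the grounded answer will be confidently wrong too.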
SPEAKER_01:Interesting, that's really cool.
SPEAKER_00:I feel like, sorry, just to chip in there, I feel like what's happened for me is RAG has almost highlighted this garbage in garbage in. 100%. Yeah, because it becomes ever so much more frustrating when a user comes to you after you've made this POC with RAG and they say, this answer is wrong, like your model is wrong, right? And then you look at the answer, you look at what it used to answer the question, and it's it's done everything correctly, but you can see that the data that was provided had an inaccurate answer or an out-of-date answer. And that's a big issue, especially out of keeping RAG up to date has been a very interesting challenge in terms of the methods that you use to describe this data to your model, right? You have to re-describe it every single time it gets updated. So that's that's very interesting.
SPEAKER_01:Very cool. Um, switching to maybe I suppose RAG isn't so unknown anymore, but agentic AI. I think for a large part of the population, agentic AI is is this unknown concept. What is it?
SPEAKER_02:I think Rosal should take away. Oh, I actually just did something cool. Um, so agentic AI, how it differs from generative AI. Generative AI is using these large language models to generate content. Agentic AI is making use of well-crafted or designed models that have better reasoning, better classification. So they're just like a little bit smarter to solve very specific problems and specific tasks. So the key difference here is generative AI will generate in a very creative way, whereas agentic AI is meant to be more autonomous at solving problems. So we're kind of giving it a little bit more intelligence. So it's thinking for itself. Yes, the idea is that it would try to think for itself. Obviously, it does not fully think for itself, and a lot of the time the agentic systems that you come up with are not very smart, but that's the the main difference.
SPEAKER_01:So basically, it's step one into AI's world domination.
SPEAKER_02:It's step one into AGI.
SPEAKER_00:Yeah, yeah, that's very true. That's very true. I will say it's almost like what you're doing right now. You know, people, someone told you that you need to do this, you know that you needed to ask Russell about agentic AI, right? So you've got one main per main interface, and then this interface knows that I need to talk to Russell about agentic AI, and to Hala about data, and I need to talk to Matthew about sound quality, you know.
SPEAKER_02:And that's I feel like a good high-level Yeah, you're given the high-level instructions and the way in which the agent will take the path of doing it is up to the agent.
SPEAKER_00:So the agents are very good at one specific thing. Um, but if you asked uh Hala to go do gymnastics for you, that would not go super well. So maybe this main agent would be pretty good at doing gymnastics. Like, all right, but like mess up every now and then. But then gymnastics agent over here is just the best. It's the Simone Biles of gymnastics, you know.
SPEAKER_01:So basically, for for the non-techies watching, it's creating it's it's almost like you're building subject matter experts who can think autonomously. Exactly. Yeah, that is 100%. Cool, and thank you. So, what excites you both uh about agentic AI, if anything.
SPEAKER_00:I'm excited, and I think I said this last time as well, at the prospect of not using AI for stuff, and hear me out. I want the agents, I want the agent, the main agent as we discussed, to know to maybe not consult another agent, right? It could be something that translates into math, for example, because LLMs are notoriously bad at math. It's just because it's don't worry about that. We'll talk about that later. Right? But you don't need a model to solve math problems, right? You just need an agent that's capable of doing these math problems, like tough integration problems. But as we all know, it's really awful to type a math problem into a machine. So if you can tell an LLM, look, I need the integral from here to here of this very complex function, and that can translate it into math language, and then your math agent can take care of that for you. That's a very exciting part of it because it's cutting out a lot of processing, unnecessary processing that's happening. And that's what I'm a little bit excited for. And I mean, obviously, just the general benefits of agentic AI.
SPEAKER_02:Yeah. I think, yeah, I think agentic AI will definitely start to shift the way industries, not even just software industries, are kind of working where the routine tasks will now be taken over by these agents, and that gives humans a lot more time and flexibility to work on things that are a lot more high-level and autonomy in in the software industry as well, which is quite exciting. It's very cool.
SPEAKER_01:Earlier you mentioned, just on this point, earlier you mentioned that AI isn't always the right tool for the job. But do you think that agentic AI might be moving towards AI being, well, the specific agentic AI agent being the right tool for the job? So, in a way, we're kind of moving towards an era where there will be more AI for these problems that right now AI isn't the answer for. I think that was a very complicated question. I'm sorry, go to any AI.
SPEAKER_02:I think agentic AI, generative AI, machine learning, all deep learning, all of these areas within the AI realm is all very important, and every single one has a specific use case. There's lots of cases where you do not need agentic AI or generative AI. It is a lot more expensive, it's a lot more environmentally unfriendly. So there's lots of cases where machine learning and deep learning are perfectly good at solving these problems. That's not going to change. In time, there will definitely be a lot more cases where agentic AI is useful. But once again, you need to be very critical at noticing where it is needed because agentic AI is a lot more expensive than generative AI.
SPEAKER_01:I suppose going back to that metaphor, you don't always need a subject matter expert. Sometimes you just need someone to do the task. Exactly. Or you just need to do a Google search, you know.
SPEAKER_00:Like let's forget about it, you know.
SPEAKER_02:A lot of the time, like even with the POCs I worked on, people have wanted these multi-stepped agentic systems when you just need a well-pumped LLM. There's no need for it. That's very true.
SPEAKER_00:Yeah. I also uh I think that's actually going to be the problem, is it is going to be capable, like I said, uh currently LLMs are capable of generating code for you. And agentic uh systems are going to be capable of solving a lot more of the problems that couldn't be solved. And I think that's going to be more of a problem than it is going to be a solution. Not because I'm I'm like cynical about AI. I'm a little bit cynical about AI, but not super cynical. Yeah. Everyone should be a little bit cynical about AI, but it's because you are going to get this very complex system that's going to be sold to you, right? I'm predicting packages of agent AIs that are going to be sold to you. And you are going to buy it because it will work. But what you will have actually just needed is like a database management software or something to that effect. And people will not know those things exist because everyone has moved towards agenda AI.
SPEAKER_01:I suppose that's why it's so important to not just buy off the shelf, but also make sure that you're talking to the experts who can guide you in what you actually need for this problem, specifically within your area.
SPEAKER_00:Yeah, and I think in the history of humanity, we've kind of proven as long as people are investing money into a problem, it will become a solution. If may so as long as we're investing money into improving agentic AI and we're thinking about can we want it to solve this problem, it will end up solving that problem.
SPEAKER_01:Yeah, absolutely.
SPEAKER_00:But will it be the right choice?
SPEAKER_01:Who knows? Speaking about humans, how how do you think the role of the humans within all of these tasks will continue to evolve from where it is now?
SPEAKER_02:I think you will always need to be in the driver's seat, which a lot of people seem to get too comfortable and complacent with. Going back to vibe coding, vibe coding is great until you don't understand what the code is being written, and then that's not so useful because although it works, as soon as you need to maintain it, if you have buttons, you need to solve it, you're not gonna know how to do that. So as AI evolves, you will have a lot more systems that are in place that are solving problems using AI, but you will always need to be the driver of that, in control of that, knowing where and what is happening and kind of guiding the AI to the right solution because it's it's probability at the end of the day. It's not always going to give you the best answer. You still need to do research to figure out which is the best solution going forward. It will give you a solution, it's not always the best solution. That's very true.
SPEAKER_01:Before we close out, um, can each of you give us a takeaway, a final thought, or just a closing idea that you want to share with the audience?
SPEAKER_02:That's a good question. Um I think you always have to be maybe be slightly afraid of AI, but more importantly, be excited for it and always know that although it's around, you will always be the driver. So use it to your benefit, not to your fear.
SPEAKER_00:I think regarding AI, I'd say be adaptable, be smart, and don't be dumb about your data, please. I was waiting for data to come.
SPEAKER_01:Thank you so much to both of you for you know sitting down and having this conversation. I really enjoyed it, and I'm sure everyone else will as well. Um, and thank you to all of you for joining us for this episode of Take Unboxed. We'll see you at the next one.