
Digital Transformation & AI for Humans
Welcome to 'Digital Transformation & AI for Humans' with Emi.
In this podcast, we delve into how technology intersects with leadership, innovation, and most importantly, the human spirit.
Each episode features visionary leaders from different countries who understand that at the heart of success is the human touch - nurturing a winning mindset, fostering emotional intelligence, soft skills, and building resilient teams.
Subscribe and stay tuned for more episodes.
Visit https://digitaltransformation4humans.com/ for more information.
If you’re a leader, business owner or investor ready to adapt, thrive, and lead with clarity, purpose, and wisdom in the era of AI - I’d love to invite you to learn more about AI Game Changers - a global elite hub for visionary trailblazers and changemakers shaping the future: http://aigamechangers.io/
Digital Transformation & AI for Humans
S1|Ep67 Building with Intent: Innovation, Human-Centricity & Risk Mitigation in the AI-Driven Construction World
In this episode, we’re joined by Prakash Senghani, a digital construction innovator and CEO of Navatech, to explore the future of AI-powered construction in one of the most ambitious regions in the world – Dubai and the UAE.
From intentional innovation and human-centric leadership to risk mitigation in high-stakes projects, we dive deep into:
🔹 What it means to build with intent in an age of speed, automation, and smart systems
🔹 Overlooked risks in AI-enhanced construction and how to navigate them
🔹 Why human well-being and emotional intelligence still matter in large-scale urban projects
🔹 The disconnect between AI potential and real-world adoption — and how to bridge it
🔹 Real stories, future visions, and wise advice for construction leaders and tech investors
Prakash is not only a board advisor and investor, but also part of the Executive Group of the AI Game Changers Club – an elite global circle of pioneers reshaping the way we live, build, and lead in the AI era.
🎧 Tune in now to discover how to lead, build, and innovate with greater alignment, humanity, and intent.
📩 Ready to grow as a leader or explore AI-human collaboration?
Join us at AI Game Changers Club – the global movement for visionary executives, entrepreneurs, and investors.
Connect with Prakash Senghani on LinkedIn: https://www.linkedin.com/in/prakash-senghani/
About the host, Emi Olausson Fourounjieva
With over 20 years in IT, digital transformation, business growth & leadership, Emi specializes in turning challenges into opportunities for business expansion and personal well-being.
Her contributions have shaped success stories for corporations and individuals alike - from driving digital growth, managing resources and leading teams in big companies to empowering leaders to unlock their inner power and succeed in this era of transformation.
AI GAME CHANGERS CLUB: http://aigamechangers.io/
📚 Get your AI Leadership Compass: Unlocking Business Growth & Innovation 🧭 The Definitive Guide for Leaders & Business Owners to Adapt & Thrive in the Age of AI & Digital Transformation: https://www.amazon.com/dp/B0DNBJ92RP
📆 Book a free Strategy Call with Emi
🔗 Connect with Emi Olausson Fourounjieva on LinkedIn
🌏 Learn more: https://digitaltransformation4humans.com/
📧 Subscribe to the newsletter on LinkedIn: Transformation for Leaders
Hello and welcome to Digital Transformation and AI for Humans with your host, Emi. In this podcast, we delve into how technology intersects with leadership, innovation and, most importantly, the human spirit. Each episode features visionary leaders who understand that at the heart of success is the human touch - nurturing a winning mindset, fostering emotional intelligence and building resilient teams. My fantastic guest today, Prakash Senghani, from Dubai, UAE, is here to explore the importance of building with intent. I'm looking forward to diving into innovation, human-centricity and risk mitigation in the AI-driven construction world. Prakash is a digital construction thought leader, startup founder, early-stage investor, board advisor, and co-founder and CEO of Navatech. I'm honored to have Prakash as a part of the executive group of the AI Game Changers Club, an elite tribe of visionary leaders redefining the rules and shaping the future of human-AI synergy. Welcome, Prakash, it's a great pleasure to have you here in the studio today.
Speaker 2:Thank you so much and the honor is mine. Thank you so much for inviting me to the group and on this podcast.
Speaker 1:Thank you. Let's start the conversation and transform not just our technologies, but our ways of thinking and leading. If you are interested in connecting or collaborating, you can find more information in the description, and don't forget to subscribe for more powerful episodes. And if you are a leader, business owner or investor ready to adapt, thrive and lead with clarity, purpose and wisdom in the era of AI, I'd love to invite you to learn more about AI Game Changers - a global elite club for visionary trailblazers and changemakers shaping the future. Prakash, to start with, I've been waiting for this conversation for such a long time. I'd love to hear more about you, about your life, about your passions, about everything. What brought you into the field of AI and digital construction? Tell us everything.
Speaker 2:So I guess it started with construction. I was born and brought up in the UK and studied civil engineering. My father was in construction as well, so I guess my passion for construction and the built environment comes from him. We used to do construction projects at home, you know, refurbishments and extensions and things like that, and I learned a lot about construction and how things go together. At school I was quite good at maths and really good at physics, and civil engineering and construction kind of brought those two things together, so I guess my passion started really, really early.
Speaker 2:And then, as I entered the construction industry, it was just starting to go through this wave of transformation. There were these reports coming out about how the construction industry's productivity levels have been really, really low compared to other industries. We tend not to learn or change or adapt - we're kind of doing the same things that we've done since the Romans started building things. There are all of these anecdotal stories and studies, and so as I entered the industry there was always this self-reflection happening, where the industry was looking at itself and asking: how can we be better, how can we be more productive, how can we do things more safely, how can we improve quality? And the lack of digital tools was seen as one of the reasons why we weren't. There were all of these studies by management consultants showing that construction, compared to other industries, was really low when it comes to digitalization, IT spend and R&D, and so, as an industry, people started looking at how to bring some of these tools in. I was just entering the industry, so it's almost serendipity that I came in as digital transformation was happening to it. Then, as we started to develop tools, systems and processes to introduce digital ways of working and went through that true transformation - technological, but also systems, processes and people - we started to see that digital tools can have a big impact on what we do in construction, through simple things like using 3D models and digital tools instead of paper-based ones. All of this started to build, and then about six or seven years ago - I think maybe seven - I was working in the Middle East; by that time I'd emigrated and was based in Dubai.
Speaker 2:We were looking at health and safety and how health and safety was still done very traditionally. Even though other parts of construction had developed and started using digital tools, health and safety within the construction industry still seemed to be very analog and paper-based, and even where it had shifted to digital, it was still copying paper-based processes - the processes themselves hadn't evolved. The other part of what I saw was that it was a very people-driven function. It was all about the people, all about engaging and communicating with people, and the traditional ways of managing safety were very, how do you put it?
Speaker 2:Very governance driven, very compliance driven, and and not really looking at what the people and how they do.
Speaker 2:It is all about how do we stop them from doing something? Right, it was very it was how do you stop them from hurting themselves? How do you stop them from doing certain things? And that uh, kind of misalignment of trying to, um, you know, stop accidents from happening, but but also, um, stopping people from doing productive work, where it always, always meant that health and safety was. It was always that friction with the way the construction wanted to do things, and so I saw a way of using chatbots at the time, so very rudimentary AI, as you can imagine, six years ago, when chat, tpt and some of these tools were not available, I saw a way of using this tool to help break down some of these communication barriers, to be able to understand, help communicate and help the users understand what's going on, and so that's kind of where my journey came. So it started in construction, then evolved into digitalizing construction and then, using that as a basis, then moved into AI.
Speaker 1:Amazing. Thank you so much for sharing this story. It sounds really meaningful and so exciting as an evolution, as a development of your interests and of everything you are creating as value for the world. Prakash, you are operating in the UAE, a region driven by iconic architecture, speed and innovation. How do you ensure that human well-being is embedded in the construction business, from the first blueprint to the final brick, through all the aspects involved? Can you tell us a little bit more about that and the details around it?
Speaker 2:Yeah, look, I think this is one of the biggest challenges - and not just in this part of the world, in any part of the world - making sure that we're enhancing the lives of the people who work in the industry. There are obviously known risks that occur, but it should be no more risky for people to come to work in the construction industry than to go to work in an office or in another environment. We should be able to put in controls. And it's interesting that you mentioned the blueprint, because safety is seen as something that happens on a construction site, but that's not true. Safety happens way, way before that. It happens when we're designing, when we're procuring, when we're starting the conception of a project. And so trying to get people to adhere to that and understand it - that's one of the biggest challenges.
Speaker 2:And there are certain AI tools now emerging that can help identify some of the risks and hazards that can be designed out right at the beginning. People don't often think about the fact that when you're designing a building, you're also designing, for example, how to maintain it, how to clean the glass. You see these tall buildings in Dubai, and someone has to think about how somebody is going to clean the glass on the 102nd floor, right? And if you can put in a system where maybe the glass doesn't need cleaning, it cleans itself, you can create an automated system so that a human doesn't need to go up there and clean it. So there's a bunch of things you can do right from the very beginning, before you've even laid a brick or done anything on the construction site, which can help with safety.
Speaker 2:And similarly, if you think about some of the things that we're designing and engineering, if we can design them in a way that they can be built safely and effectively, that also has an impact on how long it takes to build. Sometimes there's this notion that safety adds cost and adds time, but actually, if it's done really well, over the whole lifecycle of a project you'll save time. So I guess it's, first of all, having an appreciation of the fact that safety doesn't happen just at the back end once we start building. It happens at the front end, when we're designing, at the very beginning of the construction project.
Speaker 1:That's so true, and I think it is really important, as you mentioned, to highlight what belongs where, because it starts with the depth of understanding that it's not just about building - it's not that one team is in charge of building and constructing and somebody else is in charge of safety. We are still humans, and we have to take safety as something crucial, as a cornerstone of our design thinking, and incorporate it from the early stage. It is amazing that we're having this conversation now, so that we can highlight it and remind ourselves of what really matters. I love that you are working with it in such an active way. So what does building with intent truly mean to you in an AI-enhanced world, where speed, automation and scale often overshadow deeper human needs and probably even safety aspects?
Speaker 2:So I think it means resisting the temptation to automate just for the sake of speed.
Speaker 2:Right, I think building intent is about building for the long-term impact.
Speaker 2:So we're very focused on the frontline workers, right, I think technology generally and there's a big risk for ai to kind of bypass some of the demographics right, the guys and the girls who are going to work every single day at the coalface or at the, at the very, you know, bleeding edge of construction, and some of these tools would and they have done in the past completely pass them by.
Speaker 2:So we're making sure that we build tools that are not just going to be benefiting the small demographic of people that sit in the office the white collar workers but we try to make sure it's inclusive and available to the biggest demographic that works in the construction industry, which is the labor force, the kind of frontline workers, as we call them. And I think the whole point of what we're trying to do is that the technology, particularly things like AI, are there to amplify human expertise. Right, they're there to augment them and not to replace them or overwrite them. I think the intent is making sure we make what we call purposeful progress and not just scaling and growing for the sake of growing.
Speaker 1:I so agree, and there are more risks, I assume, connected to those processes. So let's talk about that and dive deeper. Which risks are we not talking about enough when it comes to construction, especially in hyper-urban, high-stakes environments like the UAE and specifically Dubai, and how can AI mitigate these risks?
Speaker 2:So I guess the biggest one that people don't talk about in construction is the mental health risk. It's a very high-pressure, quick-turnaround industry, and, particularly in this part of the world, some of the pressures that get put onto construction projects are not necessarily the fault of the construction teams. Some of them come from decisions made completely remotely - design changes, external forces, supply chain issues and things like that - but all of these compound to create a really high-pressure environment within construction, and that creates a real environment for mental health issues. The construction industry has one of the highest suicide rates in the world, simply because people feel they can't talk to anybody or don't feel like they're going to be heard. So I think that's one of the big risks that we don't talk about.
Speaker 2:And then, attached to the digital transformation that's happening - and I think we're seeing this more generally, even outside of construction - is the cognitive overload. Constantly being connected to technology, having that constant feed of information, can lead to a massive overload, and we're constantly having to react to things. So the cognitive load is a huge issue: we're using more and more digital tools within the construction space and asking people to spend time entering information and analyzing data. It's similar to what's happening outside of construction, in our personal lives - we're constantly bombarded with information, we can access information at our fingertips, and that digital overload can have huge negative impacts on the psychology of workers as well. Particularly in high-stakes environments like the UAE, these small software additions we introduce can have massive impacts, because there are so many people working on construction sites in the Middle East. The amount of data you can collect - even small pieces of data from numerous people - can have a huge impact on the people who have to analyze it, decide what the implications could be, and coordinate that information to make sure you're capturing and understanding what's going on. So that adds another layer of pressure, where you can potentially be overwhelmed by the number of data points and the amount of data coming in.
Speaker 2:So I think we've got to be really, really careful about the we spoke about earlier on about the speed at which we kind of deploy technology and digital tools and making sure we're bringing everybody along on the journey with us. I think the risks of adding to some of the pressures because of the introduction of new tools and the pressure of constantly being engaged with digital tools can add to some of the risks and I think very few people are talking about it to some of the risks and I think very few people are talking about it.
Speaker 1:I totally agree, and I am so happy that we are mentioning it today and talking about it, because it resonates so much with what I'm preaching and sharing with leaders all over the world. This wave of technological development requires our own upgrade as human beings. Artificial intelligence and other technologies are amazing, but we can't handle those technologies and this evolution - or maybe even revolution, in this case - without upgrading our mindset, our skills, our way of handling reality, and sometimes also slowing down. All the conversations are mostly about performance, about efficiency, about exponential growth. But how are we going to survive this chase for results if we can't focus on the fact that we are still humans, we are still fragile, and we need to take care of each other in a different way? So thank you so much for mentioning this and giving depth to the understanding of why this is so important today, and increasingly important day by day.
Speaker 2:Just to kind of build on some of that as well - I think the use of AI tools can help here, and I'm seeing quite a lot of this happening, where AI tools are being used to help understand mental health issues, to act almost as a first responder: these chat interfaces and chatbots create a conversational interface so people can feel less lonely, feel like they can get some help, and do a bit of triaging. We're also seeing AI tools automating certain tasks, which frees up part of people's day, so that they're not sitting at 10 or 11 o'clock at night writing a report, because now they can use tools to do that more efficiently and effectively - and arguably with better-quality output. The other risk I guess we didn't touch on is the bias that exists in some of these AI tools. So by mitigating one risk, we could be introducing others, and we need to be really cognizant of the way we're applying AI - making sure we understand what it does and where it gets its outputs from, and not just treat it like a black box and blindly trust the answers coming out of it.
Speaker 2:I think that's a very dangerous way of us proceeding. I've seen it happening. I can see the kind of path that we're going down would potentially mean that people just you know, ask it for an answer. Ai gives you an answer and then you just blindly follow what it says without thinking about you know the repercussions of it, and I think we at NarvaTech are certainly making sure we're understanding AI right. We're trying to make our models understandable and so that when we need to make tweaks or, you know, if it gives an answer that we didn't expect, we can go trace back and understand why that happened.
Speaker 1:Exactly - this is the critical moment for anti-fragility in this context. But now I'm so interested in hearing more. Could you please share the most brutal case you've seen in your experience, where this black box impacted business or life in a negative way?
Speaker 2:so I think in the early days we saw some of these early chatbots um start to turn, you know, racist and misogynistic. The early days were there, I think, when I can't remember who was meta, who released a chatbot and it quickly started becoming anti-semitic and misogynistic and because it was reflective of the content we talked about the bias in just a few minutes ago, it basically became almost as a mirror of the way that humans interact on the internet. That's not how they interact in real life, right. And so this is other massive risk that we've got these LLMs have been trained on content that's on the internet and there's this baseline assumption that's humans are, they're not right. That's how humans are when they're interacting with the internet, and the internet has only been around for about 25 years, and so there's this real kind of risk of that we're embedding certain biases and negative connotations and negative aspects of humanity that are evident on the internet and and then now using those to kind of build tools to help us do business right. And so I guess there was that one and there was, uh, there was a google google one where it was analyzing images and it and it became again, it was really really poor, identifying non-white males right. So, um, the way that I would identify even even white females was really poor. Those were real catastrophic failures of ai very early on, but in in a way, it was good, because what that meant was that these organizations learned that they needed to put safeguards in place. They learned that the models had to be of a certain style before they became consumable by the wider public. So I think there's that element of it, but we've got to continue to learn those lessons.
Speaker 2:I think we can't just rest on our laurels. As we get AI to do more and more, to augment more of what's happening and to automate things in our daily lives and our business lives, we have to constantly check and remember that the AI can still get things wrong. It can still give you incorrect answers, so you still have to go and check the sources, understand where they're coming from, and basically build up your trust. The way I see it, we treat it the same way you'd treat a new colleague: you wouldn't automatically trust that colleague with everything. You build trust in that colleague over time, and I think that's the way we've got to treat most AI tools.
Speaker 1:I agree, and I also thought about two things. The first one: I remember those scandals where AI was presenting absolutely incredible things which had nothing to do with history, with reality or with humanity. But that was on the surface - it was easy to notice, and it happened sort of overnight with another update, so there were a lot of people who could point out that something didn't make sense at all. Subtler deviations might not be noticed in the same way, and the consequences might be much more dangerous, because that case was relatively innocent - it didn't impact lives.
Speaker 1:In a way, though, it might, as we are using AI solutions in the medical system and in other systems which truly change the course of one's life. And I think the other aspect of this is that AI systems are hallucinating more often today, because the complexity is increasing, which means it requires a different approach - and I love your solution to it, that you need to treat it as a new colleague and develop trust over time. Actually, yesterday I had a conversation with ChatGPT, asking exactly for those most brutal, most negative cases of requests from other people - what it is asked to help with, to do, to support - and I got those examples, and I can tell that it's not something we want to see more of in the world. Hopefully those AI solutions are going to preserve their ethical boundaries and distinguish what is good and what is bad, because otherwise it might create a lot of trouble for all of us and for our future.
Speaker 2:And look, you made a really good point. There are certain other areas where AI is being used that can have a real human impact.
Speaker 2:AI being used to evaluate job applications, for example - that can have a massive impact. Simply because certain jobs have traditionally gone to certain genders, the data that the AI works upon basically assumes that that's what's normal. It thinks it's normal for a CEO to be a man, as an example. Obviously we know that that's not the case, but because the historical data skews it, it learns what a CEO looks like, and when it then reviews a candidate and the candidate is a woman, it might start ranking that candidate down just because of the gender of the applicant.
And so, again, we have to be cognizant of the fact that the data we're using to train AI has biases and imbalances in it, and we need to make sure those don't propagate and get exacerbated. Because the other fear is that we're now getting to a place where AI models are being trained on AI-generated content. You see the internet now, with videos and so much content being generated by AI and posted back onto the internet, and at some point you're going to have a flip where there's more AI-generated content than human-generated content. So if we're embedding biases in the content, and that content is then being used to train the next generation of AI, we're just going to get into this real death spiral, which is quite scary.
And so, again, we have to be really, really careful that we're constantly keeping the human in the loop - in the way that we train the models, in the way that we evaluate what's coming out of them - making sure that our AI models are understandable at the human level, so that we don't run into the risks of some of these things happening.
Speaker 1:It's great that you mentioned it, because I just thought about the same thing when you mentioned the synthetic data being put into the digital space today. I've already seen some updates around the fact that data created after 2022 is considered to be unsafe, and, as we don't mark AI-generated content as AI-generated, it's close to impossible to distinguish the two - human-generated and AI-generated, synthetic, content. And then AI is basically training more and more on its own content, which gets further and further away from our human way of creating information. It's a really interesting aspect we have to address as well.
Speaker 1:So I love our conversation. We are touching on so many important things today and, by the way, to all of our listeners and viewers, if this conversation sparks something for you, hit like follow or subscribe. Subscribe and share it with one person you know would be inspired by this episode. Sharing is caring. Have you ever witnessed ai make their own decision in the project. Now we're talking about the general deviations, but let's talk about business again. What did it reveal about the boundaries of trust between human expertise and machine logic, and how can we mitigate those risks in business?
Speaker 2:So I guess in the early stages of developing our product, we sometimes saw AI categorizing certain risks and hazards in a completely different way to a human would, and initially actually we thought that that was wrong. But when we took it and showed it to you know, numerous health and safety experts, even they thought twice about it and actually maybe this is a different way of looking at it, right, or this is a different perspective, and so I guess it's kind of a double-edged sword right. So there's elements where clearly AI can get it wrong and it can hallucinate there's absolutely no doubt about it. But there's also the potential for it to help you see a different perspective, for it to kind of open things up where you, because of the constraints that we've got, based off of our own internal biases, the limitation of you know, experience that we have, the limitation of the knowledge that we can call upon and the time that we've kind of spent on this earth, I guess AI has this advantage of being able to give you this very broad and diverse window into the way of looking at things. And so there are certain aspects where I think we should also not play down, giving AI the opportunity to be a bit I don't want to use the word creative because that's not what it's doing but give it a bit of latitude and then see where it takes it.
Speaker 2:So I've been in workshops and things like that, where it says that people say no idea is a stupid idea.
Speaker 2:You know, no question is a stupid question, and so I think there are some cases where we can give AI that latitude, right, where no response is a stupid response. And again, talking about all the things we'll be talking about, it's up to us as humans, then, to understand and accept or reject what the AI is saying. I think some of these things continue to tell us that we have to keep these humans in the loop, right, we have to be able to be able to tell the AI when we think that it's gone wrong or that it could improve the way that it's done it or, like I said, change the way that we perceive some of the things, because the AI might have seen something that we haven't or, you know, it presents it in a way that we've never thought about before, and it all comes down to being able to build trust, right, being able to build trust between yourself and the responses, the outcomes that are coming out from these AI.
Speaker 1:I agree, but I see two different applications right now. The one it is about that type of communication, decision making, creative processes, and I think we can call it creativity, because actually it is helping us as well becoming more creative. And I remember one of my previous conversations on this podcast with a world-renowned artist who has been painting for Dalai Lama and Tiger Woods, and he was talking about how he is applying AI to develop his creativity and develop his talent. So Vilas Nayak is his name and it was a very inspiring conversation and I learned a lot about how we can develop our creativity as well, based on our interaction with ai. So I'm totally open to that and I think the aspect which requires more responsibility from us it is the automation aspect, the agentic application, because there we have truly much less insight and control compared to decision making, where we are still processing the outcomes and it's up to us to choose what resonates with us and what doesn't make sense and then proceed with that.
Speaker 2:Yeah, your point about creativity, though I think the point you're making is that, again, it's augmenting human creativity.
Speaker 2:Right, it's not being creative in its own self, it's presenting ideas.
Speaker 2:It's, you know, again, giving you different perspectives so that you can then do the creative bit. So I'm on the fence about whether we can call some of these AI models as being creative. They kind of the way that they do it is that they're obviously regurgitating information and putting bits of information together from other bits, and then there's a philosophical question about whether that's what humans do as well, and then that's what we call creativity. But I guess, yeah, I'm still not comfortable calling the AI creative, but I do subscribe to what you described in the second part of what you were saying is that it can help us be more creative or help adapt our creativity. Absolutely, I think we see that in creative writing. Lots of people use it to help you write better emails or write birthday messages and things like that. And so using it that way, where you're still in control of the creative process, and using it as a tool to inform your creative process, I think that's absolutely fine and that's the way we should be doing it.
Speaker 1:You are so right about it and still I can't let go the thought that you know, there are so many humans who are deeply operating with their left half of the brain, the logical one, and creativity is living on the right side.
Speaker 1:So some humans are still probably not much more creative than those models, and it always helps to get in the broader picture.
Speaker 1:I see it as a puzzle, with more pieces of information, and the more pieces you get on your table into your world, the easier it becomes to be more creative and come up with something new. So it just helps us broaden our vision and get in a broader perspective, which is enabling us to take the next step, which is amazing, and I'm using it oftentimes myself that way, because I always search for the new ways of getting more input and more data points into my own database, because actually, in a way, we are operating in a similar way to the AI solutions. We need that information into our database in order to create something, create that output, and the more data we have to work with, better the results can be. So it is really interesting. Yeah, and now, when we are talking about creativity and business, let's take a step back into the construction world. So, prakash, in your, where is the biggest current disconnect between AI innovation in construction and actual adoption on the ground?
Speaker 2:I think it's accessibility right. I guess the frontline workers, they come from a certain demographic, they have a certain literacy level when it comes to language, but also a certain literacy level when it comes to technology, and I know I'm generalizing here quite a lot, but what we see, particularly in this part of the world, is they come from the Indian subcontinent and you know some of the devices that they've got even internet connectivity right being able to access what we now consider to be almost a basic human right to be able to access the internet is not readily available. A lot of the workers here some of them don't have smartphones and so that limits them. Those who do have smartphones don't have the kind of mobile access and credit, so they have to wait until they get into a Wi-Fi zone and things like that. So all of these things that I guess a lot of us take for granted, just simply we can take out our smartphones, we've probably got an internet package with our mobile service providers and we can do what we want access ChatGPT or any of these apps, and even pay for the pro version of these right Give you better tools and things like that. It's not available to everybody. So accessibility, I think is a huge kind of barrier to getting people to adopt these technologies, getting people to understand them. You and I both know as well that the more you kind of work with these tools, the better they get right, the better they are for us right, and so if you don't have access to them, you're going to start getting left behind very, very quickly, and so that, for me, is the biggest kind of barrier to making sure that we're all as inclusive as possible is accessibility getting people access to the technology.
Speaker 2:Then the other part of what we're seeing is that a lot of the technology isn't even designed for the field workers, right? What we particularly I'm just talking about construction, I assume is probably similar in other industries is that a lot of the digital tools, whether they're AI or not, are designed for I'm going to class them as white collar workers, right? The the people like me and you who work in the office, you know, in an air-conditioned workstation with with like two screens, and the tools are designed for people like us to you know, create better dashboards to analyze data more efficiently and do all these, and and now AI is kind of being utilized to automate and make those things. Very, very few tools are looking at, you know, the people on the construction site themselves. How do we make tools like this available to them?
Speaker 2:Or even people thinking about whether the tools like this are going to be useful for this cohort of people in an organization, and so I think that's another element that's really kind of creating a barrier to getting adoption is the tools are not designed for them in the first place. They're not looking at. Does this cohort speak the language that we've designed the tool in? Right? Is their user experience, the way that they consume applications and that the same as everybody else? Likely not, and so I guess all of these things adding together means that we've got a potential to increase the digital divide.
Speaker 1:Right have have knots, and so we have to be really, really careful that the ai race and the and the kind of thing that we're doing at the moment doesn't disadvantage this group of people even more than they already are I couldn't agree more about this, because it is truly crucial to take care of humans and see them behind technologies as well, and create a better world through different projects, different types of collaborations, different types of areas where we can truly bring the best through AI, through our way of seeing the problematics first and then solving those problems with technologies, so that we see a sustainable uplift for the future.
Speaker 2:Yeah. That's truly important 100% right, and the word I was struggling to find before was inclusivity. Right, we need to make sure that AI adoption is inclusive. We cannot leave people behind, right? I think it's too important a transformation that anybody gets left behind.
Speaker 1:Yes, you are so right about it, and I see that the words when we're talking about ai, adoption are used sometimes in their own way, I would say because people is using them still to manipulate, to to run their own agenda. Right words run steps forward, so we also have to keep in mind that it is important to see the depth in it and become a little bit, at least a little bit wiser when we are adopting those technologies to truly create that inclusivity which is supporting those who really need that support. Prakash, can you share a few real stories from your professional experience around bridging the gaps we just mentioned?
Speaker 2:So we've got lots of examples where we've adapted our product because of the feedback that we've got or something we've seen within the construction industry. So I'll give you a perfect example. We built our application to be multilingual, so the AI allows you to communicate in multiple different languages. I think we're up to about 103 languages, right? So we're constantly adding new languages as we get comfortable that the accuracy of them is good, and what we initially designed it to do was for people to speak in one language and then get a response back in that same language, right? So? You know?
Speaker 2:Hindi, bengali, malayalam all of these languages are popular in this part of the world, and what found our one of our customers doing was they spoke in one language, so they spoke in arabic, they set the output language to be bengali and then got the output, and we were wondering why, why we never thought of this as a use case. When we asked them why, what they were doing was when they were on a construction site, they saw somebody walking on the wrong side of a barrier to an excavation, and because they don't speak the same language, all they could do was, you know, use arm movements, gesticulate, come back over, you know, bring them over. So job one done, right. You've stopped them from doing the unsafe act right, immediately stopped them. But because they don't speak the same language you haven't explained, he didn't explain to him how what he was doing was unsafe and most likely he'll do it again, right, five minutes later he'll go and do it again because he didn't know.
Speaker 2:But what they used our technology to do was. He then spoke in Arabic and explained what the dangers are of working near deep excavations, and then played the voice note to him in Bengali, right. The response that came back in his language, and so not only was he in that moment stopping him from doing something unsafe, but he was also hopefully now giving him the understanding of why he was doing unsafe in a language that he understood right, and so giving him that education to understand the risk next time around, with the hope that he's unlikely to do it again. I think that's a great example of how we can use this technology, make it accessible to basically help get better safety outcomes.
Speaker 1:I absolutely love your example and it is so good to remember that it is important to speak the same language and to explain not just get the results, get the solution, but also explain the why behind, so that you don't need to do it again. And that's exactly how that sustainable transformation is happening when I'm thinking about it in my world. It needs to be based on the deep understanding and acceptance, otherwise it's not going to work. It's going to just repeat and you will need to do the same thing over and over again. And what you are describing it is really something impactful, and imagine how many lives it can save as well, and probably already saved so it is absolutely amazing. Thank you so much for sharing this example. Thank you. What role do you believe AI will play in shaping the soul of the construction industry over the next three to five years?
Speaker 2:Oh, that's a deep question. I think the soul of construction, despite what people think is innovative, you can't get the types of buildings that we see in Dubai. You can't get the types of buildings that we see in Dubai If you looked at a really complex road intersection from the sky and see that complexity. That is innovation. Right, that's over years and being built. But I think we get a negative press as an industry for being, you know, archaic, backward and not being very innovative. But at its core we are. We're full of architects and engineers and people solving problems every single day.
Speaker 2:Innovation is about solving problems and people in construction solve hundreds of problems every single day, right, their conditions change, materials don't get delivered, the design changes, the weather changes.
Speaker 2:All of these things compound to mean that the construction workers, every single day, are having to make adjustments and make decisions to make sure they still reach the right outcome. So I think AI is going to potentially help bring that soul out right. It's helped us change the perception because it will help amplify that innovative soul that we've got within the construction industry. I think the other part of it is it's going to help us to hopefully be more productive, so do more with the resources that we've got and help us to be much more sustainable in terms of enable us to identify you know waste more effective and efficient ways of doing things, and so you talk about how it will shape or change. I think we're going to start because ai will help us become more more sustainable right both environmentally, um, but also from a social and a governance perspective, right the kind of truly becoming sustainable and ai will, I hope, will help accelerate that process.
Speaker 1:I think we're on the journey already, but I think ai is going to help us accelerate getting there well, definitely, definitely on the journey, but I'm also absolutely sure that artificial intelligence is going to help us both accelerate in those processes, but also enabling the aspects in ourselves which are truly needed for that transformation. And for the moment, most of the leaders don't even see that coming, but I see that coming already and I see the response, the inner response to certain things I'm talking about that we are truly longing to become that type of human being who can co-create together with AI and see those fantastic results in a totally different way. So I'm so looking forward to this development and this is an exciting journey.
Speaker 2:Definitely exciting times, yeah, and I think the other part of what AI will continue to help us do is be more collaborative, be more transparent and, hopefully, continue to help us be more adaptive. I think, in a changing world, there are changes in all aspects in what we have to build. You know, as we've got mass urbanization happening, as we've got, you know, smart cities and electrification happening all of these things there's external forces that are impacting the construction industry. I hope that AI can help us to translate those right, To be able to absorb those and be collaborative, be transparent and be adaptive to these things.
Speaker 1:And still, we have to make sure that we incorporate the ethical aspects early enough, because at a certain point there might be no way back, and we have to take care of this before that moment comes. And it's up to us, it's our responsibility, because there is still quite a small group of people in the world who is standing for this AI development truly, and we have to be that type of humans who are wise and deep and think not only about short-term profits and benefits, but also about long-term results and impact.
Speaker 2:You and I spoke about this when we first met. I'm really pleased that we are having these types of conversations on AI. I feel like the last kind of wave of transformation that happened to humans was social media. After the internet, and with social media, I think everybody went in feet first without really thinking about what the potential impacts and negative consequences of it might be, until it was too late. Right, we started talking about it when we saw social media influencing elections, mass collecting data and then selling it to others. Talking about it when we saw social media influencing elections, mass collecting data and then selling it to others, and by that time it was too late. Right, the horse has bolted here this time around.
Speaker 2:I'm really pleased that there are conversations about, you know, governance, the potential negative impacts of it On the other side, the flip side, the power of it, and so kind of finding this balance. I think there is a requirement for you know, governments and legislation to get involved, some guidelines and some controls to stop it from evolving into some potential negative aspects of it. And there are, right, it's. Look, at the end of the day, ai is a tool. In the same way, if I use the analogy of construction hammer is a tool that can be used to build a beautiful piece of furniture, but it could also be used to hurt somebody, right, right, you could hit somebody with a hammer and do some serious damage. I think AI is similarly a tool, and we've got a responsibility to make sure that we're using it morally and ethically in the right way to get some of those massive advantages that we've been talking about today. But I'll also talk about it more widely.
Speaker 1:This is gold and I feel exactly the same. I'm so proud and happy that we can have this type of conversations on the right level, with the impact we would like to create and this ripple effect. It's going to touch so many different parts of AI development and it is going to also help leaders change themselves and the ways they are working with technologies and developing their business and prioritizing things in their roadmaps. So this conversation is truly timely and needed. It is quite rare as well, because most part of AI conversations are focused on slightly different dimensions and aspects, and this is needed and we are opening up the space and letting others also start running these conversations, and it's unbelievable. I just ran two amazing events in Stockholm and I saw how those events were different from most part of other events, according to the feedback I've got, and it was a great surprise for those leaders who participated and the fact that they mentioned that this helps them. Standardizing and normalizing this type of conversations in other forums is also somewhat important, and that's what we are doing.
Speaker 2:Yeah, fantastic.
Speaker 1:I could run this conversation for so much longer, but I want your one piece of advice for leaders and business owners to mitigate risks and maximize the business outcomes in a sustainable way in this AI-powered world. Could you please share more of your wisdom?
Speaker 2:So, if it's one piece of advice, I think, don't adopt AI to impress investors or just for the sake of adopting AI. Adopt it to protect your people, right. So look at how it's going to improve something that your people are doing or make things safer, more efficient, more effective for them. Start with maybe one area of high friction within your organization or a real pain point, and then work with those people to come up with a solution that incorporates AI, that truly adds to that right, rather than just adding AI for AI's sake. I've seen so many examples of AI just being introduced because it's just all of a sudden appeared as a KPI from senior leadership, and so when setting these goals, I think it should be looking at being very outcome driven, rather than to tick a box and impress a senior or an investor or somebody like that.
Speaker 1:Brilliant advice. Thank you so much. Thank you for being here today, sharing your wisdom, your experience, and I'm sure that this is going to help so many leaders and experts working with AI developing new technologies to avoid the pitfalls, to prioritize what really matters and to help them build in business in a more powerful and sustainable way. Thank you for sharing your knowledge with us today and your vibes as well, of course.
Speaker 2:Thank you. Thank you for giving me the environment to do so and setting this thing up. I think what you're doing is fantastic. Having this conversation about the human elements of AI and the impact that it's going to have, I think, is great. We should be having more of these types of conversations. So thank you, and I appreciate you inviting me to do this.
Speaker 1:Thank you. We're definitely going to have more of this, so, to all the listeners and viewers, stay tuned. This is not the only one conversation we're going to have. Thank you for joining us on Digital Transformation and the Eye for Humans. I'm Amy and it was enriching to share this time with you. Remember, the core of any transformation lies in our human nature how we think, feel and connect with others. It is about enhancing our emotional intelligence, embracing the winning mindset and leading with empathy and insight. Subscribe and stay tuned for more episodes where we uncover the latest trends in digital business and explore the human side of technology and leadership. If this conversation resonated with you and you are a visionary leader, business owner or investor ready to shape what's next, consider joining the AI Game Changers Club. You will find more information in the description. Until next time, keep nurturing your mind, fostering your connections and leading with heart.