Energy Transition Talks
The energy industry is evolving—how will quantum computing, AI, and digital transformation shape the future? Join CGI’s experts as they discuss the latest trends in decarbonization, grid modernization, and disruptive technologies driving the energy transition.
Topics include:
- The impact of AI, quantum computing, and digital transformation
- Decarbonization strategies and the rise of green energy
- How utilities are modernizing power grids and improving resilience
- Innovations in battery storage, hydrogen, and renewables
Listen now and stay ahead of the energy transition.
Subscribe on Apple Podcasts, Spotify, or your favorite podcast app.
AI transforming energy: From IoT data to measurable value
How do you turn massive data sets into measurable value while managing the inherent risks of AI? In the second part of their discussion, CGI experts Peter Warren, Global Industry Lead for Energy & Utilities, and Dr. Diane Gutiw, Global AI Research Lead, cut through the noise to tackle the critical next steps in enterprise AI adoption.
They explore the essential balance between innovation and risk, detailing why establishing clear governance and putting guardrails in place is the fastest path to long-term success. Learn how AI is enabling a "conversation with your data," helping industries like energy and utilities improve safety, predict failures, and build a resilient operational future.
In this episode, you will learn about:
- The Four Pillars of AI Risk: A breakdown of the key risks in AI adoption—from bad actors and misuse to misunderstanding and the fear of missed opportunities.
- Guardrails as Accelerators: Why the most successful companies are the ones implementing standards and rules for AI use, leading to more reliable and trusted results.
- Unlocking IoT Data: How AI is turning data overload from industrial assets into actionable insights for predictive maintenance, fraud detection, and operational efficiency.
- Conversational AI: The future of dashboards and data analysis, where leaders can "phone their data" to ask complex what-if scenarios and get immediate answers.
- Real-World Impact: Practical examples of how AI is improving field safety, resource allocation, and performance in the energy and utilities industry.
Visit our Energy Transition Talks page
Welcome Back And Intros
Peter Warren: Hello again, everyone, and welcome to part two of my interview with Dr. Diane Gutiw from CGI. This is a continuing segment of our ongoing podcast about the energy transition and the things that are impacting our industry. Diane, would you do me the favor of introducing yourself again?
Diane Gutiw: Hi, nice to chat again with you, Peter. My name is Diane Gutiw. As Peter mentioned, I'm the Global AI Research Lead at CGI, as part of our CTO group. A lot of my focus has been on the responsible use of AI and on advising government organizations such as the EU AI Commission and the Welsh Government, where I sit on the Strategic Advisory Council. Most recently, in Canada, I'm co-chair of the Federal AI Strategic Council and have been a member of its task force for the last couple of months, working with the government to refine our AI strategy moving into the future. So it's great to have this conversation. It's very topical, and there's a lot going on.
Peter Warren: Yeah, it's happening in real time. And I always forget to introduce myself: I'm Peter Warren, Global Industry Lead for Energy and Utilities at CGI. Diane, in the previous episode we talked a lot about getting data right, changing things, the impact of the tools, and how they're affecting people. We didn't really talk about organizational change management, or about risk management and its impacts, or about how AI is extending right out into operations, into IoT networks, and so on. Where would you like to start on that list?
Diane Gutiw: Well, I think it's important to talk about risk, because a lot of jurisdictions and organizations are trying to balance risk with innovation. We know we're at a pivotal point in getting real value out of these technologies, and yet really understanding where the risks are and mitigating them is critical. In a lot of conversations, you and I have heard plenty of fear about what happens when we move to artificial general intelligence, when that intelligence exceeds our own. I'd love to have a conversation on that, so why don't we start with risk?
Framing AI Risk And AGI Fears
Peter Warren: Okay. So what do you mean by that last statement about general intelligence?
Four Buckets Of AI Risk
Diane Gutiw: Yeah, we're not at that point yet. But given the rapid speed at which these technologies are evolving, we are definitely at the point where AI can do some things better than we can. It can process information faster; it can find patterns in information faster. These are all things we as humans could do if we had unlimited time, unlimited data, and unlimited resources, but these technologies do them better. I liken it to a crane: a crane can lift a heavy truck better than a human can, and we designed it to serve us because, mechanically, it was built to do that. AI can do some things better for us. The fear is what happens when it can use that knowledge on its own, without the control of its masters. We're not at that point yet, and I don't know when we will be. But there are risks right now that it's critical we mitigate, and I divide those into four.
First, there's the risk of bad actors, people using these technologies for bad purposes, no different from the bad actors we had with SQL code and other things. The fear is that bad actors can use these tools to design things that could be used in warfare or in hacking, so we need to invest in staying ahead, in understanding how to protect ourselves and what kinds of things can be done. The EU AI Commission, in its code of conduct, has put a lot of onus on the general-purpose AI developers, the hyperscalers, to build safeguards into the tools. But it definitely is a worry: we need regulation, we need investment in research, and we need to focus on it.
The second fear, which I think is more day-to-day, is the risk of misuse. This is when people don't understand how the tools work and expose personal information or IP that shouldn't be in there, or they develop something unreliable that gives you a bad answer on the premise that it has been tested. There's also the fear of using these tools at all when there isn't enough guidance on how to use them. For that, we again need guidance and guardrails: this is how we deal with sensitive information, this is the type of information it's safe to use, this is how to protect your models by locking down your parameters and your environment.
The third is the fear of misunderstanding. This is one we're seeing where people start using AI therapists and AI boyfriends, or rely on these tools to write entire papers without checking the facts, because we misunderstand that AI is just a tool reflecting back what you want to hear. In the last podcast, you talked about exactly that: it sometimes tries to be flattering, or it connects dots incorrectly. So we need to refine that information, have a conversation with it, and understand what these tools can and can't do. AI literacy is critical to that. As citizens, and as public-sector bodies and organizations using these tools with our information, we need to understand what AI is and isn't, so we can trust it. And in day-to-day use, we can't over-rely on these tools; we can't give up our critical thinking.
The last, and this was a long answer to your question, but I think it's very relevant to where you want to go next, is the risk of a missed opportunity, because the benefit of these tools, AI for good, is critical. There is so much we can do to solve real-world problems across different industries: doing things more efficiently, easing resource-capacity concerns, providing more personalized and directed services, reducing screening ages for cancers, finding new drug discoveries. If we can manage all of these risks now, then as these tools evolve, we will have a really good foundation for designing safe technology where we are still the ones in control.
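The "this is how we deal with sensitive information" guardrail Diane mentions is often implemented as an input filter in front of the model. As a minimal sketch only (the patterns below are illustrative stand-ins; a real deployment would use a vetted PII-detection library and rules matched to local regulation), a pre-prompt check might redact obvious sensitive tokens before text ever reaches an AI service:

```python
import re

# Illustrative patterns only -- not a complete or production-grade PII scrubber.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace recognizable sensitive tokens before text is sent to a model."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(redact("Contact jane.doe@example.com, SSN 123-45-6789."))
# -> Contact [EMAIL REDACTED], SSN [SSN REDACTED].
```

The design point is that the guardrail sits in the pipeline, not in user training alone: people can't accidentally misuse what never leaves the boundary.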
Guardrails That Speed Adoption
Peter Warren: Yeah, that's an interesting point, and we're talking about risk. When we've been out talking to clients, they talk a lot about the human in the middle, about people managing systems and having decisions surfaced to them: what are the five things I should be worried about right now? What should I be looking at? We're seeing organizational change as companies adapt to these things. Oddly enough, the companies doing best are the ones actually putting in guardrails and standards, with the IT department saying: here's what you can use and what you can't, and here are the rules that stop people from pushing corporate data onto a cloud where it doesn't belong, and so on. It seems counterintuitive, but the ones that take a breath and put in guardrails and rules are moving faster and having more long-term success than the ones that just jump in, because they have reliable data and they're getting answers they can trust. So, on the next part you mentioned, how organizations move forward on innovation with that next level of trust, what are your thoughts? Some people think they should jump in right away and do everything, worried that their competitors must be doing creative things. What's the reality you see?
Diane Gutiw: You need to drive these tools with value, right? You need to understand: what value do I intend to get out of this? What is my return on investment? Where will this make a difference? As we discussed, let's stop talking about AI and start talking about the problem we're solving. Value-driven investment is what's really going to move the mark, and that includes asking: am I getting the intended value? Did I do this faster? Is the quality improved? Did I provide a better service, or quicker information to my staff? But it also means looking at the value of scale. I don't want a whole bunch of little one-offs; I want an ecosystem that brings end-to-end value and is consistent and aligned. That's where AI governance really comes in: how do I align these things? How do I reuse an agent? Rather than building ten agents that all do roughly the same thing, develop an efficient set of agents so that the outputs are consistent, the processes are consistent, and all the guardrails that go with them, how we manage our data, how we provide information back, what the user interface and user experience are like, are consistent too. If you can be consistent and scale that, that's where you start to get the real, true value.
Value First And Scalable Governance
Peter Warren: So let's look at operations and at companies trying to do things. You talked about risk in our previous call as well: organizations and the public sector are risk-averse, and the energy and utilities industry certainly is. We've done things a certain way for a couple hundred years because that way it doesn't catch fire and it doesn't blow up. So there's this real dichotomy of wanting to innovate while also realizing you have to stay safe. Reaching out into IoT networks and into operations, how do you see AI actually helping in the field, more than in the top office?
Diane Gutiw: Yeah, consider the IoT data from all of these devices, particularly in utilities and energy, where we have IoT devices on all of our assets, on our power poles, collecting data by the second. Up until now, we've been suffering from a huge amount of data overload without really being clear on how to process that data or use it to answer questions. We now have a fantastic new tool in our toolbox that can look through those huge volumes of information and gather insights. As we were saying earlier, you can have a conversation with that IoT data: based on this data, why am I having significant failures in this type of asset under these conditions? Is there something I'm missing, and how can I get ahead of it? At what point can I be alerted that something needs to happen? The same goes for fraud detection and other areas where we have massive amounts of data coming in. In the past, we were really just looking for alerts, for when something exceeds a threshold. But how can we use that data to tell us in advance that something needs attention, so we avoid downtime, avoid hits, or refine our fraud detection because we have better insights? Just like humans, the more information these technologies have available, the better the insights they can gather. AI simulates human reasoning: the more information it has, the more rounded its output will be. IoT data, to me, is one area where we can finally get real value out of all the information that's been collected.
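The shift Diane describes, from threshold alarms to advance warning, can be sketched in a few lines. Assuming a simple stream of sensor readings (the numbers and the linear-trend heuristic below are illustrative; real predictive maintenance would use proper time-series or ML models), a predictive check flags an asset before it crosses its limit, where a plain threshold check only fires afterwards:

```python
def threshold_alert(readings, limit):
    """Classic approach: alert only once the latest reading exceeds the limit."""
    return readings[-1] > limit

def predictive_alert(readings, limit, horizon=3):
    """Naive foresight: fit the recent trend and project `horizon` steps ahead.

    A toy linear extrapolation, just to illustrate the idea of acting
    before the limit is breached rather than after.
    """
    slope = (readings[-1] - readings[0]) / (len(readings) - 1)
    projected = readings[-1] + slope * horizon
    return projected > limit

# Hypothetical transformer temperature creeping upward, still under a 90-degree limit:
temps = [70, 74, 79, 83, 86]
print(threshold_alert(temps, 90))   # False -- nothing has tripped yet
print(predictive_alert(temps, 90))  # True  -- the trend crosses the limit soon
```

The operational payoff is the gap between the two answers: that window is when a crew can be dispatched without downtime.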
Safer Innovation In Risk‑Averse Industries
Peter Warren: In one of our meetings, when you made that statement about having a conversation with your data, one of the gentlemen said, "Now you're talking. Now you're making sense to me." They see a lot of people throwing things into the market saying, "I made an AI something-or-other," and in their view they could mimic all of that, duplicate the function, in a few hours. The real value is getting into that conversation with data and having a trusted interaction with it. Now, in energy and utilities, we've had a few people look at that, and at the cause and effect when something has been going wrong. And where we've applied machine vision and machine learning in systems, we've even noticed that a software or firmware update on a piece of hardware totally changed its performance.
Turning IoT Data Into Foresight
Diane Gutiw: Yeah, absolutely.
Peter Warren: And looking at this whole ecosystem, you would normally expect some of that. When we do updates to our PCs, we always notice it's not running as fast as it did before. But this was really catastrophic; something was wrong there. So what would be an example, in your mind, of a company in our type of industry having a conversation with its data?
Conversational Analytics And Agentic Models
Diane Gutiw: Well, it's like having an expert with access to all of your corporate information, images, documents, and guidelines at any time. You have the ability to use that agentic model, where you're having a conversation with an orchestrator that can send off its team of agents: let me see how my IoT data relates to this alert that came up last week, and whether I can get to the bottom of it. Or you're looking at what-if analysis, which to me is absolutely brilliant: if we know we're going to be short of staff on this day, how do I allocate my resources? Can I predict and prescribe where to put people so that if there is downtime over Christmas, I can deploy the right people to the right place, safely, for the best outcome? Those are the sorts of conversations you would have with the data, as if you had a whole group of experts sitting around the table that you could just throw these questions at. That's what really brings the value. We've also moved the dial a little on next-generation dashboards. We've been collecting this data and creating a semantic layer that feeds those dashboards. Well, guess what? We can now have a conversation with that layer and develop the future of dashboards. We had a client, an assistant deputy minister, who used this example: "I have to go talk to the deputy minister. I get six people going through my dashboards, getting me discrete answers to questions, guessing at what he wants. And when I get there, I've often missed the mark, and they're now running around getting new information in real time." He said, "I want to phone my data."
I want to phone it and ask: what's the attachment rate to primary-care physicians in rural areas? What percentage have chronic diseases? Who's going to acute care versus a walk-in clinic versus a nurses' clinic, and what's the cost of care? And what would happen if I added five more nurses' clinics in that area? That's the sort of conversation. Looking at your sector, it would be much the same. You could phone it up and say: I predict an overload of power needed in this area, and there's a storm coming in. What should I think about? What could I do? How could I deploy differently, and how do I get ahead of this potential impact for the best benefit? Health and safety has been a huge one too: how do I deploy people, particularly in Canada, with its very rural, remote, often disconnected areas, and how can I do better? What opportunities do I have to keep people safe in challenging situations? We've already seen more use of drones; there's a great example. Have a conversation with your drone: so, what did you see? What does that mean?
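A much-reduced sketch of the "phone your data" idea: an orchestrator routes a question to small specialist agents, each answering from its own slice of data. Everything here is a hypothetical stand-in (the keywords, agents, and figures are invented for illustration; a real agentic system would use an LLM for routing and live operational data):

```python
# Hypothetical data slices the specialist agents consult.
OUTAGE_RISK = {"north": 0.7, "south": 0.2}       # storm-related outage risk by region
CREWS_AVAILABLE = {"north": 2, "south": 5}       # field crews on call by region

def outage_agent(question):
    """Toy specialist: reports the region with the highest outage risk."""
    region = max(OUTAGE_RISK, key=OUTAGE_RISK.get)
    return f"Highest outage risk is in the {region} region."

def staffing_agent(question):
    """Toy specialist: reports the region shortest on crews."""
    region = min(CREWS_AVAILABLE, key=CREWS_AVAILABLE.get)
    return f"The {region} region is shortest on crews."

# Keyword routing table -- a stand-in for an LLM orchestrator's intent detection.
AGENTS = {"outage": outage_agent, "storm": outage_agent,
          "crew": staffing_agent, "staff": staffing_agent}

def orchestrator(question):
    """Send the question to every agent whose keyword appears in it."""
    answers = [agent(question) for kw, agent in AGENTS.items()
               if kw in question.lower()]
    return " ".join(answers) or "No agent matched the question."

print(orchestrator("A storm is coming -- where are we short on crews?"))
```

The point of the pattern is the one Diane makes: the leader asks one natural question, and the orchestrator fans it out to the "experts around the table" and assembles a single answer.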
Peter Warren: Well, I'm going to leave it at that. I think that's a great exercise for our audience: if they had access to all their data and a virtual expert, or experts, they could have a conversation with, what questions would they ask, and how would it respond? With that, Diane, I'd like to thank you very much for the second installment in this series. We'll pick things up again in the next podcast. Thank you very much for joining. I'm Peter Warren, and you are Diane Gutiw.
Diane Gutiw: It was great chatting with you, Peter, as usual. And I'm sure in three months we'll talk and the whole ecosystem will have changed again.
Peter Warren: It is constantly evolving. Thanks, everyone. Bye-bye.