What’s the BUZZ? — AI in Business

Evolving Your Leadership for Hybrid Teams (Danielle Gifford)

Andreas Welsch

Agentic AI is pushing leaders to rethink roles, processes, and governance far beyond another automation wave.

In this episode, Andreas Welsch speaks with Danielle Gifford, PwC Managing Director of AI, about how organizations should prepare for agentic AI. Danielle draws on frontline experience with enterprise pilots and deployments to explain why agents require new infrastructure, clearer role boundaries, and fresh approaches to governance and workforce design.

Highlights from the conversation:

  • Why agents are different from classic rule-based automation: they’re goal-driven, context-aware and can act with autonomy, which creates both opportunity and risk.
  • Where companies (especially in Canada) are on the adoption curve: pilots and POCs are increasing, but full-scale deployments need better data, guardrails, and change planning.
  • How leaders should approach agent projects: start with the business problem, map processes, and decide where human + agent collaboration delivers the highest value.
  • Workforce design and the “digital coworker”: practical advice on defining role boundaries, delegation rules, and how to evaluate outcomes when humans and agents collaborate.
  • Multi-agent orchestration and governance: how to prevent agents from converging on weak solutions and how to build review, control, and accountability into agent systems.

Key takeaways:

  1. Business first: define the problem before choosing technology. Agents aren’t a silver bullet — they must solve a real, scoped pain point.
  2. Move from experimentation to implementation: Canadian enterprises are ready to progress beyond proofs of concept and invest in production-ready agent solutions with proper controls.
  3. Agents ≠ automation: treat agents as goal-based collaborators that need explicit boundaries, evaluation metrics, and workforce redesign.


If you lead teams, product strategy, or AI initiatives and want practical guidance for turning agent hype into measurable outcomes, this episode is for you. Listen now to get the full conversation and actionable next steps.

Questions or suggestions? Send me a Text Message.

Support the show

***********
Disclaimer: Views are the participants’ own and do not represent those of any participant’s past, present, or future employers. Participation in this event is independent of any potential business relationship (past, present, or future) between the participants or between their employers.


Level up your AI Leadership game with the AI Leadership Handbook:
https://www.aileadershiphandbook.com

More details:
https://www.intelligence-briefing.com
All episodes:
https://www.intelligence-briefing.com/podcast
Get a weekly thought-provoking post in your inbox:
https://www.intelligence-briefing.com/newsletter

Andreas Welsch:

Today, we'll talk about how to evolve your leadership with agentic AI, and who better to talk about it than someone who's actively working on that: Danielle Gifford. Hey, Danielle. Thank you so much for joining.

Danielle Gifford:

Awesome. Thank you so much for having me here this morning.

Andreas Welsch:

Great. Why don't you tell us a little bit about yourself, who you are and what you do.

Danielle Gifford:

Okay, sounds good. So my name is Danielle Gifford. I am currently a Managing Director of AI at PwC. And I would say my role really focuses on two different things. One is our solutions and our products and what we're actually offering in market. So everything from strategy to governance, to literacy, to actually building and deploying models, from traditional AI (it still feels funny to say traditional AI) to generative AI to agents. And then the other part of it is actually co-leading our emerging solutions. So as I'm sure you're well aware, with the way that we do work, there's just a better way for cloud migrations, application migrations, data migrations than traditional, kind of line-by-line code transformation. There are tools that can support us in that. And so those are the two hats that I wear at PwC. I also teach on the side at the University of Calgary, so I am a professor in their MBA program, teaching an applied AI and business course. So helping business students actually understand what is and what isn't AI, how to critically evaluate it, and then how to actually look at scoping a use case.

Andreas Welsch:

That's awesome. And we talked a little bit about that before going live, so let's make sure we talk about that here on the air as well, because I think it's incredibly important to also prepare the next generation of leaders for this transformational time that we're in, and to have an idea for what's ahead and how we can lead that successfully. Yeah, absolutely. So, awesome, folks: if you're just joining the stream, drop a comment in the chat where you're joining us from. I'm always curious to see how global our audience is. And also don't forget to get your copy of the AI Leadership Handbook so you can learn how to turn technology hype into business outcomes. And yeah, what do you say? Should we play a little game to kick things off in good fashion?

Danielle Gifford:

Yes. I love a little game.

Andreas Welsch:

All right, so you'll see the sentence here when I hit the buzzer. You'll also see the surprise word, and I would love for you to answer with the first thing that comes to mind. You'll have 60 seconds for your answer. And for those of you watching us live, drop your answer in the chat, and why, as well. Are you ready for What's the BUZZ?

Danielle Gifford:

Yeah, I'm a little nervous and anxious, but I'm ready.

Andreas Welsch:

I'm sure you'll do just fine. So let's do this. If AI were a, let's see, if it were a color, what would it be? 60 seconds on the clock. Go.

Danielle Gifford:

Okay. Interesting. There's green, there's the primary colors. I feel like my mind immediately goes to yellow just because that is my favorite color. The reason I would say yellow is it's energetic. It brings light to things I would say, and being able to actually search and find and summarize, and it has most of the time, like a very good outcome. But then the other thing with yellow, and if you can even think about in stoplights, is it's a little bit of a sign of caution. And although it has this like enthusiasm and lightness and energy, there's also a bit of like caution towards how you're using it, when you're using it and what the actual application is. So final answer is yellow.

Andreas Welsch:

That's awesome. I love it. I was a little concerned about what you might say and the association, but let's hope we don't get to that part. So really good to see the positivity and the optimism, and the sun is yellow, so it gives us some warmth and energy, like you said. Awesome. I'm trying to figure out how to make a good transition to our topic, because it seems a little abrupt with all that energy that we see in the market and all that yellowness. What do you see from where you are? You're working in Canada, you're working with a lot of large companies, brands across different industries. You have a front row seat at what's happening, what people are actually doing. What are you seeing as the state of AI hype and adoption, and where are companies on their path?

Danielle Gifford:

Yeah, absolutely. I feel like it's a really good question in terms of what are we hearing, like the actual noise that's in the system, versus what's actually happening. I would say from a practical perspective, what we're seeing in Canada is companies starting to really push forward into the adoption of not only AI and generative AI but also agents, which is something that's really exciting. I'm sure, for anyone that's listening: Canadians are good, they're kind, they're humble, but they're often a little bit risk aware. And so when it comes to the adoption of new technologies, we're not always the first to hop on the bandwagon. And so what I'm starting to see in companies is not just the original approach of, let's make sure that we have our data properly in the right systems and it's cleansed and we can do the analytics on it. That's obviously one key part of the conversation, but it's actually: where are the opportunities for us to leverage and look at agentic or generative AI within our offices, both from a back office perspective and from a front office perspective? And so I would say this is the first time that I've started to see that real shift, especially having been in the AI space for, I would say, the last six to seven years. We had the hype of AI, and then we went through the AI winter, and then ChatGPT launched on the scene in November of 2022. And now all of a sudden it's not just something that's a buzzword, or something people look at and say, oh, I'll get to it. It's actually something serious that's on the agenda, that executives have put budget towards. And they're putting the guardrails around it to actually start to pilot and then move into deployment with the opportunities that make the most sense for their organizations. Very long-winded way to say: from a Canadian perspective, we're starting to see a lot of movement, not just at the top level but throughout the organization, and then the actual adoption of it into some of the systems and processes.

Andreas Welsch:

So that makes me curious. First of all, I fully empathize with risk aware, coming from Germany. A lot of times it's even risk averse, the next step up on that scale. From my perception, generative AI has become a lot more mature, whether you read the reports and it's the trough of disillusionment or the realization that things aren't as easy, aren't as simple as we initially thought. And we've gone through this before in other hype cycles. It seems that agentic AI is now coming up that slope, and organizations are trying to figure out what to do with it. How are you seeing this play out in Canada? Are there proofs of concept being spun up? Is it piloting? Is it putting things into production? Are there specific areas of business where you see companies looking at this more seriously than others?

Danielle Gifford:

Yeah. One of the things I would say on that question, or even on that topic: you mentioned that generative AI is a little bit more commonplace now, and I remember reading a report from Gartner that said, as of May of 2025, about a thousand different vendors or platforms, like the Workdays, the Salesforces, the SAPs, et cetera, have introduced some form of generative AI into their products. And so that means it's gonna be here regardless, whether you want it or not; the applications are getting turned on in the systems that you're using every day. Now we're starting to see the same sort of trend with agents, where a lot of companies and platforms are starting to look at building out the correct systems and infrastructure to actually support agents. One of the unique things, and I think we'll get to this a little bit later, is that everyone thinks it's just easy to drop in an agent. They're like, oh, I have an agent. I'm gonna give it a goal, and then it's gonna get me to that goal in simple form, and I'm gonna save money, I'm gonna save time, and I'm gonna save manpower. But it's not as easy as it seems, right? It's not that simple. When you're thinking about agents, it's very much like self-driving cars: you still need the infrastructure and the rules and the logic and the guardrails around them. If we think about Waymo, the way that it works today, you have all of the roads and the streets, you have certain logic around stop signs, you have stop lights in terms of what to do, so you know and understand what's there. But with agents, you still have to set up some of that infrastructure in order for them to be effective, in order for them to work well, and in order for you to actually have the right guardrails, controls, and processes around them within your systems. And I know that Microsoft has started to see, and they've posted this, some early gains within their business, specifically within Copilot. I think they were saying that agents have supported something like 9.4 or 9.5% higher revenue per seller, and the ability to actually customize some of the products and what they're doing. And so we are starting to see some of that from an agent side. I will say, at least from a Canadian perspective, we do see companies that are pushing towards agents and that want to do proofs of concept and pilot them, but we haven't seen as much in terms of actual full deployment, unless you're looking at a company like Cohere, which is a really famous company here in Canada that actually builds foundation models. Enterprises, both public and private, are playing around with it, but they're still crossing the chasm, I would say, of actually bringing agents into deployment. So it's good momentum and it's good trajectory, but it's really about making that leap to actually seeing the impact in your systems.

Andreas Welsch:

That makes perfect sense, given where the industry is, where the technology is, and what you mentioned: building the guardrails, building the infrastructure around it in your organization, maybe between organizations. It seems like we're really at this early moment where we're seeing the potential and we're able to prove it in certain scenarios, but there's so much more to be done to really capitalize on it. So I'm wondering: how do you see leaders viewing this topic? What's the best, or the right, way to think about it, knowing that there is so much hype, there is so much push, but there's also so much more to do?

Danielle Gifford:

I don't know if there ever is a right or wrong way to think about agents, or about technology, but I always do think it goes back to: what is the problem that you are trying to solve for? And similar to what we were talking about before, leaders, not all leaders, but leaders that maybe aren't as technical, think that it's as simple as just dropping an agent in and then it will do what you need it to do. But really what it means for businesses is that you need to actually take a look at your processes, the people that you have, the systems that you're working with, and almost do process mapping to understand all of the intricacies within that: where are the high value areas where an agent, either a single agent or an orchestration of agents working together, could provide the most impact? That's where I'm seeing leaders miss a little bit, between the actual hype versus where the power of agents can be. The other thing too, which is unique, and I'm sure that you've heard it a lot and I'd love to get your take on it: it's this whole notion that now we're gonna have digital coworkers. So what does that actually mean? That's a lot, even if we think about workforce transformation, if we think about learning and development, if we think about human resources. And so when you're working side by side with an agent, what does that actually mean for you, and how do you have that kind of collaboration or cooperation between what a human is doing and what an agent is doing, and where do you actually come together where it's human plus agent? So I'm curious, actually, if you've had conversations around that digital coworker, and how do you actually manage that?

Andreas Welsch:

Great point. I actually created two courses with LinkedIn Learning on that topic. Perfect. On how do you bring AI into your organization when it becomes a coworker. So a little plug here, and a recommendation for you to take a look at those. But I think we see a lot of times people and vendors comparing AI to humans. I've seen this term AI employee come up lately, and to me, that's such a misnomer and a mislabeling and miscategorizing of what these tools are. At the end of the day, they are tools. They have access to information. They're built into software you use every day, but they're not a replacement for, or equivalent to, your human colleague that sits at the next desk over in the office, or across from you. So I think we need to be careful how we refer to them, in general, to raise the right expectations. On the other hand, though, and that's where I'm a little conflicted myself, if you look at how work is done in a business, we see many parallels that we can now apply to agents as well. There are guidelines, there are codes of conduct, there are standard operating procedures, things like that. How do we divide work, and what of that couldn't we apply to agents? And I think especially the part about collaboration, when do you hand something over, what do you ask, what information do you need to give, is so important. As a leader yourself, I'm sure you've gone through some kind of leadership training. I was fortunate to do that earlier in my career. And there are some basics about how we delegate tasks to another person that we want to work with, or that's part of our team. It's usually about: what is the goal that I want you to achieve? What is the context in which you work? Are there tools? Is there additional data to work with, other people or other resources to use? And the last one, which is the most important one for me: how do you evaluate if the outcome is actually good? A lot of times you say, yeah, that isn't good enough. Or if you're an employee and your manager says that isn't good enough, you ask, why is it not good enough? Oh, I don't know, but do it again. I feel we're seeing a lot of that now with AI, with us as individual users almost becoming leaders. So we need to be clear about those things. But it's really this division of labor, to come back to that point: where can I delegate something safely, knowing that I won't be compromising quality?

Danielle Gifford:

Absolutely.

Andreas Welsch:

And I won't be compromising accountability that is still emerging.

Danielle Gifford:

I think too, on that point, it almost goes back to the basics of workforce design. So what are the role boundaries, like you were saying, what are the objectives, what are the tasks? And if we are gonna have these hybrid human and AI teams, what does that really mean? And I would say, and I'll go out on a little bit of a ledge here, but a lot of companies don't have great processes in place. Or when they have roles, everything that you named in terms of context and structure, like what tools, what are the boundaries, how are you evaluated, it's not that clear-cut what that actually is. And so I think what agents are doing is really forcing teams and leadership to focus on what those boundaries are and what that looks like. And so as companies and organizations start to implement agents or look at agentic AI, whether it be a solo agent or an orchestration, I think they'll actually see a lot of business redesign. And that will allow them to have a lot more value in what they're doing, because the way that things are done today isn't always necessarily the best way, but no one can point to what it is, or how it's evaluated, or who does what. It's always kind of living in people's heads, and so how do you actually get that information out of people's heads, or out of systems, to support those hybrid workers?

Andreas Welsch:

I think that's an excellent point, because a lot of these processes, a lot of these systems, especially the ones that your business runs on, were implemented and designed 20, 25 years ago, maybe 15 if you're a little more modern, or during your first move to the cloud or something like that if you're more on the leading edge. But it's not the case that you can reconfigure these things so easily. And one thing that comes to mind is, we talk a lot about workforce design, as you know as well. Depending on the size of the company and the industry that you're in, you might be going through this process every 18 to 24 months: reorganization, change. The world has changed outside, so we need to change too, we need to adapt. And I'm not seeing a lot of conversation about how your agents need to adapt. Yes, there are still finance tasks and there are still HR tasks, but once you get more towards go-to-market, towards sales, towards the consulting side, how are things going to change in the business on the customer-facing side? And how will you be able to adapt that? I think that's probably the next frontier that we'll talk about soon, once agents have become the norm in business.

Danielle Gifford:

Yeah, absolutely.

Andreas Welsch:

Which also reminds me. A couple months ago I was having a conversation with a former colleague of mine. And we were talking about this concept of agents. And I said, Hey, I think business is really changing fundamentally, and leadership is changing and leaders need to work differently with their teams. And my former colleague said, no, I don't think so. It's just another kind of automation. You're just automating a process. You used to do it with rules, used to do it with robotic process automation, maybe a little bit of machine learning. Now you do it with agents. What's the big deal? I'm wondering where do you stand on that?

Danielle Gifford:

Yeah, it's an interesting conversation, and I can see the different points of it, especially if you are a technologist. Like my fiancé, who's been in software engineering for a number of years and is now working at another company where they're leveraging generative AI and agents for cybersecurity. At a very simple level, maybe it is automation. But if we go back to what automation is: it's rule-based, it's logic, it's a predefined script. If X happens, do Y. In Calgary, it's very cold, so: if the temperature drops below X degrees, turn on the heat, right? Whereas agents are different. They're goal-based, they have context, they have objects that they interact with, they have boundaries. And just because they come up against a hurdle doesn't mean that they can't actually go around it. And so that's where the intelligence comes into play as an agent, and, very similar to what you see in reasoning models to an extent, it's thinking through: what is the best way to solve this problem? What information do I actually need, and what do I have access to? And what is my level of autonomy or authority to actually act on that action? So that's where I would say there is that big discrepancy between what automation is and what agents are: you have that flexibility, and then you have, not a lack of supervision, but for models that are a little bit more advanced, you don't have that same supervision like you would over automation, where, if the script breaks, you figure out a way to fix it, right? There's a very different set of confines, instructions, boundaries, guardrails, and access that you would give an agent versus automation. And that's where I think the difference is. And that's just one agent. Now, what if you actually paired a couple of agents together? How do they work together? And if they come up with an idea, how do they critically evaluate it and not just all agree that it's the best idea? So there are different layers of intelligence to this that make it so much more than just automation in my mind.
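[Editor's note: the contrast Danielle draws, a fixed if-X-then-Y rule versus a system that pursues a goal and can route around a blocked step, can be sketched roughly as below. All names, thresholds, and numbers are illustrative placeholders, not from the episode.]

```python
def thermostat_rule(temperature_c: float) -> str:
    """Classic automation: one predefined rule. If X happens, do Y."""
    return "heat_on" if temperature_c < 18.0 else "heat_off"


class SimpleAgent:
    """A toy goal-driven agent: given a target temperature and a set of
    actions (some of which may be unavailable), it picks any available
    action that moves toward the goal instead of failing outright."""

    def __init__(self, goal_temp: float, actions: dict):
        self.goal_temp = goal_temp
        # actions maps name -> (available, expected temperature change)
        self.actions = actions

    def step(self, current_temp: float):
        gap = self.goal_temp - current_temp
        if abs(gap) < 0.5:
            return None  # close enough: goal reached, nothing to do
        # Choose any *available* action whose effect points toward the goal.
        candidates = [
            name
            for name, (available, delta) in self.actions.items()
            if available and delta * gap > 0
        ]
        return candidates[0] if candidates else None


agent = SimpleAgent(
    goal_temp=21.0,
    actions={
        "furnace": (False, +2.0),       # blocked, e.g. offline
        "space_heater": (True, +1.0),   # still available
    },
)

print(thermostat_rule(10.0))  # heat_on: the rule fires, nothing more
print(agent.step(10.0))       # space_heater: routes around the blocked furnace
```

The rule can only do what its one branch says; the agent holds a goal and works around the unavailable "furnace" action, which is the flexibility (and the supervision question) Danielle describes.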

Andreas Welsch:

I love that. So the two points that I think I heard were: there's more autonomy in the decision making, in addition to the level of automation that you have. And then on the other side, when it's about multi-agent, how do you make sure you really get the best outcome and the best output if you pair them and have them review and critique each other's results?

Danielle Gifford:

Yeah, and I know there have been examples of agents within scientific research where you have one that goes out and does the research, one that acts as the peer reviewer, one that sorts for something else, all these different types of agents. But how do you put in the right boundaries and confines so they don't all just agree on the first answer that's given to them? What does that look like? To me, that level of complexity really puts it on another level of how you need to be thinking about them, and how you might actually need to be governing them. And not every company, or not every area, is at that specific state. But as we make this evolution, as we're saying generative AI is now more commonplace and classic prediction models are now traditional AI, which is hilarious, and as it becomes more commonplace and people get more comfortable with them, what does that actually look like? I still think we're in very early days of understanding how to set that up and have it within the right confines.

Andreas Welsch:

So agents agreeing with each other: it sounds like we're converging towards the mean, like we're doing in so many other areas when we use generative AI, when many people use gen AI. And I'm curious: you mentioned you're a professor at the university, you teach MBA students. How do you balance that, and what do you teach students? What do they need to know, especially as they're going for the MBA and moving into leadership roles and higher levels of accountability in leadership in general?

Danielle Gifford:

Yeah. I would say there's so much that you can cover when it comes to AI. But one of the big things that we try to teach in the class and focus on is what is and what isn't AI. What is the problem that you're trying to solve, first and foremost, and then how do you actually critically evaluate the different models or tools that are out there? You have tools like NotebookLM, you have Synthesia, you have Rep, and so how are you actually looking at: what was the model trained on? Who are the main creators of this? What is the actual price for this? Where does your data go? What level of access or control do you have over that? It's almost the critical thinking around how you use and leverage AI, and then how you think about the guardrails that need to be around it from a business perspective. I think traditionally in business schools there was always just a focus on theory, which is great. I think theory is important for a lot of things. But what I'm seeing now, more than ever, and this is myself included, so maybe I'm biased in this: people wanna be hands on. They wanna learn. And as we were talking about, students today already know and understand prompt engineering. So they're like, teach me how to build an agent, or show me how I can vibe code something on Lovable or Cursor. And I think they're really surprised, especially people that are non-technical or don't have that technical training, at what they can actually build in a short amount of time. And I remember even in my startup days, when we were building a website, if you were using Wix, or if you even had to outsource and find a developer, you're like: do I get a full stack developer? Do I get a front end? Do I get a back end? What does that look like? And I think I want it to look like this. And it's weeks and tens of thousands of dollars, and then you might not always get what you want, whereas now you have these tools. And so as business students think not just about their corporate lives, but also about entrepreneurship or intrapreneurship, what does that mean to them?

Andreas Welsch:

I think that's a really great way to frame it: make it hands-on, go beyond prompt engineering, and most of all, teach the critical thinking. That's really what matters when we have so much choice. That's one of the things for me: just because it can generate a lot more information doesn't mean that the quality is better. And you end up with decision fatigue and need to figure out which of these 10, 12, 30 different versions you really want. And so, more on the critical thinking skills: what is real, what makes sense, what is logical, what is plausible? And how do you check some of the facts? How do you do that, too? So great to hear how you're approaching that.

Danielle Gifford:

Yeah. I would say too, just before we leave that: one of the things, and I'm sure you're seeing this in your conversations, or at least with some of your clients, is there's now starting to be a little bit more signal, at least from the Canadian government, around what rules and regulations are gonna come for corporations when they're thinking about leveraging AI. And so if we go back to the EU AI Act, there's literacy in place for anyone that is accessing, developing, using, or deploying AI. These are the types of things that are also gonna start to become commonplace, at least in Canadian companies and in Canadian society, as we have a new Minister of Artificial Intelligence, Evan Solomon, which is fantastic. And there are these things coming into place that business students, or anyone in general, need to see, these signals, and understand how they're gonna impact businesses down the line. And so when you are building, making sure that you have, and we say the word guardrails, I feel like it's so overused, but making sure that you have the right guardrails and controls and processes in place to support the technology that is in fact solving a problem.

Andreas Welsch:

To me, that sounds like a very responsible approach to take. It's not just survival of the fittest: if you're cutting edge and leading edge, great, we want to work with you; if you need a little more guidance, or you're in different sectors where it's not all about tech and software, then, sorry, you're on your own. I think it makes sense to take people along on the journey so that the country as a whole and citizens as a whole benefit. So, we've touched on a lot of different topics, starting with the hype and where adoption is. You said companies are looking at this, generative AI is a little more established, agentic AI we're getting there, leaders should be aware of what's happening and what's real and what's not, and we need to teach our MBA students. So we've covered a lot of ground, but I'm curious: in your own words, what would you say are the three key takeaways from our show today for our audience?

Danielle Gifford:

Oh, three key takeaways. That's a good question. I would say, one: focus on business problems over technology. So business first, technology second. That very much applies to any type of AI or any type of technology, because sometimes when you're looking at a problem, it's almost like a flower: you realize that the petal isn't the problem, it might be the root. The second thing I would say is, for Canadian enterprises and for Canadian leaders: not just playing around and piloting, but really starting to take the next steps of actually implementing different forms of AI, whether it be traditional AI, generative AI, or agents, into their businesses. I know that we're risk aware, and I know that we have different rules and obligations and processes that we like to go against, which is important, and to be compliant. But if we think about competition, and if we think about how we can build a better community and a safer business for all, that's something that's really important. And then the final part of it is: AI is not just automation. Agents actually have autonomy. They have goals, they have purpose. They're gonna work within certain confines. They're gonna have different actions that they're taking. And so it's really important, when you're looking at problems and figuring out what technology to use to solve a problem, or get 60, 70% of the way to a solution, that you understand what type of technology it is and how it should actually be built and deployed.

Andreas Welsch:

Awesome. Wonderful. Thank you so much for summarizing what you as listeners and viewers should keep in mind. Danielle, it's been great having you on the show. Thank you so much for joining us and for sharing your experience with us.

Danielle Gifford:

Thank you so much for having me. I'm looking forward to reading the book and yeah, it's been a pleasure.