The Talent Sherpa Podcast

The Orchestration Layer Nobody Designed

Jackson O. Lynch | Season 2, Episode 108

Episode length: 45:51

92% of companies are investing in AI, but only 7% are generating returns. The gap isn't technology. Organizations are automating broken structures instead of redesigning work. McKinsey found that high performers are 3x more likely to have fundamentally redesigned workflows before adding AI. Everyone else is bolting AI onto existing processes and wondering why nothing changed.

So here's the tough question: Are CHROs ready to be architects, or are they about to become implementers of very expensive dysfunction?

What You'll Learn

Why the playbook isn't new:

  • Strategy first, structure and roles second, talent third
  • The same principles that worked in digital transformation apply to AI
  • Three types of AI strategy require three different organizational structures

Why agents need job descriptions:

  • Role clarity becomes more important, not less, when you add AI
  • Now we're designing roles for humans and AI agents
  • Transparency matters: what the agent does, what we expect, how we give it feedback

The orchestration layer:

  • Coordinating humans and AI toward outcomes is the new work
  • The number and frequency of handoffs between humans and agents matters
  • Your orchestration layer may be your most valuable IP

The trust problem:

  • 45% of employees are hiding their AI use from employers
  • Gen Z in particular feels like using AI is cheating
  • This is a mindset shift we have to train them on

The junior pipeline problem:

  • 37% of companies plan to replace entry-level roles with AI
  • Entry-level jobs are where people learn the business
  • The answer: accelerated apprenticeship, getting people to higher-value work faster

Key Quotes

"You take a bad process, you add technology, and now you have a faster bad process."

"If AI hits fog, it's going to scale the fog. You've got to get the fog out of the way."

"Agents need job descriptions so that the humans can know and have full transparency in what the agent is there to do."

"We're not choosing among different human talent, we're choosing human talent versus AI talent. What is the best horse for the course?"

"The CHRO who figures this out is going to become indispensable. The one who doesn't is just going to be an implementer of really, really expensive dysfunction."

The Diagnostic Questions

  • Is your AI strategy clear: new product, new channels, or back office efficiency?
  • Have you designed the organizational structure to match that strategy?
  • Do your AI agents have job descriptions?
  • Are you tracking the number of handoffs between humans and agents?
  • What is your data privacy bill of rights for employees?

Resources

  • CHRO Ascent Academy — Jackson's cohort-based program for sitting CHROs and leaders actively preparing to step into the role. A practical, peer-driven experience designed to build altitude, mandate clarity, and the strategic relationships the role requires. Currently building the next cohort — sign up for the wait list at mytalentsherpa.com
  • getpropulsion.ai — AI teammates that enable leadership to focus on the work that actually drives business outcomes. Recommended for organizations where role clarity is the starting constraint.
  • Talent Sherpa Substack — Jackson's newsletter on human capital, CHRO altitude, and enterprise leadership at talentsherpa.substack.com

Talent Sherpa Podcast Transcript

Episode: Organizational Architecture in the Age of AI

Guest: Stephanie Birenbaum

Stephanie Birenbaum: Now we're not only designing roles for humans, we are designing roles for AI and for agents as well. I am of the belief, and this may not be a fully popular belief yet, but I think agents need job descriptions for exactly the role clarity reason you mentioned, Scott: agents need job descriptions so that the humans can know and have full transparency in what the agent is there to do, what we can expect from it, how we can give it feedback if it's not doing its job, right? I think that the role clarity is just as important, but that's a brand new dimension that I don't think any of us have dealt with in our careers.

Jackson Lynch: Hey there, senior leader, and welcome to the Talent Sherpa Podcast. This is where senior leaders come to rethink how human capital really works. I'm your host, Jackson Lynch, and today I'm joined by my co-host, my friend, a leader that I have on speed dial whenever I need a good mind to help solve problems. His name is Scott Morris. He's a former CHRO with the battle scars to prove it, and he's currently the CEO of Propulsion AI. Now, Scott, here is a stat that should make every CEO nervous. 92% are investing in AI, but only 7% are generating returns. And I don't know about the last 1% because that math doesn't even add up to 100. Well, the gap isn't technology. No, of course not. Organizations are automating broken structures instead of redesigning work. McKinsey's latest state of AI report, which is a really catchy name, by the way, they found that high performers, the 6% that are actually seeing positive EBITDA impact, are like three times more likely to have fundamentally redesigned the workflows in advance. Everyone else is bolting AI onto existing processes and wondering why nothing's changed.

Scott Morris: You know, it's a classic mistake, Jackson. You take a bad process, you add technology, and now you have a faster bad process, or like I like to call it, Workday.

Jackson Lynch: Yeah, I guess we can write off them ever sponsoring the pod. Thanks for that. Meanwhile, though, CHROs are being handed the keys to the transformation. The mandate expanded in 2025, future of work and AI enablement, sometimes even the entire IT function. But here I think is the tough question. Are CHROs ready to be architects or are they about to become implementers of very expensive dysfunction?

Scott Morris: You know, I think this is where it gets interesting, Jackson, because the playbook for this kind of transformation isn't actually new. The methodology already exists. We used it in digital transformations a decade ago. I think the question is whether the CHRO will know how to apply it and whether CHROs broadly are being given room to do it right.

Jackson Lynch: That is an excellent point, Scott. So today, since you and I aren't smart enough to figure it out, we have a guest to help us pull this thing apart. And she is someone who's actually done it. Her name is Stephanie Birenbaum. She spent eight years at McKinsey advising CEOs on enterprise transformation and operating model redesign. Most recently, and this is where she and I got to know each other, she was the global head of HR at Hines. That's one of the world's largest private real estate investors. It's about $100 billion in assets. It feels like a lot. 4,600 employees. That is a lot. And they do it across 30 countries, which is almost as many as I've visited. So she built the firm's first global HR function from scratch while the firm was transforming. And we did a little research and we talked to some of her friends. And what they say is that she calls herself an organizational architect, which thankfully is exactly what we're going to be talking about on today's episode. So, Stephanie, welcome. Welcome to the show.

Stephanie Birenbaum: Thank you for the very kind introduction. And I'm a huge fan of the show, frequent listener, and very happy to be here. Thanks.

Jackson Lynch: Yeah, it's gonna be fun. And you heard the setup. So I guess the lead-in question is does the playbook transfer from digital transformation to AI, or is this more genuinely different in your view?

Stephanie Birenbaum: Well, like Scott said, I think there are core aspects of the playbook that are evergreen, that are the same. Now, I don't want to downplay how transformational and revolutionary what we're going through now is. The impact that it's expected to have on the workforce will be, I think, much greater than what the digital revolution had. But that doesn't mean that we don't already know a lot of really core lessons about how to do this well. And I think one of them is to be top-down about it and not only bottom-up. So you mentioned, you know, people are just dealing with the same broken processes and automating them, rather than taking a step back, looking at a clean sheet of paper, and thinking top-down: what are the biggest places where this technology can help us create differential value? And that may not be just a thousand flowers blooming across the organization. That's going to be in very targeted ways that we need to be thoughtful about from a top-down approach rather than just throwing a bunch of tools at a bunch of processes.

And I think that was the case back in 2015 when, you know, my industry, financial services investing, at the time, everyone was talking about becoming the first digital bank or the second or the third. And everyone was talking about how ING had completely blown apart its organizational structure to look more like Spotify, to look more like a tech company, and to have scrums and cross-functional teams and all of that. And so those were, you know, the questions we were answering back then: okay, if you're going to transform for the digital revolution from the top down, what should the org structure look like? Who leads it? Where do they report? How are teams organized? I actually think that a lot of the same fundamental answers to those questions from the digital revolution in 2015-ish can be applied today.

Scott Morris: Yeah, Stephanie, let me push on something here for a second, because I think most organizations right now tend to skip the structure and roles step. And I think they do it because it's laborious. They go straight from "we have an AI strategy," kind of, to "let's deploy the tools." What's your point of view on what happens when you skip that step?

Stephanie Birenbaum: Well, I think that depending on what type of AI strategy you have, it's going to require a different type of work structure to execute it. And again, I'll pull on lessons learned from the digital transformation, but push back on me if these still apply, because I'm still forming my thinking on this too.

Back then, we sort of said, look, if you just simplify your digital strategy, think of a framework of three different categories. Are you most trying to make money from digital technology by having a more efficient backend infrastructure? You know, you're gonna use it to automate new customer onboarding and compliance and checks and all of that, and you're gonna reduce cost on the inside. Or are you using digital technology to take your existing product and reach more customers, expand your channels, have new ways of reaching new customers, new segments, hyperpersonalization, all of that. Or a different option still, are you using digital technology to have a completely different product that you are taking to market than you did before? Like John Deere, who used GPS technology and implanted that into its tractors so that you could have this automated, directed, weather and GPS enabled, much more efficient way to do the work of the tractor. That's a completely different product that potentially even cannibalizes the old product that they brought to market.

So three different types of digital strategy: new product that could cannibalize your existing thing, taking your same product to new channels, new customers, or being much more efficient on the internal infrastructure. Some companies try to do all three at the same time, right? But what we found and what I really strongly believe is that the right organizational structure and the right leadership roles that you need are different to do each of those three things. So you have to be clear about that, and then you have to choose, you have to architect your organization in the right way to enable that strategy based on what you're trying to do.

I mean, look, let's pressure test this together, but I think the same is true for AI. I think that if you're trying to use AI to bring a new product to market that doesn't exist, you're going to create a sidecar organization, you're going to give it independence, you're going to, you know, maybe even have shadow functions over there so that it can really be protected from the core, versus if you're using AI to just take out cost from the back office. You're probably going to have a cross-functional transformation center and group that operates across the silos. You're going to have very different boxes and lines. So that's why I think there is a structure and roles question that has to be answered after you know your strategy and before you start deploying tools.

Jackson Lynch: You know, it's interesting, Stephanie, that you start there. Because when I started my practice 10 years ago, I got brought into one SMB organization after the next, all of whom wanted help with their org design. And what I found is most of them had never thought about what they were solving for. They just wanted help with the org design. Like I was working with one client who said, I need you to help us figure this out. I'm like, great, are you a premium product or are you a low-cost provider? And they, a publicly traded company, had never had that level of conversation. So just to double-click on what you just shared, making sure that there's that clarity up front, even as to what you're solving for, should not be assumed. Because there are really good, successful, profitable companies that are still kind of struggling through that. And bolting on AI technology is only going to make it harder for people to execute. I said it on a pod a couple weeks ago: if AI hits fog, it's going to scale the fog. You've got to get the fog out of the way.

Stephanie Birenbaum: Yeah. Yeah. I loved that pod. And I think this is an area where just the evergreen principles of org design are true. You always have to clarify what you're solving for, what your design principles are, what outcomes are going to be different organizationally because you've redesigned. And you don't just need that in order to come up with the right answer. You need it to be able to do the change management and communications with your people to be able to explain to them why are we doing this? And they need something more than just, well, AI is changing the world and it's changing the future of work. So we have to change. No, really, why? Why is this going to help us go to market better, bring a new product, reach more customers in new ways, be less cost heavy in our back office, whatever it is, why? And so I think that's important for any transformation and restructure.

Scott Morris: You know, a topic that we talk a lot on this pod about is role clarity. And it sounds like one of the things that's implicit in what you're saying is role clarity actually becomes more important rather than less when you add AI into the mix because you can't orchestrate or design around what you haven't defined.

Stephanie Birenbaum: Yeah. And you know what? What is different, very different, from the digital transformation is that now we're not only designing roles for humans, we are designing roles for AI and for agents as well. I am of the belief, and this may not be a fully popular belief yet, but I think agents need job descriptions for exactly the role clarity reason you mentioned, Scott: agents need job descriptions so that the humans can know and have full transparency in what the agent is there to do, what we can expect from it, how we can give it feedback if it's not doing its job, right? I think that the role clarity is just as important, but that's a brand new dimension that I don't think any of us have dealt with in our careers, at least from the HR side.

Scott Morris: I love that you said that. We're excited at Propulsion AI. We're building a set of AI teammates. We do actually have job descriptions for them. We wrote them with our platform, and we're excited for the first time that somebody actually puts one of them on an org chart and acts as if they are a part of the organization. I think the point you're making is really interesting because every other technology that has come to market, even really cutting-edge technology, they really have been ways of automating and making work faster. But the interesting thing about AI is that it fundamentally calls for us to redesign the distribution of work between people and machines. And that is a place I don't think we found ourselves in previously.

Stephanie Birenbaum: Yeah, I agree. I agree. And I think that there's also a layer to redesign where we have to not only stay attuned to the number of people we have, the number of agents, and the ratio between the two, but also the number and frequency of handoffs between humans and agents. Because if we don't design to that, I think we risk adding even more complexity, more risk. Our compliance partners will go crazy if we're constantly handing off chunks of work between humans and AI and humans and AI. Now, some of that, of course, is going to be inevitable, and we're figuring that out as we go. But I think for organizational simplicity, you want to stay attuned in your design to the number of handoffs as well, which is somewhat of a new dimension.

Scott Morris: You know, Jackson and I have a couple of subjects that are really thematic for us on the pod. One of them is role clarity, I said it a second ago. The other one's trust. And there's research out there right now estimating that about 45% of employees are hiding their AI use from their employers, Gen Z in particular. And not because they're afraid, per se, of replacement, but because they're afraid of being judged for cutting corners. What does that tell you about trust and the trust environment in most organizations?

Stephanie Birenbaum: Yeah, I've noticed this with Gen Z in particular outside of corporate organizations as well. A lot of them do seem to have that instinct of, well, it's cheating. You know, if I use AI, that's cheating. And I've thought about that a lot. I'm not sure personally if it has to do with the state of trust in organizations or if it's more just about stages of maturity for younger workers versus older workers.

I mean, I think for almost all of us, when we were younger workers, we had the imposter syndrome. We felt like we had to know everything. We were always trying to keep up. There was always a fear of being the one who looked like you didn't know enough or have the knowledge. And then you become more senior, you get a lot more knowledge through your pattern recognition. You also get the humility to know that you don't have to know everything and that success is finding the people who do know and using your resources and leveraging all that.

In any case, I think you've got a situation where more tenured people, senior people who are confident in their own knowledge are more willing to say, yeah, I use ChatGPT. And by the way, here's where ChatGPT was wrong, and I corrected it, and here's my response, which is 50% me, 50% ChatGPT. Whereas a younger worker isn't yet confident enough to do that because they're trying to prove their own self-worth. They're trying to prove, look, I'm knowledgeable, I'm capable, I didn't need ChatGPT to do this for me. So that's what I think is maybe at the core of that issue. I'm sure there are trust dynamics to it as well.

Jackson Lynch: Yeah, I'd double click on that a little too, though, because one of the... this generation is entering the workforce after coming out of college. And you know what college said? Using AI is cheating. So like we're asking them to change something that we literally just trained them on.

Stephanie Birenbaum: Yeah.

Jackson Lynch: And I think that might be a contributor as well.

Stephanie Birenbaum: Yeah, that's right. That's a mindset shift that we have to train them on. It's that point that we all had to learn in our own way too: success is about getting it done by using your resources. And it's okay if one of your resources is ChatGPT, as long as you've checked it and you understand it.

I will say, I've gotten into judging HR case competitions for, you know, students from different university HR programs, which has been really rewarding, fascinating, so inspiring and impressive to see the next gen of HR leaders and get to work with them. But I have noticed that, you know, they'll have made their presentation and they'll be defending it, and we'll get the chance as judges to ask them questions. And I'll ask them a question about something that was on their slide, and they'll look at the slide and it's as if they've never seen those words before. You know, they can't... It's like, okay, well, clearly ChatGPT wrote that slide. Okay, but you need to understand it. If you're gonna use what ChatGPT gave you, you need to make sure that you actually understand it and can defend it. Not that it just sounds good, so you slap it up there. Yeah.

Scott Morris: I think it'll be interesting to see if we have kind of like we do today with financial literacy being taught in college classrooms, I think it'll be interesting to see if we have AI usage eventually taught.

Stephanie Birenbaum: I think it should be. I think it should be.

Jackson Lynch: Here's something that worries me a little bit, building on the topic we were just talking about: the younger generation that's coming into the workforce, and they're so different from those of us who've been around here for a while. Like, I'm Generation X, I was raised on hose water and neglect. "Go figure it out" was in fact the thing we had to go do. And now we have an entire generation that was just raised differently. Helicopter parenting, online shopping, instant feedback on social media, same-day shipping on everything. And so you think all the way back to kind of where we grew up and how we made mistakes and how we stubbed our toes. That was a really important part of my development. Like if I hadn't screwed up so much when I was, you know, growing up professionally, I wouldn't have been average today, like I am.

But if you look at... thank you for laughing... if you look at what Korn Ferry just put out there, 37% of companies plan to replace entry-level roles with AI. And if you're looking at the back office, including human capital, it's even higher. It's closer to 58, 59, 60%. But entry-level jobs are where people learn the business. So, like, what do we do about it? If we cut that pipeline, where do the future leaders come from? Stephanie, give us the answer to this question.

Stephanie Birenbaum: My answer is accelerated apprenticeship. That is my answer. So I think we still need the kind of apprenticeship that you talk about that I experienced in my own journey. You will skip some steps in that. So it needs to be a deliberately accelerated journey.

One example, I was at an AI conference and I met a senior banker, private banker at one of the world's largest private banks. And he was telling me juniors in private banking usually go three years in their career without ever meeting a client, without ever seeing a client. But now a lot of the work that they're doing in years one, two, and three is getting automated by AI. What do you do about that? Well, you have to get them seeing clients faster, which means they have to be skilled and prepared and trained to be seeing clients faster than three years in. So that's a different training journey. That is a different way of being thrown into the fire. That's gonna be a big adjustment for the business, especially given how obviously appropriately sensitive everyone is about who gets put in front of the client. But re-architecting that apprenticeship timeline is gonna have to happen.

I have to say though, I'm optimistic about this one. I really do think we will solve this. If I think back to my own journey, when I was an associate at McKinsey, you know, I was using Google to learn and to look up and to research. And classes of associates a decade before me did not have that. They had to, I don't know what they did, go to libraries. I'm not sure. But you know, we still got to the same place. I still got apprenticed. And I may have been able to get more reps in because of it, right? Because it took me a shorter amount of time to research an item and then move on to the next item. So maybe it actually benefited me that I could skip some steps. I am actually optimistic about this one.

Jackson Lynch: Yeah, speaking for the generation that came before you, what we would do is we would just take the tablets and write cuneiform. But once the Rosetta Stone came and we could translate hieroglyphics and Sanskrit into something that we could... it made things so much easier. Okay.

Stephanie Birenbaum: Yeah, well, we consult the Oracle, right? Yeah, no, one of my favorite partners that I worked with in my early days used to always say, consult the Google.

Jackson Lynch: Yeah, they moved from Delphi to Mission Valley.

Scott Morris: So, you know, Stephanie, I know you were with McKinsey, and so sorry to bring up PwC, but their response to this has been smaller cohorts of associates, but a deeper investment in each person. And I wonder if that's sustainable and whether organizations can afford to do that, but I guess we will see.

Stephanie Birenbaum: Yeah. So I... this is news to me, and I'm really interested in it. So smaller cohorts of entry-level associates coming in who will progress eventually.

Scott Morris: But a deeper investment in each one of them. Yeah.

Stephanie Birenbaum: Yeah. Well, professional services is undergoing a lot of changes right now, but that's pretty risky in consulting because consulting historically has a very, very high attrition rate from associate to partner because of the lifestyle, the intensity, fragmentation, and a lot of the value prop for young talent going into consulting is they're looking for a great exit to their clients eventually. Not everyone wants to be the partner long term. So if you're hiring smaller cohorts of associates and hoping that a larger proportion of them will make it to partner, that's risky, I think. But there's a lot, there's a lot else about that business that's changing right now. So I'm not sure.

Scott Morris: I also think we have to rethink what an entry-level role means, and maybe question whether the entire pathway isn't morphing too. It's not just that that work is going away. It used to be that in HR, at least, you started as a low-level generalist and then you became a more senior generalist, and maybe you went into a center of excellence and got some specialist expertise. But I wrote an article recently, and one of the points I tried to make was that there are going to be disciplinary specialties that need to get included in functions where they haven't existed before. In that article, I argue that once you cull out all of the transactional and compliance aspects of HR, what you're left with, the strategic elements of HR, isn't enough alone. You have to bake in data science, customer experience, product management, lines of thinking and expertise that traditionally lived in other functions. And I'm not saying they don't exist over there. I'm saying there are new HR-focused roles that relate to those disciplines now. And I wonder whether those are the new entry points for junior people, and whether the more senior roles get fed by lateral entries, perhaps from other parts of the business, and in some cases from people leaving consulting and bringing the expertise they gained there to bear.

Stephanie Birenbaum: I am inspired by the university HR talent that I've met and how much they are learning about AI. This whole case competition that I judged was about AI transformation enablement. And so we do have, thankfully, a cohort of young people who are coming in ready to do the kinds of new jobs that you've described. And I would encourage any of them that are listening: take your cross-listed courses in data science and all of that. Take those cross-listed courses, because I agree with you, Scott. I think those will be the important entry-level jobs of the future.

Jackson Lynch: Yeah, I'm gonna go back. I was in a paper mill in rural Texas where everyone got pneumonia because we'd go from hot to cold to hot to cold to hot to cold. You had these jobs where you had to rod the boilers, which is the most unsafe and awful experience I've ever seen anyone have in my life, where they're literally in the boilers while they're going and they're pushing the rods to make sure nothing gets a fire started on the side. But as an HR person, we were right there too. Like I remember distinctly doing EEO-1 reports by hand. I remember having to sit down with the union guys and manage through labor relations, like real time, without a textbook. None of that stuff came from university, but that was the experience that we all got.

Here's my concern. My concern is that we can automate most of that stuff, and in fact, a lot of it already has been, but we'll lose the wisdom that comes from recognizing the right answer versus the wrong answer. And we've got to be able to use AI, especially with the generations that are coming up, to help us reinforce the wisdom, to test the thinking, to really push on where we can play. Because ultimately, and Stephanie, I think this is where it gets really unprecedented, we're not going to just be managing a workforce anymore. So the old playbook around how to handle people is only part of the equation. Because we're going to be coordinating humans and AI agents, and the handoffs, as you mentioned, between them.

I was reading a guy named Kevin Oakes. He calls it the rise of digital work twins, which is AI that's trained on the employees' expertise that becomes semi-autonomous and focused on collaboration. So thanks for listening to my TED Talk introduction to the question, but how should we be thinking about this?

Stephanie Birenbaum: Well, with digital twins built on employee expertise, I am very concerned about the data privacy and IP implications of this. And I do advocate for HR and legal and compliance to really get out in front of a data and digital privacy bill of rights for employees and be open, honest, transparent, go on the record with employees to say this is how we're using your data, this is not how we're using your data.

I don't know of anyone who's trying to build digital twins based on individual employees, other than some people I know of who are trying to do it based on US presidents. But otherwise, you know, look, I would get pretty insecure if someone said, okay, we want to build a digital twin of Stephanie Birenbaum. That's everything that you know. We want to feed into it everything: all your handwritten notes from grad school, every paper you've ever written, all the major communications, everything. This is you. Who owns that IP? Do I own that IP? Does the company own that IP? I think I should own it. And frankly, I would be interested in investing in that if I could be confident that I would own the IP. But I don't want the company to own that IP. I don't think any employee would.

So I think there are some really important, you know, legal and ethical questions that need to be sorted. And in the absence of a lot of regulation around this, that does leave it to companies and to HR leaders to shape these things at an institutional level, at a company level. What are we going to promise our employees in terms of privacy, in terms of data ownership and all of that?

Jackson Lynch: Yeah, it's interesting. As we've gotten more exposure with this pod and the things we've written, I've had a number of companies reach out to me and say, okay, here's what we'll do. Let us take everything you've ever written, everything you've ever put out into the universe, the podcasts and speeches and keynotes and everything else. We'll put it into this soup, we'll stir it up, and then that digital twin of yours can in fact answer questions in a coaching way, with all of your stuff, real time, 24-7, all around the globe.

And that blew my mind, right? And it scared me, because, like, what if it turns out I'm a jerk? That would be awful. You know, you interact with a digital twin and you find out that you really don't like yourself very much. But that's for me and my therapist to talk about at a later time. The point is, there's some stuff going on there that we've never, ever, ever thought about before.

And even at more senior levels, I think I read something from Gartner that predicted that by 2028, something like 15% of day-to-day decisions are going to be made autonomously by AI agents. And that's not just low-level people. I worry for the last CEO who thinks they can be a better CEO than an AI agent with an IQ higher by a factor of, you know, 500, and the ability to digest much more data to make the right business decision. But even bringing it back to 2028, how do we figure out which 15% is the part we want to automate? Is that... our job?

Stephanie Birenbaum: I think that is a very important question, and that it is the shared leadership team's job to decide which business decisions it's going to allow AI to make in a fully automated way. First and foremost, the business needs to make that call. But depending on the type of decision, there are gonna be important legal implications, and HR and IT are gonna be able to weigh in on the efficacy of how that's gonna go.

But with the whole leadership team doing it together, you can take it back to where we started this conversation, all the way back to the top and to the value tree of how this organization makes money and where it's gonna get the most value from using AI. A lot of companies in my space, in investing, are finding we can get so much more bang for our buck spending our money on our investment acumen, like making a digital twin of an investor who can be that much better at investment decision making, than if we spent the same amount of money automating finance and cutting our number of accountants. Right? There is so much more value to be gained in making the better investment decision. So you have to make that decision first, I think, to decide what's the 15%: where are you really going to focus, where do you get the most bang for your buck.

Scott Morris: It's such a good point, Stephanie. A minute ago you raised some legal questions that organizations have to figure out. I think there are some performance management implications to this too, because when we're talking about that kind of collaboration, or digital twin, or whatever we call it, we're dealing with a performance management system that wasn't built for this kind of interaction. When the human-AI collaboration fails, who's responsible? And when it succeeds, how do you attribute credit? Have you seen anybody... I mean, nobody has the answer to this yet, but have you seen anybody leaning toward things that look promising on that front?

Stephanie Birenbaum: I haven't seen it yet. I haven't seen it yet. But I wanted to ask you: when you were talking about how your agents in Propulsion AI have role descriptions, and you're excited to see them show up on the org chart someday, do you envision a world where they report to a human manager? And is the answer to who's accountable therefore really that manager who's overseeing the functioning of those agents?

Scott Morris: Yeah, 100% we do. And we've taken the point of view that we shouldn't be looking, as an organization, to automate jobs. We should be looking to automate parts of jobs. And then we should be redesigning the work, the remit, because when a human releases that work, they now have capacity to do something different. And we should look at, again, I think your point is so well made, what is really driving value in the organization, and how do we redesign the human role for that condition?

And so, you know, the agents that we build, they're very, you know, focused on delivering value in a certain way. And as they do that, a part of the human responsibility or the human involvement can be let go, but that human can now adopt something that they've always said, hey, wouldn't it be great if we could do this, but we just don't have the capacity to do it. So they're actually force multipliers.

And the point I made a second ago about the user experience being built into HR, well, it has to be built in because it's not enough to deploy a system. It's not enough to have an AI agent. You have to have something that's actually being used and creating value. The overall HR leader or the leader of that function has to be thinking in those terms. I have a digital teammate on my team. Is that digital teammate delivering value in the way that I want, which is to free my human teammates to do the things that they haven't been able to do in the past?

Stephanie Birenbaum: Yeah. Yeah.

Jackson Lynch: Let me lean into this one though, because that requires a skill set that almost nobody in human capital has ever been trained on, and that is systems thinking. You've got to be able to look at workflows starting with the outcomes you have in mind and work your way all the way back. And if you doubt that we're bad at this, look at the job descriptions at the 98% of companies out there today that don't use Propulsion AI. You're welcome for the ad. We don't do this naturally today. And as we shift from owner of talent to orchestrator of capability, we've got to change how we design the work.

Right now, I don't know a lot of HR folks that do that naturally or even maybe see that that is part of their accountability. And yet I think that's gonna be one of the most important elements of their job as we move forward. What do you guys think?

Stephanie Birenbaum: I think you're right. This is something that is new for this transformation. I've heard it called the orchestration layer. And by the way, I took myself back to school on this: I went to Columbia Business School Exec Education and took a course. There's a lot that we all need to learn and upskill ourselves on here.

One of the things I learned, and the professor talked about the orchestration layer probably being a company's most valuable IP when it comes to how they're using AI at their company. Not the agents themselves, because the agents themselves are probably going to be doing similar kinds of jobs to be done and tasks. I mean, we'll use companies like Propulsion AI, like others, in order to fill in some of those functional pieces of work. But connecting that functional group of agents to the humans, how that feedback loop works, how that performance management works, all of that, that is called the orchestration layer. And it requires systems thinking to see it, to design it, to manage it.

And that is important. This isn't just me talking; my professors at Columbia Business School say that getting the orchestration layer right is one of the most important tasks in getting this whole thing right, and it's where an organization can differentiate itself from others.

Scott Morris: You know, I think that says the role of the manager is actually shifting from owner of talent to, maybe, orchestrator of capability, coordinating humans and AI together toward outcomes. Stephanie, what do you think that means insofar as how we develop managers?

Stephanie Birenbaum: Yeah. Well, I see it a little bit as a natural progression of how we have developed managers in the past. We've always prepared managers to see their work as a system: break apart tasks, delegate them, think about the quality of talent differentially, and staff their teams appropriately. But then we moved more toward not just picking the best horse, but choosing the horse for the course. How can you, as a manager, choose the right talent for the task that has to be done?

And now just the natural progression of that is we're not choosing among different human talent, we're choosing human talent versus AI talent. What is the best horse for the course, the work that needs to be done? Maybe it's a natural progression and just a natural complication to what we have been developing managers to do.

Jackson Lynch: Yeah, I think that's exactly right. And I'd add that using AI to sharpen our thinking, specifically around systems thinking and workflow design, is gonna become increasingly important; every manager will need a sense of what that is and what it looks like. And it's gonna be so new that we'll have to have some sort of agentic solution for training managers on how to think about using the agentic solution. So that's gonna be a lot of fun.

Stephanie Birenbaum: It's meta thinking, it's next level thinking. And I think we've just hit on what is one of the most important skills humans need to actively develop in order to stay one step ahead of where this technology is going.

Jackson Lynch: So if you had to give a CHRO just one piece of advice on building this orchestration layer, this orchestration capability, something they can start on not in '27 or '28, but this quarter, this month, next week, what would that be?

Stephanie Birenbaum: Make it a full team effort, make it top down, start with strategy first. And I know that's not easy. All of those things are really hard, and they're not a hundred percent in the CHRO's control. But that's the point. It needs to be a shared general leadership task to first figure out how we're gonna make money with this technology, and then how we're gonna restructure in order to do that. It's gonna be a shared task to answer those questions.

Jackson Lynch: And I can't think of a better way to wrap up that segment. And so, what that means, of course, is if you're still drinking coffee and it's still warm, congratulations. You probably hit pause and refresh because we've been talking for a while. But don't worry, here is your Talent Sherpa summary. Or, as Scott always says, the org chart is a screenshot of yesterday's best guess. AI is about to hit refresh every six weeks. Maybe we should stop laminating them, and I think HR is gonna need more wine. I mean, you are quite loquacious on this one.

Scott Morris: I've literally never said that, but maybe I should. I don't know. All right, Jackson. Here is what people should remember. This playbook isn't new. Strategy first, structure and roles second, talent third. It worked in digital transformation, it's gonna work for AI. What is new is the orchestration layer. CHROs are architects for humans, AI agents, and the governance that connects them, designing the handoffs intentionally, discovering where the gaps are when something breaks. That's the new work or part of it for CHROs.

The trust question has expanded, too. Employees want to know what AI sees, what it decides, and what it means for their future. And 45%, remember, are hiding their AI use. That's a trust problem, not a technology problem.

Last piece of the summary I'd say is that the junior pipeline is shifting and morphing. And if AI handles entry-level work, then you have to be thoughtful about building what the apprenticeships used to provide. Judgment, intuition, relationships. It doesn't happen by accident. We have to be intentional with it. How'd I do, Jackson?

Jackson Lynch: Yeah, for me, the big takeaway is this: the CHRO who figures this out is going to become indispensable. The one who doesn't is just gonna be an implementer of really, really expensive dysfunction.

Scott Morris: I think this has been a great conversation. Stephanie, you're amazing. We look forward to having you on. Thank you so much for joining us.

Stephanie Birenbaum: Well, thank you so much for having me. This was a lot of fun. And I agree, HR is probably gonna need a little bit more wine before this is all over with.

Jackson Lynch: A lot more wine. And thank you to everyone out in YouTube land or listening to us wherever you get your podcasts. I just want to say thank you so much for tuning in to the Talent Sherpa podcast. This is where senior leaders come to rethink how human capital really works, and I think this episode is a great example of where we're trying to push the thinking. And by the way, this is a lot of fun to do with all of you, whether you're listening in Chicago or Madrid or Singapore, which still blows my mind.

But before we go, a quick shout out to one of our favorite listeners, Rachel in Denver. Thank you for being a part of the Talent Sherpa community and for joining the CHRO Ascent Academy. So this is really exciting stuff.

Scott Morris: If you enjoyed today's episode, please do us a favor and hit the like button on it, or even better, subscribe. Leave us a review on your favorite platform, whether that's Apple Podcasts or Spotify or YouTube. That's how we grow the pod, how the content gets shared with other senior leaders.

Jackson Lynch: And if you're a CHRO wondering where to start your AI journey, because your CEO is asking what your strategy for AI is, yeah, you probably already guessed this: I think you should check out Propulsion AI at the very cleverly titled www.getpropulsion.ai. 92% of companies are investing in AI, and only 7% are seeing returns. The gap's not technology; it's automating broken structures. Propulsion AI is going to help you redesign that work before you automate it. Visit www.getpropulsion.ai and let Scott know that I suggested it.

Scott Morris: We talk about tough topics on this podcast, subjects that are not just conceptually difficult but difficult to implement. The good news is you do not have to do it alone. If you're an emerging CHRO, or if you are ready to accelerate your impact, head over to mytalentsherpa.com. Jackson and his team have curated a set of resources that'll help you navigate, elevate, and deliver results from day one. You can also sign up for the Talent Sherpa Substack; you'll find both of those at mytalentsherpa.com, plus a link to Jackson's coaching offerings. And I say it a lot because I mean it: you don't have to do this alone. Jackson's a great guide.

Jackson Lynch: I appreciate that, Scott. And let me tell you, you know, I've loved this episode. Stephanie, thank you for joining us. Scott, we'll see you next week, and that's it for now. Until next time, keep raising the bar. Architect the work before you automate it, and keep on climbing.

End of Transcript



Podcasts we love

Check out these other fine podcasts recommended by us, not an algorithm.

Future of HR, hosted by JP Elliott
Hacking HR, hosted by Hacking HR