Virtually Anything Goes - a Made To See Podcast
This podcast focuses on interesting conversations. Each series has a different theme. Episodes cover a wide range of topics including marketing, AI, communications, healthcare, addiction, sleep & insomnia, sales, livestreaming and many more.
Get to know our guests, their stories, and their expertise. Learn something new, be entertained, and discover fresh perspectives.
It's in the name: Virtually Anything Goes.
Virtually Anything Goes is a MadeToSee.com podcast
AI, Autonomy, and Leadership: Chris Knerr on the Future of Decision-Making
In this episode of Virtually Anything Goes, we dive deep into the remarkable leadership journey of Chris Knerr, VP of Strategy (MedTech) at Veeva Systems. Chris is a leader whose path spans entrepreneurship, Fortune 500 executive roles, and a lifelong love of philosophy.
Chris shares how his early academic pursuit of philosophy shaped his thinking, his leadership style, and his ability to navigate complex human dynamics inside some of the world’s largest organizations. From his 14-year rise at Johnson & Johnson to building and exiting his own company to guiding digital transformation for global medtech leaders, Chris opens up about the turning points that defined his professional identity.
You’ll hear candid stories from inside high-stakes transformation programs, including the pivotal moments where mentors challenged him, encouraged him, or changed the course of his career with a single sentence. Chris reflects on the transition we all face when moving from individual contributor to leader: the moment you stop “knowing everything” and start empowering others who may know more than you. His honesty about fear, confidence, and learning to “lighten up” makes this one of the most relatable leadership conversations in the series.
We also explore the political realities of leadership, how to spot the difference between visionaries and survivors at senior levels, and the underestimated role of luck and timing in every success story. Chris brings a rare blend of philosophical depth and practical experience as he unpacks what leaders really need to understand about influence, organizational behavior, and decision-making under pressure.
Finally, the conversation shifts into AI, autonomy, and the future of leadership. Chris shares thoughtful, and sometimes provocative, insights on how algorithms shape human choice, what worries him most about invisible automation, and how leaders should prepare for a world where AI is embedded in every workflow.
If you’re a leader, aspiring leader, or simply fascinated by how people grow, decide, and influence at scale, this episode is a must-listen. Subscribe for more conversations that go beyond the surface.
Chris Knerr is the VP Technology Strategy (MedTech) at Veeva Systems.
He is a 20+ year multi-disciplinary Life Sciences industry veteran (Med Device, Pharma, Consumer OTC) driving change and key results in growth, operating leverage, differentiation and competitive position at enterprise scale; Chris is a former J&J VP, has founded and exited his own company, and his experience spans Fortune 50, management consulting, portfolio company, and tech start-ups.
Chris is also an industry thought leader and writer. He holds a Cornell MBA and is a frequent guest lecturer in the Cornell MBA program's Digital Strategy and Strategic Brand/Product Immersions.
Connect with Chris Knerr on Linkedin: https://www.linkedin.com/in/chrisknerr/
Lev Cribb is the Founder and Managing Director of Made To See, a UK-based Video and Livestreaming Agency, specialising in the strategic and tactical use of video across B2B organisations. Lev is also the host of the Virtually Anything Goes podcast.
Made To See: https://madetosee.com/
For more information, content, and podcast episodes go to https://www.madetosee.com or our YouTube channel @madetoseemedia
I was sitting in her office and I said, "Mary, I don't think I can do this anymore. This is so frustrating. I just don't fit into this organization. I don't have the same style." And she just paused, and she looked at me, and she said, "Chris, that's why I hired you."
SPEAKER_01:Hello audio listener. This is your host, Lev Cribb. Thank you for choosing this episode featuring our guest Chris Knerr. If you prefer video, you can also find all of our podcast episodes on YouTube or on our website at MadeToSee.com. But now I'll get out of your way and hand you over to... well, me. Hello and welcome to the Virtually Anything Goes Podcast. This episode is part of our leadership story series, where we speak to leaders from a variety of different backgrounds, including AI, healthcare, software, strategy, executive coaching, and others. And if you like what you hear in this episode, be sure to subscribe and check out our other episodes too. Today I'm talking to Chris Knerr about his leadership story, how he got to where he is today, and whether any obstacles threatened to derail his journey. Chris is the VP Technology Strategy (MedTech) at Veeva Systems. He's responsible for helping MedTech CIO and IT leadership customers maximize the value of their digital transformation strategy. Chris has worked in an advisory role with executive teams of Fortune 500 and Global 2000 companies. He was the Chief Digital Officer at Syniti and before that started and exited his own business, as well as spending 14 years at Johnson & Johnson, where he, amongst other roles, held the position of VP Enterprise Supply Chain Business System Strategy. If that wasn't enough, Chris holds an MBA with distinction from Cornell's Johnson Graduate School of Management and was class valedictorian. And I also understand that you recently started a second degree, a master's at the University of Pennsylvania, focused on the history and philosophy of science and technology with a focus on artificial intelligence. Chris Knerr, a very warm welcome to you. It's a pleasure to have you on the show.
SPEAKER_02:Thanks, Lev. Great to be here.
SPEAKER_01:Excellent. If this is your first time listening to or watching the Virtually Anything Goes podcast, stick around until the very end, when I turn control over to Chris and he can ask me his Virtually Anything Goes question. This question can be any question at all. I won't know what it is until Chris asks me, so it could literally be about anything. The only caveat is that Chris will have to answer the same question after I have given my answer as well. So Chris, judging by the introduction and your career to date, you have all the traits of a high-achieving leader. But one thing I didn't mention was that you also studied philosophy and still consider yourself a philosopher. I know also that you have 10,000 books at home, and all of that points towards you being an academic thinker. Was philosophy a passion that could have led you down a different path entirely, or did you always know early on that leadership was something you aspired to?
SPEAKER_02:So when I was an undergraduate at Columbia, my original intention was to become an academic philosopher. And that didn't work out for a variety of reasons. It has an interesting leadership component, though. I think part of what led me down a different path was that all the role models I had, not the older professors but the younger ones, didn't seem very happy in their work. That was, I guess, a bit discouraging, and it led me down a different path: an initial venture into entrepreneurship, and then eventually into business school and into a business career. But I found philosophy to be an excellent background for any kind of work, and something, as you mentioned, that I have quite a lot of interest in and still read and write about today.
SPEAKER_01:Excellent. I'm sure we'll dive into that in a bit more detail as we go through our conversation as well. But maybe give us a little bit of an insight to start with. What was the school-age Chris Knerr like? Was he a leader from the very beginning, or was it more about living life in the moment?
SPEAKER_02:Well, maybe both to a certain extent. I had an interesting and kind of complicated childhood. I'd like to say, or I guess my observation is, that I've always had an effect on people, whether to the good or to the bad. As a kid, I was kind of an outsider to a certain extent. My parents split up when I was very young, and at the time that wasn't terribly common. So, among other things, I was the kid who liked to read a lot, the weird kid who had divorced parents. There was, I think, an interesting texture to my childhood in that way, in that some of these traits persisted into adulthood, and to some extent into my business career and my style as a leader, which we can talk about.
SPEAKER_01:Well, I'd love to dive into that, actually. Back in 2001, you started your career as an intern at Johnson & Johnson. Fourteen years later, you left the company as a vice president. Generally speaking, what job or role taught you most about people before you ever even had to lead them?
SPEAKER_02:So, going back to the beginning of my story at J&J: in a way, I think like a lot of successful people, I had quite good luck falling into a kind of work that I turned out to be good at and that there was a burning need for, which was project management and eventually program management. I think from a transitional standpoint, everyone who graduates into leadership eventually has to make a key transition: from doing work where you know everything about it, know all the details, and are completely conversant, to doing work where you're managing other people and the scope of the work is beyond your capacity to understand all the details of what you're leading. I had a progressive series of roles doing process re-engineering and testing management for systems implementation before finally graduating into running bigger projects and programs myself. And it was those transitional jobs where I had to personally make the shift from being very detail-oriented, very analytical, knowing everything about the work I was doing, to leading and managing other people. At the time I was the young guy too. I was leading people who were in some cases 10 or 20 years older than I was, who had much more experience. So there's both a level-of-abstraction transition and a confidence transition, in terms of one's own perceived credibility to manage people who know a lot more than you do, and becoming comfortable with that. This is something I often talk to emerging leaders about these days. And of course, I suppose now I come across as being very confident. But I'm often reminding myself, and in coaching people, reminding them: when this happened to me, I was terrified. I was very uncomfortable. I had no idea what to do. And it's interesting, you can see there are people who can't make that transition. Those people will tend to stabilize at some level, which isn't necessarily a bad thing. But those early roles, where I was transitioning to taking on more scope and then managing people who were older and had much more experience than I did, were very foundational to, I think, the first stage of my development as a leader, as a business leader in particular.
SPEAKER_01:Yeah, I'm intrigued. Was there a moment in your professional career or in your personal life where you recall having a conversation that changed the trajectory of your career? And if there was, what do you recall of such a conversation?
SPEAKER_02:Certainly. There are a number of key conversations, and I could give you a couple of examples. I always find this interesting to think about in retrospect: what were the key moments? And these are examples of good leadership that you can emulate as well. So, one: I was very fortunate at Johnson & Johnson to have a series of amazing managers. J&J, I'd say overall, has a very, very strong general management culture and genuinely values people, leadership, and people development. Interestingly, on a side note, in the industry I work in now, there are a tremendous number of J&J alumni who are CEOs of medtech companies. And you sort of wonder why that is. Well, they got that training. So the first kind of marquee program I was leading at J&J involved a tremendous amount of change management, difficult change management that was good for the organization but that no one really wanted to do in reality. That's, of course, the nature of change management. And I got so frustrated over a period of months, probably years. And I had a conversation with one of my most wonderful managers and leaders, named Mary Churso. I was kind of on the fence; I was almost going to give up. I was sitting in her office and I said, "Mary, I don't think I can do this anymore. This is so frustrating. I just don't fit into this organization. I don't have the same style." And she just paused, and she looked at me, and she said, "Chris, that's why I hired you." So this goes back to where we started the conversation: a bit of an outsider, disruptor personality.

It was just one little sentence she said. And I knew her very well, of course, by this point, and I really respected her, and I thought, okay, I get it. And I probably really wasn't going to give up. But, again, if you think about leadership characteristics to emulate, it was nice that I knew her well enough to be open and just say, "I don't know if I can take this anymore. It's awful." And her response was so thoughtful, and it gave me confidence and support to continue. Another one, and I won't say who this was, because I found him to be a rather dislikable person, though a good executive leader: he gave me maybe one of the top few pieces of career advice I ever had, which was, "You're too serious. Lighten up." This was in an even bigger program later on. The feedback was: you come in and get right into what's the agenda, what are the objectives, and you're making everybody anxious. At this point I was dealing with very senior people. And he said, just be a little bit less serious. It's fine; talk about football, talk about the weather. Break the ice in these conversations. And it was a good insight, because I was so outcome-driven, especially in those mega-programs I was doing later that were very, very complicated and very high-risk, that I had perhaps lost sight of how I was showing up. So the "lighten up" advice was another very good piece. And interestingly, it came from someone who, as I said, I never perceived to be a particularly good leader.

He was, I think, a rather abrasive character overall. But you want to take this advice as it comes to you and take it seriously. Of course, he was someone who'd been very successful too. So my personal point of view about him was, in a way, neither here nor there.
SPEAKER_01:Yeah. I suppose both of those came from different angles, or perhaps different motivations, I'm not sure. But it's certainly interesting. I've had several conversations with different leaders for this series, and one thing you mentioned, which seems to come up time and time again, is the importance of the leaders you reported to or worked with earlier in your career, and the effect that had on your own career progression. And that seems to be the case here as well. If I understand the stories correctly, it helped you to, I guess, one, be validated, but also to continue on that path and accelerate along it.
SPEAKER_02:Yeah, certainly. And this is another thing where I think there's a huge amount of luck involved. Some organizations do this very well. At J&J, there are a lot of people who are terrific leaders. But whether you can find the right mentors at a particular moment in your career is partly out of your hands. To some extent you can seek those out, and I highly recommend that everybody do that. I guess my personal experience, and I don't want to over-index on this or claim it's true universally, is that I've found it more difficult as I've grown in my career to find those mentors at a senior level. I still have other leaders, other executives who I worked for at J&J who are still mentors, and they might be some of the best mentors I have even now. So the other thing is, these are long-term relationships that you can form. And there are actually quite a number of people who used to work for me at J&J who I still help and serve as a mentor to. But I definitely think that's a good observation: who you end up getting earlier in your career matters. There are these transition moments, as I mentioned, as you get more scope in terms of work and more scope in terms of people management. These aren't things that, in my view, necessarily come naturally. One may have characteristics that lend themselves to being good at it or not. But there's a huge situational aspect as well. So I definitely agree with that observation.
SPEAKER_01:Yeah. You may have alluded to it already, but who is, or was, a role model of yours? You talked about emulating characteristics. Do you feel you've emulated their approach to leadership, or did you develop something else, your own style, based on that role model? Can you think of anybody that comes to mind there?
SPEAKER_02:Yeah, well, Mary, who I mentioned before, is a very good example. And I think temperamentally, one of the things I learned from her was empathy as a leader. Again, as I mentioned, I've always been ambitious; I'm very outcome-focused. And I think by personality, earlier on I tended more the other way. There's a thing in these self-evaluations you can do, and a typical question is: do you see people, or do you see problems? Earlier on, I was definitely the guy who saw problems, and people were sort of incidental to it. Mary was so good from an empathy and people-leadership standpoint that that was something I really picked up on when we worked together and when I worked for her. So I think if one is fortunate enough to have a number of different role models over the course of a career, you can pick up different things from them. And to a certain extent, whatever leadership is, it's not something that I view as being generic at all. It's very much about individuals, organizations, individual leaders, and their characteristics. I had other role models earlier on who, I think, encouraged that outcome-focused and accountability behavior. Maybe that was something that came a little more naturally to me. I do view it as a hugely important characteristic, especially if you're working in companies, and especially in large companies, where accountability tends to be diffused. Part of how you can get things done is by doing an unpopular thing, which is actually holding people accountable to what they've notionally agreed to do.

I think there were other senior executives I worked with at J&J who, as I got more senior, taught me a lot about the part of this that I consider more difficult: the political part. So far, mainly what we've talked about you might call managing and leading down. Leading up, influencing laterally, influencing senior management that often has a lot going on and possibly competing priorities: that is a very difficult thing as well. I had other senior executives I worked for at J&J, and afterwards as well, from whom I learned a lot about how, when you don't have any authority, you create influence in order to get the outcomes you're striving for. In my view, and in my experience, that part is a lot more difficult. And there's a lot more in it that you can't really control.
SPEAKER_01:Do you think it's more difficult based on how you are as a person and what you described earlier? Or do you think it's generally just harder because there are more unknowns and more moving parts to it?
SPEAKER_02:I think it's inherently more difficult. And, if I may, this is sort of philosophical, I'll give you a kind of pessimistic observation, which is that senior leadership, and perhaps let's talk about politics more than business, to avoid casting any aspersions... Political leadership clearly has what economists call an adverse selection problem. Adverse selection is when you have a pool where negative traits are encouraged. What you find in politics, in my view, and to some extent what you find in large organizations, is that you end up with a lot of people in senior leadership positions who are very political. That's an example of the adverse selection. So what does that mean in practical terms? It means they're more interested in themselves than in the outcomes they're trying to drive. Earlier on, I think I had this very naive view of the world as being quite meritocratic. And my observation, sort of empirically over the course of my career, is that the more you get promoted, the less meritocratic it is on balance. Of course, these are big generalizations, but I think that's a worthwhile thing to think about. And I don't think I got, until much later in life, what people really mean when they talk about work or business being political. It literally means the same thing as when you look at any government or political party system and talk about it being political. It means, quantitatively, that a lot of their actions are driven by self-interest.

And so I think that as you rise in many organizations, what you find is that the mix of, let's say, visionaries to survivors becomes less favorable. So in my view this is inherently a more difficult matrix. The more you are dealing with very senior people, the more you're dealing with their competing incentive structures. And by the way, this is an empirical observation to a certain extent, not a value judgment. Everyone does what they need to do in order to survive in their career. I think about the really good executive leaders I was fortunate to work for. I also worked for some who were not great, and we don't necessarily need to get into that in any specifics. The characteristic of senior leaders that's most important to me is the willingness to make decisions that may not be good for them and may not turn out well. It's an interesting mental model of the world: if you work with senior people, you can look at them and, of course it's more complicated than this, but you can ask, is this person a visionary or is this person a survivor? If you think of those as archetypes, and of course real life is more complicated, those archetypes act in quite different ways. And to me, what I would aspire to be is the archetype of the visionary, who's willing to do things that are unpopular and willing to do things at personal risk that may not turn out well.
SPEAKER_01:Would you say the visionary is sort of outwardly focused in terms of decision-making, and the survivor is more internally focused, for themselves, in terms of decision-making?
unknown:Yeah.
SPEAKER_02:Yeah, yeah. So this gets to: why are you taking the actions that you're taking? Is it for good reasons or not? And again, I want to be careful. I'm not suggesting that self-interest is inherently negative. It's not. There's a lot of the world, and a lot of companies and competition, that functions by self-interest. But if you over-index on self-interest, in my view, that's unhealthy, and what you end up with is things that are highly politicized. And again, without getting into a whole political conversation, I think anyone with any point of view can look at the state of many countries and see this exemplified very clearly. The insight I had after a long time, and it's one of those things that, at least once I thought of it, seemed quite obvious to me, is that I never really internalized what it meant for things to be, quote, political in business. And it's the same as it is in politics.
SPEAKER_01:Yeah, and that's interesting. You touched on the fact that leadership isn't always easy. I'm wondering, is there an example of a decision you've had to make that wasn't easy? One perhaps that kept you up at night? And I suppose I'm interested to hear whether you would make that same decision again today, with where you are now.
SPEAKER_02:Yeah, I'll give you an example, and then I'll want to come back and say something else about this as well. So this goes back to my first major program leadership role at J&J, a large transformation effort. For a variety of reasons, we really fell behind. We fell behind schedule, and there wasn't going to be any more money. So I asked everyone to stay after school, basically. I asked everybody to stay late on Tuesday night and Thursday night until nine o'clock, for two years. I didn't know that at the time; I wasn't saying, "Hey, we're going to do this for two years." At the time I was just saying, we need to do this. And we brought in dinner. It was a complicated decision because, of course, everyone's got a life, everybody's got a family. On the other hand, looking at the bigger picture, if we failed in what we were trying to do, it would have been bad for everyone. The other part of this is that I stayed all those nights myself, unless I was traveling. As the leader of this and the person who had requested it, I didn't want to ask the team to do anything I wasn't willing to sign up for myself. And it actually turned out to be a really cool thing in terms of the team building and the team dynamic. You could also, interestingly, get a sense of who's going to stay for Mexican takeout and flee right afterwards at like 7:15, and who's going to actually stick around until nine as requested and do their work. But I was asking a lot of people too. That's a lot to ask of people who were already working hard. And when I made the request, it was open-ended. It did turn out to be two years that we did that.

And we did catch up, we didn't have to ask for any more money, and we delivered a good result. I think everyone was very proud of what we'd done at the end. The thing I was going to say about this is that, talking to leaders and talking to entrepreneurs, I think generally everyone who's successful, in my view, tends to understate the role of luck, of things turning out in their favor that they can't really control. So I almost think you could draw a two-by-two consulting matrix: was the decision inherently a good decision or a bad decision, and did it turn out well or not? And going back to what we just talked about, being willing to take on some risk: part of the willingness to take on risk is taking actions that may not turn out well, that have a lot of things in them you can't control. I'm sure anyone you interview is going to have a good answer to the question: did you do something smart and it worked out well? In a way, a more interesting question is: did you do something smart and it backfired? And would you do the same thing again? If you're really living into that accountability as a leader, doing things for the right reason, there should be things you've done that backfired. Things that maybe you wish you could rewrite, but of course you can't. So these things tie together, if that makes sense. For anyone who's interested, and this is a bit of a deep cut, there are two essays often called the vocation lectures, by Max Weber, one of the founders of modern sociology.

One is called "Politics as a Vocation" and the other "Science as a Vocation." I happened to reread both of these after quite a few years for my work at UPenn. One of the things Weber really emphasizes about leadership is how difficult the decisions can be, and that good leaders are willing to accept unintended consequences.
SPEAKER_01:Yeah, that's interesting. That topic in itself, and really everything we spoke about in this last section, is something we could probably unpack for much longer.
SPEAKER_02:But maybe there's a key insight in this for anyone who's talking to people about leadership or success. I find it to be particularly true of entrepreneurs, by the way, because I was an entrepreneur, as you mentioned, actually multiple times. Most other entrepreneurs will give you their story as if they did everything right, and tell you why you should do what they did, because they did everything right. In fact, there's some very interesting empirical work that's been done in Silicon Valley on startups and key success factors. What the empirical studies, run longitudinally across a big range of startups, have essentially found is that the key factor is timing. And timing, to me, is much like luck: did the things you can't control go in a favorable way? There are also very interesting examples of technologies that were great, but where the market wasn't quite ready; I wrote about this recently in one of my classes at UPenn. A very famous example is the Palm Pilot, the original "PDA," or personal digital assistant, of the late '90s and early 2000s. It was a good technology product, but it wasn't quite the smartphone. And I wonder: would there really have been smartphones, one, designed and brought to market the way they ended up being, and two, would they have been so successful, if the groundwork hadn't been laid by the Palm Pilot and the BlackBerry, which was an early version of the same idea? You can see these cycles.
So the Palm Pilot was an excellent product, but it wasn't a blockbuster, not the way the iPhone was. There are a number of factors behind that, but if you look at the history of technology and then at what people tell you about their own endeavors, I think thoughtful people will try to articulate why they had good luck as well as good execution.
SPEAKER_01:Yeah, I can speak from our own experience. I started Made To See, back then it was Webinar Experts, in 2016. For several years it went very well; we grew and things were going well, and then COVID hit. For a company that helped other companies run webinars, the timing couldn't have been better. Now, COVID was horrendous, obviously, and many, many people were affected. But from a business perspective it helped us grow even further and support our customers when they needed it, and we were well placed. I certainly didn't plan COVID, and I didn't plan for anything like that to happen, but the timing was fortunate from a growth perspective for us, so I completely relate to what you're saying.
SPEAKER_02:Yeah, that's very interesting. In enterprise software there was this big dip and then this big peak, almost like the supply chain disruption: with so much macroeconomic uncertainty, everything went on hold, and then everything came roaring back at the end of 2020 and into 2021. We certainly saw that at Syniti, a data services and software company primarily in the ERP market: a lot of headwinds, of course, when the pandemic happened, and then a huge bounce back from that. Interestingly, and this is a major footnote for anyone who's interested, if you look back at the history of major companies, you'll find a correlation between certain kinds of innovation and macroeconomic crises. The reason, of course, is that a crisis forces you to innovate. The example you just gave is a very good one. It was often said in 2020 and 2021 that we achieved five or ten years of digitization in two years, and that was forced by the circumstances: everybody had to figure out how to work remotely. So what happened? Your business grew, and Zoom zoomed. The pandemic was rocket fuel for Zoom and for a number of associated businesses. So good entrepreneurs, and this is the converse of the point about luck, will also see a crisis as an opportunity: what no longer works, and what's the new innovation or new product I can bring to a changed market?
I think this is all quite interesting.
SPEAKER_01:Yeah, it's fascinating to think and talk about, and there will be many, many examples where that plays out. I'm sure there are books written about it already, and perhaps some of those are part of your 10,000-book collection as well. I'm sure there will be plenty more written as we go forward, especially about this period, and about the innovation happening over the next five to ten years, and perhaps some crises that come from it as well. But we've touched on several stages of your journey: we spoke about COVID, and you mentioned Syniti there. I'm wondering, do you think your leadership style has changed over time? Did you start out a certain way when you were first in leadership, and is it different now? To what extent has it changed, if it did?
SPEAKER_02:It's definitely changed. Well, one thing is that I spent a long time leading very large organizations. My extended team when I left J&J, including all the matrix teams and all the contractors, was about 1,200 people, which is to say larger than most American companies. So quite large. Then I had my startup, with co-founders of course, and at its maximum that was close to 100 people, so much smaller. Most of the roles I've had since then have been senior leadership, but individual contributor. At Syniti I had no organization. At Veeva I have no organization; I work for the president of Veeva MedTech and I work with a lot of our executives. So I do leadership, but it's more diffuse than when you're directly leading a large organization. In terms of my evolution as a leader, I've already touched on two components that I think are interesting and important. One is being more empathetic to humans, in balance with making sure you drive accountability and achieve the outcomes you're trying to achieve, but bringing people along with you. The other is the political part of it. For the past few years I've also taught, guest lectures at the graduate school at Cornell, and I got to do one on AI last year at UPenn as well, so I really like working with MBA and graduate students. And there's this thing where all the old people go back to the business schools and say: the most important class you're going to take is organizational behavior.
Because that's all about how to do this. But the thing I've realized is that I'm not sure a lot of it can really be taught. You can read the stories of senior people, how they succeed or fail, and all their trials and tribulations, but the political side, the alignment, how to influence people: to some extent I think you have to learn that by experience, both positive and negative, to really understand how to navigate it. The other thing, and maybe this is particularly evident in my current role, is that I'm always thinking, when I work with CIOs and other senior leaders in large medtech organizations, about what they really care about, because what people say they care about isn't always what they actually care about. And maybe a final observation on this, a very business-specific one: I think maintaining a really keen financial focus is critical. When I did my MBA I did a lot of finance coursework; my background in a way is pretty similar to what it would be if I'd gone into investment banking, because I felt that was very important to becoming a senior leader in business, which was always my aspiration. I think that was a really good decision. Interestingly, I think this financial focus often gets diluted in people's minds. This partly came out of J&J, but also subsequent experience: what do companies really care about? Growth, having an efficient cost structure, and what I generally call quality, which can also mean differentiation in the market.
The more I do this and the more I think about it: people say a lot of words and paragraphs about what they want to do, but if you decode them in a business context, generally you find they're always talking about one of those three things. They want more growth, they want a more efficient cost structure, or they want quality; in life sciences, quality and compliance are obviously particularly important, but generally you can think about differentiation, how you compete. It all adds up to how you compete and win in the marketplace. Of course there are other aspects too: we would like people to have a good working experience, so there's the team dimension. But I'm often coaching people that if you want investment in a certain area, and this is perhaps another pessimistic observation, but I think a true one, you're not going to get it by telling the people making the investment that the team will be happy with it. They want the team to be happy, but really what they want is to make some money. Perhaps this seems childishly simple, but I've been working in this area of transformation, digital transformation, in particular, and it's expensive; it requires lots of investment dollars and lots of change management. So how do you motivate people to do it? In this later phase of my career, where I don't have an organization right now, though maybe I will again in the future, to be determined, I'm really focusing on incentives and why. Again, this is evolutionary.
Of course, I knew this all along in a way, but it's maybe the third major component: especially when you're leading up and leading laterally, understand people's motivation. What do they want? That can be at the organizational level or at the individual level.
SPEAKER_01:Yeah, that reminds me of something. I think what you're alluding to there as well is that it's not necessarily always financial incentive; people are motivated by different things, though of course we all need to pay the mortgage. I saw an advert somebody shared with me the other day. I don't remember the company's name, but it was a marine engineering company, I think, and it was a talent and recruitment advert. It showed various situations: if you want to sit in the middle of the North Atlantic, lashing with rain, freezing cold, join us. If you want to dive 100 meters down, stay there for three hours, and weld something, join us. For 99% of people, probably more than that, the scenarios laid out in this advert were entirely unappealing. But it appealed to exactly the people this company wanted to recruit, people who thrived on adversity and wanted to find solutions to difficult problems.
SPEAKER_02:That's it. You'll have to send me that, in case the Veeva thing eventually doesn't work out. I remember listening to this thinking: that sounds cool, sign me up. Years ago I used to be really into metalworking and welding, but I never did underwater welding.
SPEAKER_01:Right.
SPEAKER_02:It sounds compelling. But just to pick up on your point: growth, cost efficiency, quality, differentiation, these are things businesses care about, and they're helpful in making a case for transformation and for change management. And when I say helpful, maybe essential is a better word. I don't think you're really going to get investment to do things unless you can explain, and I know this is a very American-sounding thing, and I've been saying it more and more: show me the money. You go to senior people and they're very polished, but you do want to show them the money. The political thing about individuals, to your point, is "can I pay the mortgage?" And you think about how people respond in terms of, and this is again a useful mental model: who is this person? Are they a survivor? Are they a visionary? I'm not really placing a value judgment on that; it's just a useful lens for decoding people's behavior.
SPEAKER_01:Yeah, that's interesting. I hadn't thought about it like that. Throughout this conversation we've touched on humans as part of this overall equation, as part of leadership, and I suppose leadership itself is inherently human. I remember reading an article you wrote for Forbes, and I have to refer to my notes here. You described the effect algorithms have on human choice and preference, and how these are truncated by algorithms that users can't control. I'll say that again, because I think it's important to take in: the effect algorithms have on human choice and preference, and how these are truncated by algorithms that users can't control. You had an example about entering a lift in a hotel, and you wrote this article in 2021. Since then we've seen a steep increase in AI, and you mentioned the work you're doing at UPenn in machine learning as well. Do you have the same criticism of AI systems, that human decision-making, or choice, is reduced by implementing them?
SPEAKER_02:Absolutely. I think this has been there for a while to see. The specific story that motivated the article is that I was in a hotel whose elevators had an optimization algorithm to route them most efficiently. I was chatting with someone while waiting for the elevator, and it put us on two separate elevators to optimize an outcome, namely getting each person to their floor most quickly. So then the question, in machine-learning speak, is: what's the objective function? What is it that you're trying to optimize? I would rather have kept talking to the guy. It seemed like a powerful metaphor, because one is always talking about the elevator conversation, the elevator pitch: what do you say to somebody in 60 seconds in the elevator? And I was already chatting with him. So I thought it was an interesting metaphor. It's not wrong to optimize travel time in elevators, but are you necessarily optimizing the right thing? This goes back to the behavioral-economics idea of nudges, opt-in versus opt-out. So yes, I still very much have this same concern. If you know a bit about the way machine learning and algorithms work, and by the way, the same thing applies to large language models in a slightly more complicated way, they're optimizing for outcomes that are statistically typical. And famously, as you and most of the audience are probably aware, AI is not very good at what's called the long tail; it's not very good at corner cases.
Humans can be pretty good at corner cases. So the concern, put broadly, is that if you have algorithms enforcing statistically common outcomes, and you insert those into a choice architecture that relates to humans, you're actually abrogating human choice and human autonomy. I was obviously concerned about it in 2020 and 2021; the software company I had did machine learning and early AI, so I know a fair amount about how these systems and these algorithms work under the hood. I think we should all be hugely concerned about this, and in fact I'm more concerned about it than about the Skynet rogue-AI scenario. Because what do these algorithms do? In effect, they're codified bureaucratic rules. You can see the effect of these nudges and how they've been institutionalized. It's like every website: they're always opting you into something and you have to say no to it. One, it's a waste of time, and two, it's this constant land grab for your attention, to get you to behave in statistically default ways, which to me is not the good part of being human. That's not the part I like. I like the part where you get to choose things, think about things, and be thoughtful. I hadn't thought about that article for a while. I think it was ironically titled "I Was Optimized," and the point is that it was actually a stupid thing that happened. The real irony is that the hotel had about six floors, so who cares?
I get it; I've stayed in the Times Square Marriott, which is something like 60 stories with 23 elevators or so, and there you have a genuinely complicated traffic-routing problem. In this situation, why bother?
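The objective-function point can be made concrete with a toy sketch. Nothing here reflects the hotel's actual system; the passengers, cars, and weights are invented purely to show how adding a human-preference term to the objective changes which assignment is "optimal":

```python
# Toy elevator assignment: two passengers mid-conversation, two cars.
# All numbers are made up for illustration.
from itertools import product

passengers = ["A", "B"]   # A and B are talking to each other
cars = ["car1", "car2"]

def travel_time(assignment):
    # Pretend splitting the passengers is slightly faster overall.
    return 10 if len(set(assignment.values())) == 2 else 12

def conversation_penalty(assignment):
    # Penalize separating people who are mid-conversation.
    return 5 if assignment["A"] != assignment["B"] else 0

def best(objective):
    options = [dict(zip(passengers, combo)) for combo in product(cars, repeat=2)]
    return min(options, key=objective)

# Objective 1: pure speed -> the algorithm splits the pair.
split = best(travel_time)
# Objective 2: speed plus a human-preference term -> keeps them together.
together = best(lambda a: travel_time(a) + conversation_penalty(a))

print(split["A"] == split["B"])        # False
print(together["A"] == together["B"])  # True
```

The optimizer is working correctly in both cases; what differs is only what it was told to optimize, which is exactly the "are you optimizing the right thing?" question raised above.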
SPEAKER_01:Yeah, with six floors it's probably not worth it.
SPEAKER_02:Maybe that guy would have invested in my company. I'll never know, because I was optimized out of the possibility of having that conversation.
SPEAKER_01:It is interesting to think about that and the impact it has, especially because a lot of it happens without us even being aware. You clearly did know how the elevators worked, but if I'd walked in there I probably wouldn't have been any the wiser, and I think it would be the same for many other people.
SPEAKER_02:Yeah. The elevator maybe doesn't seem particularly important, but think about where these algorithms are going, and we can talk a little about my industry. Look at the evolution of healthcare: these algorithms, to some extent already, and more so in the future, will be used for care decisions. It won't be very surprising if, in some markets, there's care rationing driven by algorithms. That seems pretty bad to me. If we're providing decision support and it's useful, great, but let's not let it become invisible. We don't want decision support to be invisible the way electricity is invisible. Compute is now invisible; it's a commodity: you go to AWS, turn some things on, pay the money, and it's a utility. I think it's honestly a very concerning aspect of our human future if that kind of decision support and optimization becomes invisible in the sense of a utility. There are lots of areas where this matters, and I picked care apportionment, or care adjudication, as a particularly consequential one. We should be concerned about it and make sure we understand it; this is the "human in the loop" idea, and I don't really love the name, but that's essentially it. I'm more concerned about creeping bureaucratization than about Skynet rogue AI, personally. And of course it travels along the lines of optimization, which sounds like a good thing; I like to optimize things too.
But I like to optimize in a way where we retain control, and this is very important. Anyone can read about what this means: when you program algorithms or set up neural nets, you're defining what's called an objective function, the thing you're trying to get to. So maintain control of what the correct objective functions are. Are we creating a monster of bureaucratized, codified decision support that's invisible and that will govern healthcare decisions, insurance reimbursement decisions, employment decisions? HR is actually a very interesting example. What advice do I give people on their resumes these days? Keyword-optimize, because the first round of screening is done by algorithms. Are these algorithms really that good? And remember, we're talking about humans here, not hex nuts you could machine-scan and sort into good and bad. In the HR world, in the talent world, we've basically already created that bureaucratized nightmare. Talk to people, college students, graduate students, about their experience in the job market: it's horrible. Why is it horrible? For exactly the reason I was writing about in 2021: the first few tiers of the process have been fully outsourced to algorithmic decision support. Personally, I think that's terrible.
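The first-tier screening he describes can be caricatured in a few lines. This is a deliberately naive sketch, not any real vendor's algorithm; the keywords and resume snippets are invented. It shows why "keyword-optimize your resume" is the rational advice: a candidate who describes the same work in different words scores lower:

```python
# Toy resume screener: count exact keyword hits, rank candidates.
# Everything here is invented for illustration.

REQUIRED = {"python", "machine learning", "etl"}

def keyword_score(resume_text: str) -> int:
    """Crude first-tier filter: one point per required phrase found verbatim."""
    text = resume_text.lower()
    return sum(1 for kw in REQUIRED if kw in text)

resumes = {
    "candidate_a": "Built ETL pipelines in Python; applied machine learning.",
    "candidate_b": "Designed data ingestion in Python and trained predictive models.",
}

ranked = sorted(resumes, key=lambda name: keyword_score(resumes[name]), reverse=True)
print(ranked)  # candidate_b did similar work but misses the exact phrases
```

A human reader would see the two candidates as comparable; the exact-match filter does not, which is the "are these algorithms really that good?" objection in miniature.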
SPEAKER_01:Yeah.
SPEAKER_02:So yes, a somewhat long-winded answer, and obviously I have some energy on this topic, but it was a concern five years ago and I think it's an even bigger concern now. If there's a key insight I could distill, I think you phrased it very well: the concern is about human choice and human autonomy being abrogated by algorithms. That's much more the case now than it was five years ago, and the trend is that it will continue going forward. That's hugely something for us to pay attention to, as citizens, as leaders, as business people, and especially as people working in these areas.
SPEAKER_01:Yeah. ChatGPT was obviously one of the starting points for the masses to understand and interact with AI, but AI has been under the hood in many more places, and for much longer. Let me bring this back to AI and leadership together, because we've spoken about both. If we look at the impact of AI and how individuals use it within a business, whether the organization knows it or not, how teams use it, how entire organizations use it, and I suppose we could extend it to entire markets: will AI, do you think, make leadership, or the role of leadership, easier, or will it make it harder? We spoke about interpersonal interaction and your mentors and the leaders you worked with. Is AI going to ruin that, or is leadership entirely independent of AI?
SPEAKER_02:It's an interesting question. Maybe I'll try to answer it in a couple of different components. One is to look at it through the lens of history. A colleague who spoke at one of our Navar conferences had a great phrase: AI is not a fad. It's a real technological transformation of world-historical consequence, and anything that big poses inherent leadership challenges and inherent leadership opportunities. If you go back and think about things that rhyme: right before I went to business school, the internet was a thing, but it wasn't clear what it was going to be. I graduated from Cornell in 2002, right after the dot-com crash. The feeling I have about AI is like the feeling I had in the late '90s about the internet: it was clearly big, it was clearly not going away, and it wasn't really clear to anyone what it was yet, other than that it was going to be hugely consequential. You can decompose that: I think it's highly likely we're going to see changes in the labor market, certainly in developed economies, and those macroeconomic changes structurally, inherently pose both leadership challenges and leadership opportunities. So in terms of what business leaders are dealing with, yes, I think it creates new challenges and new opportunities that are structurally inherent in any technological revolution, and large language models, and what's happened with AI generally, certainly count as that in my view. So that's one dimension.
There's another dimension, maybe a little more subtle. If we end up in a world where algorithms and AI largely mediate performance reviews of humans, that seems bad to me, very much along the lines we just discussed; it's another example where I think the process would be dumbed down, and I'm saying that provocatively on purpose: the process will be worse. It will tend to diminish the positive effects of the human aspect, which is a lot of what we've emphasized, a lot of what's been important to me personally, and hopefully what the emerging leaders I work with value about me as a mentor somewhat later in my career. So I do think that's a concern. Maybe the third thing that poses a challenge, and that I worry about, is a bit more specific to large language models in particular. I think there's a valid concern that over time they're going to dilute aggregate critical thinking skills. People often talk about this with the arithmetic example: we still teach people arithmetic, so you know how to do it and how it works, but when I go to multiply most two-digit numbers I use the calculator on my phone, and that's not particularly significant. It becomes significant when you get into things higher up the stack, if you will, in terms of importance and the kind of work humans have traditionally done.
I'm already seeing examples where people send me things that were obviously written by ChatGPT or one of the other engines, and they're not very good. So I do worry that laziness and habituation will creep into critical thinking skills. And in consequential business roles, business is very complex; there are a lot of moving pieces. Going back to what I like about philosophy and how I've applied it throughout my career: critical thinking is a huge part of it. Understanding a lot of the things we've talked about: why are people doing what they're doing? What are the assumptions? How can you optimize things? What's the analysis? If we over-index on outsourcing too much of that to the next generation of widgets, which are large language models, that strikes me as likely detrimental to critical thinking skills going forward, and that's going to pose a huge challenge. And maybe I'll take a philosophical detour for a moment. One of the papers I wrote for UPenn explored deepfakes, images in particular. You care if an image is fake; the thing about a fake image is that it's fake, it's not real. So there's both what philosophers would call an epistemological impact, which concerns how we acquire knowledge, and an ontological aspect, which concerns what kinds of things exist.
Now, I'm going to come back to the critical thinking point in a moment, but consider the world we may live in in five years: our thought process around every image we see in the media is going to be, is it real or is it fake? That's a huge change, in philosophical terms: epistemologically, in other words, how we know what we know, and ontologically, what kinds of things exist. That's huge, right? That's not how I grew up. You knew there was trick photography, but basically, if you saw images, the images were in effect real. The same thing develops for all kinds of intellectual products that were typically produced by humans. Like I was saying, someone sent me something and I'm basically like, this is poor; it was obviously written by a chatbot. And then you have to send it back, and there's this whole calculation: am I going to say, dude, don't send me chatbot crap? I didn't ask if I was allowed to swear; I've been very well behaved so far. Don't send me chatbot stuff, right? Or do you just coach, and say, hey, these are the things I think you should think about when you're trying to work on this problem. So the analogy holds: in five years, we're probably going to change our relationship to all images and wonder if all of them are fake, and in five years, we're probably going to change our relationship to all work products and wonder how much human is in them. And of course, I use the chatbots myself in certain targeted ways, but I try to use them in a way that's very focused on things I already know about, where I can sanity check and I'm going to be able to detect if errors are creeping in.
Maybe they're going to get better, but just in the way that they work, they are still just statistical, in my view. So that remains to be seen. So yeah, I gave you three different answers in three different dimensions to this question about whether AI makes leadership more complex. Absolutely, I think it does. But, again, I try to avoid binary answers to things. So it makes leadership more complex, but in a sense it also makes it more important, and in particular the human aspects of it that we've been talking about today. So as leaders, let's not lose that part of it, because that part is very important.
SPEAKER_01:I think that's, well, we're not finished yet, but I think that's a great point to end on. I have so many more questions, but I think we are starting to run out of time, because we do also still have the virtually anything goes question. Before we get to that: obviously, Chris, some really interesting conversations here, and really interesting answers to questions I thought might be straightforward. Actually, I think you've nicely shown that there are nuances to them, and it's important to consider those and keep those in mind as we go forward, because it's not necessarily binary, as you said.
SPEAKER_02:That's the problem with talking to philosophers, right? You can get complicated answers to things, because of course part of it is always, well, what's behind the question? But I find all this very, very interesting.
SPEAKER_01:I absolutely, totally agree. And I appreciate, and think it's great, that you answered in the way you did, because it makes everything more considered and interesting. But we now get to the virtually anything goes question. If it's your first time listening to or watching this podcast, this is where I turn control over to Chris. He can ask me any question he wants. I've been badgering him with questions for the last hour or so, but now you can ask me any question you want. I don't know what the question is, and I do have to answer it no matter what it is. But my safety net is that once I've given my answer, then Chris, you need to answer the same question as well. So I'll hand the reins over to you. Please take control.
SPEAKER_02:Okay, so I love this, by the way. This is a great premise for a podcast, too. So we've been talking about artificial intelligence, and we talked a little bit about so-called artificial general intelligence. Can machines think the way that you and I think, in the sense of being self-conscious agents?
SPEAKER_01:There's a lot that's talked about in terms of being sentient and whether AI is aware of itself. And I think, to a degree... I mean, I'm no AI expert, but from what I know and understand and think, there is an element to that. But humans are inherently flawed, aren't we? We are not perfect, we'll never be perfect, and to a degree we don't want to be perfect. I suppose it depends on who you are, but I know that being perfect probably wouldn't be what I would aspire to. I want to be human, and to be human, I'm becoming a philosopher now myself, I guess, but to be human is to be flawed and to be quirky and to have different areas of interest. And unless there's some kind of chaos theory that can be programmed into the AI models, or can be learned by them, although there's no rhyme to that chaos, then I don't think they can be like humans, or be as self-aware. And back to your point, what AI currently answers or provides is the statistically most expected outcome, and that may well get better, but I just don't think an AI would be able to emulate how flawed we are as humans. That's my opinion, anyway.
SPEAKER_02:I think that's an excellent answer, and I very much agree with it. And I think you hit on... so, shall I answer the question now?
SPEAKER_01:Sure, yeah, go ahead.
SPEAKER_02:So I don't think that we have any algorithms or machines that are self-conscious and can think in the same way that humans do. And I think you hit on a key aspect of it. Let's take a brief detour into the famous Turing test. What's interesting about the Turing test is that it's about fooling you. This is my translation of it, and it will probably cause a lot of arguments, and we can get into the arguments, but it's really about: can you create a convincing simulation? In my view, though, a convincing simulation doesn't mean that the underlying thing is the same. Also, I think you hit on a very key point, which is that the AI systems we have now, for one, are just statistical engines. They're basically very big, very complicated nonlinear distance functions with different kinds of architectural layers in them, which create convincing dialogue outputs in particular. So certainly they passed the original Turing test, but I don't see any evidence that they have feelings. And of course, this goes back to a philosophical point: there's a huge tradition in the West of what I would call rationality bias, going back to the ancient Greeks. We like to talk about what's special about humans: humans are rational. I don't really agree with that as stated. I've met humans, as I like to say, and they're not rational. They're capable of rationality, but they're not rational.
So what exists in a human, and I think this is interesting for the whole leadership conversation, is a mixture of humanness, or what you were calling our flawed nature, our emotions, and the ability to be rational, which is different from, quote, being rational. And I don't see any evidence so far that any system is exhibiting those traits. And if you ask the question in a different way: when you talk to Claude or to ChatGPT, does it care about the conversation? If you phrase the question that way, I think most people believe, no, it doesn't care. But if you ask whether it's intelligent, generally people answer that they think it is intelligent. So to me, the beauty of being human, exactly as you said, is partly in our imperfect nature and in this interesting mix of emotion and rationality, and in our relationship to other humans who are the same as us. There's maybe one more observation I'd offer on this. I'm still thinking about it, and perhaps the audience can look forward to my publishing something on it at some point. Suppose we say the Turing test is a good test in terms of output, so it's measuring the ability to simulate, but we don't think that's satisfactory in terms of whether, quote, something is really intelligent. By the way, I don't rule out that it's possible; I don't claim it's impossible. I just claim that, in my view, there's no evidence that we're there yet. But if we get there, I think it'll be very difficult to tell. It's going to be kind of a deep mystery, because of course what you're judging on is also the outputs, right?
So when people respond to this and say the Turing test is not a bad test, that it's totally aligned with cognitive neuroscience, and so on, I do understand that argument. But I think it'll be very, very difficult to tell. And here's how I think about this: there's a thing in philosophy and cognitive neuroscience called the hard problem of consciousness, which is explaining consciousness itself in humans. In fact, no one's really been able to agree on it in several thousand years of our best, wisest, smartest people thinking about it. I don't know why we think we should be able to agree on it with respect to machines. I think there's actually what I call the harder problem of consciousness, which is: if you have consciousness that's non-human, and by the way this could also be animal consciousness, which is a very interesting field, or machine consciousness, how do you know? If there's a hard problem just related to humans, then once you get outside of humans, it becomes what I call the harder problem of consciousness. So even though my strong hypothesis is that we don't have this now, and there's no evidence of it, if at some point it arrives, I think it's going to be very unclear how to decide whether it's real or not.
SPEAKER_01:Yeah. No, I can see that. And especially if it happens over time, and perhaps not with a big bang, then it will probably just creep and seep in, won't it? It's absolutely fascinating, and I could talk much longer about all of this, because I think it's fascinating. Maybe I should have studied philosophy as well; maybe you've set me off on something there. But looking at the... Pardon?
SPEAKER_02:It's never too late.
SPEAKER_01:Never too late. No, indeed, it is not, as you've mentioned with doing another degree now, while you're still working as well. So, Chris, absolutely fascinating. I really appreciate your time in talking to us and giving your insights, your opinions, and your views on leadership, of course, and everything that surrounds it. I think I've learned today that there is a lot more around it than perhaps I previously thought I knew. Thank you very much for being here and sharing that. And of course, thank you to our audience as well. We always love having you with us. If you liked what you heard, especially towards the end in the virtually anything goes question, we have a great episode with Ben Field, the CEO of Fusion Films, the company behind Virtually Parkinson, where they created an AI version of the late talk show host Sir Michael Parkinson, who interviews humans. Very fascinating, and related to much of what we spoke about here today as well. But in the meantime, thank you so much for listening. If you enjoyed this episode, please share it with somebody else who you think would be interested. And until next time, we'll see you in another episode. Take care.
SPEAKER_00:Thank you for joining us on this podcast. We hope you enjoyed it as much as we did. For other interesting topics, go to your favourite podcast platform or watch the video version on YouTube. Just search for the Virtually Anything Goes podcast. See you next time.