
AI and the Future of Work: Artificial Intelligence in the Workplace, Business, Ethics, HR, and IT for AI Enthusiasts, Leaders and Academics
Host Dan Turchin, PeopleReign CEO, explores how AI is changing the workplace. He interviews thought leaders and technologists from industry and academia who share their experiences and insights about artificial intelligence and what it means to be human in the era of AI-driven automation. Learn more about PeopleReign, the system of intelligence for IT and HR employee service: http://www.peoplereign.io.
Exploring AI's impact on humanity: A conversation with author, philosopher, and futurist Gary F. Bengier
What does it mean to be human when your colleague's a bot? Can AI ever truly understand us? This week, we're thrilled to welcome Gary F. Bengier, eBay's first CFO and author of the award-winning novel Unfettered Journey, as we dive into the future of work and the role of AI. Gary's background in Silicon Valley and his understanding of AI and technology make him the perfect guest to shed light on the ethical implications of AI, the potential impact of large language models on business, and the crucial differences between symbolic software and large language models.
As we unpack the World Economic Forum's prediction that AI will generate 97 million new jobs while eliminating 85 million in the next three years, Gary and I contemplate the implications of machines and humans working together. We discuss the possibility that robots could eventually build robot factories, detaching the output of the economic system from labor hours, and explore the question of sentience in the age of advanced technology. Join us for an important conversation and peer into the mind of one of the great philosophers and technologists of our time.
Oh, and learn what Gary says is a better definition for the acronym "LLM" :).
References in this episode:
Noor Al-Sibai, "ChatGPT Is Consuming a Staggering Amount of Water," The Byte (Futurism)
Good morning, good afternoon, or good evening, depending on where you're listening. Welcome back to AI and the Future of Work. Thanks for making this one of the most downloaded podcasts about the future of work. If you enjoy what we do, please like, comment, and share in your favorite podcast app, and we'll keep sharing amazing conversations like the one we have for today.
Speaker 1:I'm your host, Dan Turchin, CEO of PeopleReign, the AI platform for IT and HR employee service. I'm also an investor in and an advisor to more than 30 AI-first companies and, as you know, a firm believer in the power of technology to make humans better. If you're passionate about changing the world with AI, or maybe just looking for your next adventure, let's talk. Now, we learn weekly from AI thought leaders, but, of course, the added bonus is you get one AI fun fact each week. Today's fun fact comes from an article published this week by Noor Al-Sibai in The Byte titled "ChatGPT Is Consuming a Staggering Amount of Water." The headline caught my eye. Researchers from the University of California, Riverside and the University of Texas at Arlington did a study that revealed that, just in training GPT-3 alone, Microsoft, which is, of course, partnered with OpenAI to the tune of $10 billion, consumed a whopping 185,000 gallons of water, which is, per their calculations, equivalent to the amount of water needed to cool a nuclear reactor. What's more, ChatGPT needs to drink the equivalent of a 500-milliliter bottle of water for a simple conversation of roughly 20 to 50 questions and answers. As always, we'll link to that intriguing article in today's show notes. I encourage you to go read it. But now, shifting to this week's conversation: few are as qualified to have opinions about AI and the future of work as today's guest. Gary F. Bengier was the first CFO at eBay and has had a hand in bringing technologies to market as a Silicon Valley exec, including hard drives, printers, and even video streaming.
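For a rough sense of scale, here is the back-of-the-envelope arithmetic behind that water fun fact, a quick sketch using only the figures quoted above; the 35-questions-per-conversation midpoint is our own assumption, not a number from the study:

```python
# Back-of-the-envelope arithmetic for the water fun fact above.
# The 185,000 gallons and the 500 ml per 20-50 questions come from the
# episode; using the midpoint of that range (35 questions) is an assumption.

GALLONS_TO_LITERS = 3.785

training_gallons = 185_000                    # reported water use to train GPT-3
training_liters = training_gallons * GALLONS_TO_LITERS

bottle_ml = 500                               # one bottle per conversation
questions_per_conversation = (20 + 50) / 2    # assumed midpoint of 20-50
ml_per_question = bottle_ml / questions_per_conversation

print(f"Training GPT-3: ~{training_liters:,.0f} liters of water")   # ~700,225
print(f"Per question:   ~{ml_per_question:.0f} ml of water")        # ~14
```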
Speaker 1:Gary is also the author of the recently published, award-winning Unfettered Journey. It's a fascinating read. I'm not all the way through it yet, quite dense, but a great read. It's a work of fiction that weaves philosophy and AI into, of all things, a love story about what it means to be human. Gary received his MBA from Harvard. And gosh, without further ado, Gary, it's my pleasure to welcome you to the podcast. Let's get started by having you share a bit more about your background and how you got into this space.
Speaker 2:Well, Dan, I'm delighted to join you and your audience today. Thanks for that intro. You covered a lot of the material. Yes, I spent about a 30-year career in Silicon Valley focused on all kinds of baseline technologies, everything from half a dozen years in biosciences to computer peripherals, chip design, video on the internet, and the internet itself, of course, as you mentioned. And so that gave me an idea about a lot of technologies and, I think, about how they will come together as we look over this next century. And that idea of what the future will most likely look like was one of the impetuses behind writing my book, Unfettered Journey.
Speaker 1:You're a Renaissance man, a deep thinker, a philosopher. What inspired you to go the route of fiction when you authored Unfettered Journey?
Speaker 2:Okay. Well, if you look at futurism and science fiction books, I actually think a lot of them get it wrong, perhaps because many writers don't have the benefit and the luxury of having been in so many of the technologies, and so it's really hard to predict the future. If you had written a book in, say, 2007, you might have missed the iPhone, right? And think about how much that has fundamentally changed our experience, and so your book would be wrong right away. And so many writers, I think, give up. They have dragons, they have very unscientific ideas, faster-than-light spaceships, and so I think the average person has a warped idea of the future. You know, it's a Hollywood version that won't really happen.
Speaker 2:And yet it is very hard to predict the future. I mean, a year ago, who would have predicted that Putin would invade Ukraine, right?
Speaker 2:So the world has turned upside down in many ways with one event, so I can't tell you what will happen tomorrow. It's just very hard, and so, for example, it's very hard to predict what AI and LLMs, large language models, might do over this next decade. But I do think we can look at the trends of these technologies over the longer term, and we can look at scenarios about what is highly likely. And if you look out a little bit farther than this next 10 or 20 years, I think you can say some things with more certainty. And so that was one of the impetuses behind my book, and that's why I place it in the quote-unquote near future, which is the year 2161, roughly 140 years in the future, because I think we can say some things about what the world would be like then that are highly likely. And the important thing about that approach is that I think it perhaps helps focus us on what are the major challenges for humankind in this next century-plus.
Speaker 1:So, Gary, no dragons and no faster-than-light spaceships in Unfettered Journey. But it does force you to ask the question: what does it mean to be human? What does it mean to be human?
Speaker 2:Yes, well, in fact, my book has a philosophical approach. Actually, it's won several awards, as spiritual fiction as well as hard science fiction, so bringing those together. So, what is the essence of humanity? Well, I'll fall back on quoting Shakespeare in Hamlet: "What a piece of work is a man! How noble in reason, how infinite in faculty! In form and moving how express and admirable! In action how like an angel, in apprehension how like a god! The beauty of the world. The paragon of animals. And yet, to me, what is this quintessence of dust?" So what are Shakespeare and Hamlet doing there? They're saying something fundamental about humanity and asking, what are we really? Why are we here? They're asking very deep questions. And since the topic of your podcast, Dan, is AI and the future of work, that's something that I think distinguishes us from our AIs and will continue to do so for a long time.
Speaker 2:So you asked a question about my background. One of the things I did after my career in technology is I went back to school, backfilled an astrophysics degree, backfilled an undergraduate philosophy degree. Then I got interested in that and got a master's in philosophy, focusing on philosophy of mind, and I really spent a lot of time thinking about what is consciousness, what is the "I" that is the center of you, and, related to that, can our machines ever be conscious? Can they ever be sentient? And that conversation, it turns out, is coming up faster than any of us imagined, right now, in the context of machine learning, LLMs, and AIs in general.
Speaker 1:Gary, we've published about 200 episodes of this podcast, and that's the first time that a guest has quoted Shakespeare. That was beautiful. I'm humbled, thanks. That was so appropriate and so beautiful, and the best way to answer a question about what it means to be human. So I'm gonna ask maybe a harder version of that question. In the era we'll call PG, or pre-GPT, or at least pre-ChatGPT, which was last year, we had some vigorous discussions on this show about whether or not the Google bot was in fact sentient. An engineer at Google named Blake Lemoine claimed it was. Most of the tech community claimed it wasn't, and it forced us all to ask: what is sentience? And I think that relates closely to what it means to be human. I've been eager to get your definition of sentience. And maybe, if I could ask one related question: should we care?
Speaker 2:I think we should care. I would say that consciousness is what differentiates us from mostly every other species; there are arguably several others that are conscious. So, definitions. I actually mention these in my book Unfettered Journey. The main character is Joe, an AI scientist whose job at the government ministry is to create robot consciousness. And Joe has been frustrated in his job because he doesn't think he can, and so he goes off on a sabbatical to a small college where he's trying to work through what consciousness is, so he can figure out if this is actually possible. And just a little bit about the plot: then he meets a woman who is a crusader for social justice in that year, 140 years from now, and then the story ensues, which is an adventure and love story with quite a dollop of philosophy, and it asks this question. So, in the definitions, there is sentience, which is the ability to feel.
Speaker 2:There is a layer below that, in terms of the pyramid up to full consciousness, where you don't even feel, right? At some point it's like digestion, right? It's a chemical process. But at some point, beyond a sea slug, you can feel. So a rabbit suddenly sees something, a hawk chasing it, and it runs like heck. Well, it's experiencing fear. It is sentient. It may not be conscious. Consciousness is another level above that. And so I do distinguish between sentience, can you feel something, and consciousness, which is a different state. And I actually explore that in the book, in terms of asking: can even these machines be sentient? Can they feel?
Speaker 2:So, for example, there's a debate in the book early on where a sort of obnoxious academic says, well, they walk around and they seem to be conscious, isn't that true, Joe? And Joe says, no, the truth is that they're not conscious. It's cheap tricks all the way down. And the obnoxious academic says, well, we've met the robots that have AIs embedded, that have those foreheads that glow with various colors; they turn pink when they're embarrassed. And Joe says, no, that's again a cheap trick. Sure, we have this counter inside. It goes from zero to 100, and when it gets to 100, the bot shuts off. Okay, it's like a kill switch. So, yes, it's programmed to avoid this.
Speaker 2:You might characterize that as fear, or pain more particularly. But what is it like to feel pain? In philosophy of mind, there's a concept called qualia: what is it like to feel a certain way, right? You know, what is it like to eat an apple? What's the embodied experience of that? That feeling is a sentient experience, and a conscious experience in that case, and it's extraordinarily hard to think how we will embody that and, in fact, how we will even know when our AIs and our robots might have it.
Speaker 1:As machines get increasingly complex, both in terms of their processing capacity, their speed, their manual dexterity, et cetera, couldn't you also say that humans act very mechanically, and there's a way to decode the computational process of biology? And right now we just agree that humans are sentient because the computations are too complex to even fathom. We don't know how to compute what a human's gonna do based on a certain set of sensory inputs. But just as a thought experiment, could there be a time when we've decoded that, and the capabilities of humans and machines, call it sentience or whatever you want, start to converge?
Speaker 2:Well, that's a very important point. These large language models have, at least, over half a billion, no, excuse me, half a trillion, over 500 billion, essentially different dimensions that they aggregate all together, okay, that many dimensions of computation. And that high dimensionality in the way the computer programs work is what makes them so astonishing, and that's actually fundamental to how ChatGPT has become so astonishing in this last year, right? So that's fundamental.
Speaker 1:So, thinking about the fear mongers among us: you and I, before we were recording, talked about the fact that we are, generally speaking, optimists with respect to AI and the future of work, or humanity. But there are a lot of fear mongers who say eventually, when nefarious people get in front of these bots and start having them break into bank accounts and murder humans and eradicate species and various things like that, we will lose control of these bots. I continue to be optimistic. I feel like if humans are smart enough to develop these intelligent, possibly even sentient, bots, we're also smart enough to be able to rein in the potential harm that they could do. But I would love your perspective on that. Is there a time or a way you envision where the bots may go rogue?
Speaker 2:Okay, well, let me try to parse that apart, because there are so many levels there. Now, before we started this, I mentioned to you I've just come back from the Santa Fe Institute, where I'm on the board, and we had a series of science meetings on the complexity of knowledge, and the topic involved a lot about ChatGPT and what that means, and we had quite a number of very smart people who are experts in that field. Also, several years ago, there was a workshop that I attended there called AI and the Barrier of Meaning, dealing with, can these bots ever, you know, jump the chasm to be conscious, et cetera. And just a couple weeks ago they had the 2.0 version of that same workshop. So I can report right now on what's happening with at least some of the thinkers on this. So, okay, let me go back and forth between the pluses and minuses of what we think about ChatGPT today.
Speaker 2:So, on the one hand, just today, Dr. Geoffrey Hinton, you know, resigned from Google. He was the top person, the father of AI. Because he's concerned about the future and the bad things that may happen, and, not working for the company, he can speak freely. And there was a letter signed by, I think, 19 of the major researchers who are cautioning about moving ahead with AI at the speed we're doing now, because of so many unintended consequences that might result, particularly because we have two of the largest companies in the internet space who are moving ahead, because of competitive pressures, to develop their AI. And we can think of lots of bad things that can happen when moving this ahead really fast without thinking through the consequences. Dan, you just mentioned the bad actors. You can do an enormous number of things. I mean, for example, between ChatGPT and DALL-E, we can imagine lots of ways that images and videos will be engineered that appear to be real and aren't. And so we've got this enormously powerful tool that so many people can use for bad things, and that is really deeply worrying. So that's one, and I do worry about that a lot more now than I did a year ago, because of the power of these tools. And I'm going to set that aside for a second, because I'd like to talk about the impact on work in more depth in just a minute. By the way, because of what ChatGPT and these LLMs, large language models, can do, I think they're going to be tremendously impactful, certainly over this next decade. I think lots of people will make lots of money off of this because of the disruption that it will cause.
Speaker 2:But now let me switch to the other side. Will this keep on going? And, you know, is this going to fundamentally change everything right away? So one of the things that happened several years ago, at that first workshop that I mentioned at the Santa Fe Institute, is that the experts there, to my shock, were very, very negative on where AI was going, because they were worried about AI winters, where everyone gets hyped up about the AIs and then suddenly they hit a wall and can't deliver on them. What was reported this past weekend, at the 2.0 version, is that several people in leading fields, including leading researchers at those major companies, have said we won't be talking about LLMs very much in a few years. There's a wall out there, okay, and it's hard to go the next step. So let me explain why I think there's a wall, and can I get a little technical for a moment on the code? Because I was a computer science and decision theory undergraduate, though I haven't coded a lot for a whole long time.
Speaker 2:But how does the code really work? Let me explain that a little bit. I'm going to use the example of Google Translate, right? When Google was inventing translation, and we have these wonderful tools. If you've used those things, they're great, right? You put in "Jack fell down," and you know what happens? The Spanish comes out. My Spanish is horrible, but it comes out, and I'm going to say it's "Jack se cayó," okay, Jack fell. Okay.
Speaker 2:The first way that those worked is that they had a bunch of programmers who coded in the rules of grammar. They were basically if/then statements, and they had to code the grammar. And as these machine learning algorithms started to develop, what they realized is, if you use a lot of data, you don't need that, and basically you can get rid of all those programmers doing the grammar rules. You don't put in any grammar rules; you let the data models spit out what the next thing to happen is, okay, the next word, phrase, et cetera. And so what happens? There are no grammar rules in Google Translate today, as I understand it, because the large language models, the machine learning algorithms, do that. Okay, so now let me distinguish between symbolic software and these LLMs. Symbolic software, you know, it's like how you code up Wolfram and such math pieces of software. They're algorithmic: you start with certain premises and it absolutely gets to the logic at the end, so it gives you the right answer.
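To make that contrast concrete, here is a minimal sketch of the two approaches Gary describes. It is purely illustrative: the tiny dictionary, the single grammar rule, and the lookup table standing in for a trained model are all invented for this example, not anything Google actually ships.

```python
# Toy contrast: hand-coded grammar rules vs. a learned model.
# Everything here is an illustrative stand-in, not real translation code.

# Approach 1: symbolic, rule-based translation (programmers write if/then grammar).
LEXICON = {"jack": "Jack"}  # tiny toy dictionary

def rule_based_translate(sentence: str) -> str:
    words = sentence.lower().split()
    # A hand-written grammar rule for "<subject> fell down":
    if len(words) == 3 and words[1] == "fell" and words[2] == "down":
        subject = LEXICON.get(words[0], words[0].title())
        return f"{subject} se cayó"  # Spanish reflexive past tense
    return "<no rule covers this sentence>"  # every new pattern needs a new rule

# Approach 2: learned translation (no grammar rules anywhere).
# A real model learns hundreds of billions of weights from data; a lookup
# table stands in here for "emit the most likely output seen in training."
LEARNED_PAIRS = {"jack fell down": "Jack se cayó"}

def learned_translate(sentence: str) -> str:
    return LEARNED_PAIRS.get(sentence.lower(), "<model generalizes from patterns>")

print(rule_based_translate("Jack fell down"))  # Jack se cayó
print(learned_translate("Jack fell down"))     # Jack se cayó
```

The design trade-off Gary points to is visible even at this toy scale: the rule-based version only ever covers the patterns someone thought to write, while the learned version covers whatever the data covers, with no explicit grammar anywhere to inspect or fix.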
Speaker 2:So some people have pointed out that with ChatGPT, at least ChatGPT 3, you could put in simple math problems, you know, if Ron is traveling east at 50 miles an hour and Mary's going the other way, et cetera, those kinds of typical things we had in high school, and it would give you entirely the wrong number, the wrong answer. It would just be wrong. Anyone can look this up. You could look up the distances on Google Maps and say, yeah, there's only 50 miles between them, and it's, you know, totally BS. And that's because the large language models were not incorporated with the symbolic software, so they can give you really bad answers. And, by the way, that's one of the problems now. Someone put in, what are the three scandals associated with X, right? And ChatGPT proceeded to make up three scandals and gave references. Okay, so one of the folks at one of the conferences, I understand, was a lawyer, and she said LLM stands for "large libel models," because the attorneys are gonna go crazy, and they already are, because there are lawsuits where, as one example, some college or university checked ChatGPT about some candidate, got a completely erroneous, scandalous background, denied them a job, and they're suing. So there are a lot of problems with this.
Speaker 2:Now, why did I distinguish between LLMs and the symbolic?
Speaker 2:Well, you can imagine, as a first step, you can fix some of this by just putting an if/then layer on top.
Speaker 2:If it's this kind of problem, then go down into the symbolic software, where we know there's a right answer, and pull that out, right? Okay.
Speaker 2:But I think it's really hard to do that, because the reason Google abandoned their first approach with Google Translate, which was basically a bunch of if/then statements, think of it as a jumble of code, right, that may or may not be right, and replaced it with this one large way to do things, is that the large model sort of doesn't have that kind of ad hoc feel to it.
Speaker 2:But if you add a layer on top, well, let's make ChatGPT not answer any question about Biden or Trump, and then the chatbot says, I'm sorry, but I don't deal with that. You've just added an if/then layer on top of the thing. And so my question for the next iteration of the LLMs is, can they figure out a way to overcome that particular issue? Or will we find that what ChatGPT does now is phenomenal, and I can talk about what that will change, but maybe it's not gonna continue forever into a singularity where the AIs are all gonna take over. So, a long explanation, but I think it was important to get into the details to make that point.
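Here is a minimal sketch of the kind of layered guardrail Gary is describing: an if/then router sitting in front of an LLM. Everything in it is hypothetical: call_llm() is a stand-in for any chat-model API, the blocked-topic list echoes Gary's Biden/Trump example, and the tiny symbolic solver handles only the closing-speed word problems he mentioned.

```python
# Hypothetical "if/then layer on top of an LLM": refuse, route to symbolic
# software, or fall through to the model. Illustrative only.
import re

BLOCKED_TOPICS = ("biden", "trump")  # topics the layer refuses outright

def call_llm(prompt: str) -> str:
    # Stand-in for a real chat-model API call.
    return "<fluent free-form answer, possibly wrong about math or facts>"

def solve_symbolically(prompt: str) -> str | None:
    # Tiny symbolic solver for the high-school closing-speed problems Gary cites.
    speeds = [int(n) for n in re.findall(r"(\d+) miles an hour", prompt)]
    gap = re.search(r"(\d+) miles apart", prompt)
    if len(speeds) == 2 and gap:
        hours = int(gap.group(1)) / sum(speeds)  # arithmetic, guaranteed right
        return f"They meet after {hours:g} hours."
    return None  # not a problem this solver recognizes

def answer(prompt: str) -> str:
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        return "I'm sorry, but I don't deal with that."  # the if/then refusal
    symbolic = solve_symbolically(prompt)
    if symbolic is not None:
        return symbolic                                  # symbolic route
    return call_llm(prompt)                              # default: the LLM

print(answer("Ron travels east at 50 miles an hour, Mary at 25 miles an hour, "
             "and they start 150 miles apart. When do they meet?"))
# -> They meet after 2 hours.
```

The catch, as Gary notes, is that every such rule reintroduces exactly the ad hoc jumble of code the learned model was supposed to replace.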
Speaker 1:The World Economic Forum says within the next three years, AI will generate a net new 12 million jobs: it's gonna lead to the elimination of about 85 million, but it will lead to the creation of about 97 million. You taking the over or the under on that stat?
Speaker 2:Oh, geez. I agree with the magnitude, I'll tell you that. Let me give you an example. GitHub has code, lots of code, okay, and it's been used to speed up coding, because you can just grab a piece and incorporate it. I heard again this weekend, there's some company, they do a Windows version and they do an Apple Mac version, and they had lots of programmers that were making both versions. Well, they suddenly realized, we can fire half the folks, make one version, then use GitHub with this machine learning to create the other one, okay, without all those programmers. So now you've got a very high-end job, computer programming, which is filling the universities today, and yet here you've got a high-end job that may be subject to a lot of compression in the number of jobs. So, the title of your podcast, Dan, can we kind of switch to jobs, I think?
Speaker 1:We just did.
Speaker 2:Okay, so my book Unfettered Journey actually really worries about this a lot. Here's why I reached out over a century. We can see what's happening with robots and AI already. With robots, if you look at the Boston Dynamics examples that are throwing free throws from the center line of the court and can dance, it looks like that's gonna happen tomorrow. I don't think so. I think it'll take longer for robots to really be great. We've got to lower the weight of the batteries, as just one example. But I think, given the economics, it is almost impossible to imagine that they won't be ubiquitous within a century, century and a half. They'll be walking around among us, and the AIs will be the brain inside of the robot.
Speaker 2:So when we talk about AI and robotics, it's important to talk about both; they're following each other. And what that means is that we will continue to see jobs disappear. And what's the end result of this? The end of this, I believe, is that robots will build the robot factories, and when robots build the robot factories, the number of robots per person gets very large, and, for the first time in human history, the output of the economic system is not tied to labor hours or labor productivity or anything that has to do with human labor. So we'll have a ton of robots, we'll have a ton of stuff, and we'll have very few jobs. And the big challenge for humanity is that I think that disappearance of jobs will be faster than the increase in stuff. And then it leads to an enormous question of how you allocate the stuff.
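A toy model makes the decoupling Gary describes easy to see. All the numbers below are invented for illustration; the only point is the structure: once robots replicate themselves, output compounds with the robot population while human labor hours stay flat.

```python
# Toy model of the "robots build the robot factories" endpoint.
# Invented numbers; the point is that output has no human-labor term.

human_labor_hours = 1_000_000   # held constant the whole time
robots = 1_000.0                # initial robot stock
OUTPUT_PER_ROBOT = 10.0         # goods per robot per year (arbitrary units)
GROWTH_RATE = 0.5               # new robots built per robot per year

for year in range(0, 31, 10):
    output = robots * OUTPUT_PER_ROBOT  # output scales with robots alone
    print(f"year {year:2d}: robots={robots:>15,.0f}  output={output:>16,.0f}  "
          f"labor hours={human_labor_hours:,} (unchanged)")
    robots *= (1 + GROWTH_RATE) ** 10   # compound growth over the next decade
```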
Speaker 2:Because we have an economic system, and I'm a market capitalist. I believe that market capitalism has been instrumental in bringing a huge number of people out of poverty and bringing the standard of living for humans to a very high level. The example I use is the folks who left Castro's Cuba and went to Miami in the US: the statistic is that they're, on average, seven times wealthier than the folks they left behind. So that brand of communism and socialism is not effective at creating enough stuff. When you have a market capitalist system, it causes people to work long hours. I've been working 80-hour weeks over quite a number of jobs in my career.
Speaker 2:Someone may say that is really dumb. You only have so many hours on earth. Why did you do that silly thing, in terms of how you think about human life and what you're gonna do? But that's what the system does, and it produces a lot of stuff, and more efficiently, and so that's good. But now that system's gonna change, because the exponential ability of the technology to concentrate wealth will continue, so that will be a big challenge too. How do we think about hopefully evolving the system that we now have, which has been very effective, into something that isn't orders of magnitude more unequal than the one we have today?
Speaker 1:Biggest challenge. Gary, I gotta get you off the hot seat, but not before you answer one last question, and I wanna come back to a theme that we were talking about at the start of the conversation. So in that future, like you just mentioned, where, let's say, your colleague's a bot, and a lot of the economic output that traditionally you, a human, were responsible for delivering can now be delivered much more efficiently by a bot, just kind of at an existential level, what do we do? What are the things that we continue to be uniquely good at in an era where all the mundane stuff, presumably all the stuff that generates the most economic output, we can outsource to a bot?
Speaker 2:But now you're getting to another one of the key themes in my book: how do we find meaning and purpose in a world where jobs don't dictate who we are, because there are so few jobs? And yet it can be a utopia if we can figure out how to equitably share the economic fruits of that system. And it's hard to say what the answers are. Creativity, some folks say; oh, everyone will become poets. I think there will be some of that. If you look back to the Greeks and the Romans, we sort of idealize what came out of those cultures. Well, part of that was because, in both those cases, I think 40% of the populations were slaves, so they had this unconscionable moral issue with how they accomplished that. And this argument that you mentioned in the beginning, about whether our robots are conscious: because of my thinking about this in terms of philosophy of mind, I think that is certainly not an issue that we will really come to in this next century, if ever.
Speaker 2:I think it's a long time off. And that means we can think about the AIs as more like the Google Maps voice: you type in, or you say, drive me to X, and then the voice talks to you, and I think none of us really thinks that that voice is some other agent, right? And so if that continues to be true, which I believe it will be, then I don't think we'll have a moral issue with having these AIs and these robots doing our work. So we have sort of that Greek culture again, without any of the moral downside, because these will just be machines that do our stuff for us, and that will free up a lot of our time to do a whole lot of things. But there'll still be a problem, because even today we have too little time to take in all of the creative aspects of life out there. Dan, five times more people should be listening to your podcast, okay, but they're busy, okay. There are 4,000 books that come out every day. There are over a hundred thousand songs uploaded to Spotify and the other platforms every day. So there's an enormous amount of creative output already that can't all be consumed.
Speaker 2:So, again, I don't know what the answer is. Some folks say, oh, we'll become poets; well, maybe not. And so one of the last challenges I'll mention is that ChatGPT and the others mean that figuring out how to use them, you know, writing prompts, will be really important. In fact, I suspect that in the average office, the thing that will change most is that people will write a prompt, they'll use a prompt to query ChatGPT, before they write any email that's anything important, right? When you go to university, one of the important things you'll learn is how to write good prompts, okay? And that will actually add to the digital divide, because if you can't do that well, you will lose this enormous competitive advantage and intellectual advantage to everyone else. So I worry a lot about the social justice issues here, because this gives folks who are on the higher end of intelligence just way more leverage. So how do we deal with that? That's also in my book, Unfettered Journey.
Speaker 1:And that sounds like where we're gonna pick up in the next version of this conversation. Gary, we are just getting started, but gosh, that's all the time we have for today. Would you mind coming back another time?
Speaker 2:I'd be happy to, yeah. Delighted, Dan. Thank you.
Speaker 1:Fascinating discussion. To our listeners: go out and buy the book, Unfettered Journey. And Gary, where else can our audience learn more about you and your work?
Speaker 2:They can find it at my author website, garyfbengier.com, that's B-E-N-G-I-E-R, and you can find the book anywhere you buy books. Thank you very much.
Speaker 1:That's all the time we have for this week on AI and the Future of Work. As always, I'm your host, Dan Turchin from PeopleReign. And of course, we're back next week with another fascinating guest.