HR Data Labs podcast

Beth White - The Evolution of AI in the Workplace

David Turetsky Season 9 Episode 11


Beth White, Founder and CEO of MeBeBot, joins us this episode to discuss lessons learned from working with AI. She also shares concerns regarding the continued widespread integration of AI in organizations as well as hopes for the future of AI. 


[0:00] Introduction

  • Welcome, Beth!
  • Today’s Topic: The Evolution of AI in the Workplace

[5:21] What has Beth learned over the years working with AI?

  • How AI has evolved from basic if-then statements
  • The journey toward developing conversational AI

[10:20] What are Beth’s concerns regarding AI?

  • Generative AI consumes an unprecedented amount of energy
  • AI regulation remains largely unstructured, especially within organizations
  • The importance of grounding the AI with human checks and balances

[26:23] What are Beth’s hopes for the future of AI?

  • Educational programs developed by a diverse set of professionals
  • The natural integration of AI into HR practices

[34:56] Closing

  • Thanks for listening!


Quick Quote

“[If we] help educate and train people on AI today, we’ll have a more employable workforce for the future.”

Contact:
Beth's LinkedIn
David's LinkedIn
Podcast Manager: Karissa Harris
Email us!

Production by Affogato Media

Announcer:

The world of business is more complex than ever. The world of human resources and compensation is also getting more complex. Welcome to the HR Data Labs podcast, your direct source for the latest trends from experts inside and outside the world of human resources. Listen as we explore the impact that compensation strategy, data and people analytics can have on your organization. This podcast is sponsored by Salary.com, your source for data technology and consulting for compensation and beyond. Now here are your hosts, David Turetsky and Dwight Brown.

David Turetsky:

Hello and welcome to the HR Data Labs podcast. I'm your host, David Turetsky, and like always, we try to find the greatest minds inside and outside the world of human resources to bring you the latest on what's happening today. We have with us Beth White from MeBeBot. Beth, how are you?

Beth White:

Hey, David. I'm doing well. It's great to see you again. I know viewers can't see us, but we had a chance to meet at HR tech this year.

David Turetsky:

We did, and it was an extremely busy HR tech, if you remember correctly!

Beth White:

Oh yes, you were, you were non stop doing your podcast.

David Turetsky:

Yes, yes. And the good news is that they've ended now, finally; we've produced and we've published the last of them. So now we're looking forward to the next HR Technology Conference, 2025!

Beth White:

Wonderful

David Turetsky:

Beth, why don't you tell everybody a little bit about you and MeBeBot?

Beth White:

It's great to be here today, and I'm Beth White, founder and CEO of MeBeBot. At times I called myself the chief bot because it was a lot of fun in the world of HR.

David Turetsky:

And you're very much a real person.

Beth White:

I'm a real person, but, you know, there are people that operate AI, so that's one thing to always keep in mind, and that's part of what I'm sure we'll discuss today. But I spent my early career in HR, working in all different facets of HR. Frankly, I left the profession for years, being a little bit frustrated, and came back to bring solutions. And really it was the advent of seeing a lot of different types of AI technology popping up, literally, on consumer-facing websites in the form of chat bots that truly inspired me to say, hey, this is a time where we can bring solutions to HR, for HR, that are really designed to help improve operational efficiencies and free up some of that valuable time that HR needs to, frankly, be more strategic to the business and provide that overall value. And now, more than ever, with the world of AI shaping our businesses and our daily lives, you know, we're out there evangelizing and educating, often about, you know, meeting people where they are on their journeys to learn about AI.

David Turetsky:

Perfect. That's wonderful. So Beth, we ask everybody this, what's one fun thing that no one knows about you?

Beth White:

You know, David. Everyone has these great career paths. Mine has taken a lot of different turns over time. But one fun fact is, when I left college, or after I graduated from college, I moved out to the Pacific Northwest, and frankly, was a little bit lost as to what to do next with thinking about law school, thinking about other things, and had a chance to work on a fishing boat in Alaska,

David Turetsky:

wow!

Beth White:

and help pay off my student loans! So it's one of those fun facts that, you know, I'm still looking back thinking, that was pretty crazy. It's not like, you know, the Deadliest Catch TV show; I was actually on one of those larger fishing boats that, frankly, operate like, you know, floating factories. And talk about cold. That was a very cold experience, you know, in the middle of the Bering Sea off the coast of Alaska. Yeah.

David Turetsky:

Wow! That sounds really cool, well, and actually really cold as well.

Beth White:

Yeah, exactly it is.

David Turetsky:

The reason why Beth's mentioning this is because it's December here in Massachusetts, and it is freezing cold in my office right now. So, yeah, yeah. Well, anyways, I actually feel even colder thinking about the Bering Strait as well as being in Alaska on a fishing boat. I'm sure you've got some really cool stories, which

Beth White:

I do. I mean, you meet a lot of interesting people in those experiences, and you learn some different things about, you know, professions and ways of life and even food production.

David Turetsky:

And let's just say you were not the HR person on the boat, because that was a very different job.

Beth White:

I was not! That's right, that would have been a different job. So

David Turetsky:

Yes, there you go. Well, today we've got a very interesting, I would say, not very different topic than we've been talking about over the last six months, and it's a very important one. In fact, I think this harkens back to one of the first podcasts on HR Data Labs, which is ensuring ethical AI for HR by ensuring that humans are in the loop to provide supervised training to AI, whether it's people analytics or in the data sets. So Beth. Our first question is, how long have you been working in artificial intelligence, and what have you learned over the years that can help us with this?

Beth White:

Oh, well, there's so much that I'm continually learning. And I started the process of really digging into AI back in 2017 and as I mentioned, I was starting to see, you know, what was called at the time, conversational chat bots

David Turetsky:

right

Beth White:

You know, popping up on different types of consumer-facing websites. And I thought, how do these really work, or don't work, which was the case at that point in time, and tried to unravel it a little bit. There were so many use cases for AI on, for example, your bank's website, or, you know, a cellular carrier's website. And you thought it would be helpful, but yet you were spun into a circle. And so in digging into the technology more, it's really a matter of, you know, there was a movement from if-then statements, meaning decision trees, that were essentially the first iteration of conversational chat, to natural language processing and machine learning, the basis of the technology that was brought into tools that we use daily, like Siri or Alexa or Google Home.
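The shift Beth describes, from rigid if-then decision trees to statistical language understanding, can be sketched in a few lines of Python. This is purely an illustration of the two generations of chatbot she contrasts, not MeBeBot's actual implementation; the menu text and intent vocabularies are made up:

```python
# First-generation "conversational" chat: a rigid if-then decision tree.
# Any input that is not an expected menu option dead-ends the conversation.
DECISION_TREE = {
    "question": "Do you need help with (1) payroll or (2) benefits?",
    "1": {"question": "Payroll runs on the 15th and last day of the month."},
    "2": {"question": "Open enrollment starts November 1."},
}

def tree_reply(node, user_input):
    # Walk one step down the tree; unrecognized input goes nowhere.
    child = node.get(user_input.strip())
    return child["question"] if child else "Sorry, I didn't understand that."

# NLP-style intent matching: score free-form text against known intents,
# so "when do I get paid" still routes to the payroll answer instead of
# dead-ending the way the decision tree would.
INTENTS = {
    "payroll": {"paid", "payroll", "paycheck", "salary"},
    "benefits": {"benefits", "enrollment", "insurance", "401k"},
}

def match_intent(text):
    words = set(text.lower().split())
    scores = {name: len(words & vocab) for name, vocab in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None
```

Real systems replace the bag-of-words overlap with learned embeddings, but the contrast is the same: the tree only accepts its own menu, while the intent matcher tolerates natural phrasing.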

David Turetsky:

right

Beth White:

And the more, you know, companies or entities started to develop natural language processing, there is this component of machine learning, and machines only learn with humans involved. And so that was the point in time when I was really able to see: how do you train AI to respond more accurately? Because in what I foresaw as how MeBeBot could come to life, the only reason it would succeed is if there was a greater level of accuracy than what I was seeing at the time, seven years ago now, in the consumer-facing website chat bots, right? Because they weren't accurate. They did not build trust or loyalty. And so that was a big kind of aha moment: to really start to uncover, you know, how do you train AI, and where does a human in the loop come into the process, so that you can have a solution that people want to use and want to come back to using time and time again.

David Turetsky:

But I think if we look at even the current iterations of Siri and Alexa and some of the others, even, to the extent of which, if you're using chat bots today inside of web applications, they're still not conversational. In fact, they're barely reactive when it comes to having second or third challenge questions. It's not even, frankly, if-then; it's asking a question and then trying to follow up, and the thread is lost because the first question doesn't get followed up on at all. And so I guess the question is, how have we evolved at all, even in the chat bot technologies, to be able to answer questions better, by being able to get that secondary or tertiary, at least explanatory, question or revision that the technology can understand, so it makes it easier for the consumer?

Beth White:

Yeah, that's a great question, David, and I think right now we're in a really cool space. We're at a tipping point where we're going to see a whole next iteration of conversational chat that actually works. And really what's going to make that possible is that the evolution of generative AI has produced ways that you can actually inject generative AI into conversational flows, so that you can continue down the process with a user or an employee that's interacting with the chat bot, to guide them through a process, to guide them to getting more answers to their questions. So you're right, it's been challenging because of the ways that AI learns. A lot of times, conversations kind of stopped flat. When that was happening in the early days of MeBeBot, what we would do is use AI to surface related topics, and then even in companies that were using AI chat bots on their websites, they were doing escalation paths; that's how it was kind of being handled. But now, in the era of AI, with what's called semantic kernel and other types of interjecting of code or algorithms into the process, we're going to see a huge surge of new activity that's going to drive the adoption and usage of conversational AI in the coming year. I think it's definitely going to happen in 2025.

Announcer:

Like what you hear so far? Make sure you never miss a show by clicking subscribe. This podcast is made possible by Salary.com, now back to the show.

David Turetsky:

So what are your biggest concerns currently about artificial intelligence?

Beth White:

Oh, David, there's a number of them. I mean, being in this space for a number of years, everything from, frankly... I posted something today about the environmental impact of AI and the energy usage of AI. It's a sustainability issue, right? We see that Microsoft, Google, and Amazon, all within the last six months, have purchased nuclear power to be able to, you know, supercharge AI to the level of capacity that's needed to process AI in the generative sense. It's really kind of sucking up more and more energy to do so.

David Turetsky:

It's fascinating, isn't it, that we've gotten to a place with Moore's law where our processing power is so amazing that we're literally able to create almost neural networks for our computing systems. The watches we have on our wrists are more powerful than any computer that existed in the entire world before, you know, the 1980s. But what we're doing now is creating these things that are requiring us to innovate, or even to go backwards, in our energy production to be able to withstand the energy needs that this processing power is going to require. It sounds crazy, but it's actually very true. It's just mind boggling.

Beth White:

It is.

David Turetsky:

I mean, you're bringing up sustainability. Well, who would have thought that it's not even the internet, but it's AI, that's going to draw all of these resources for just processing and reprocessing data?

Beth White:

Yeah, I read a statistic recently that said, for every 100 AI training tests, it's like having left your, you know, hair dryer on for a couple hours. It's just the kind of energy usage that is required for some simple, you know, algorithmic calls that, again, is consuming the energy. So what do we do about it? I know that it is not being lost on even the large entities purchasing these additional sources of power, from whatever source they may choose, but I do think that there are ways that people developing their own technology solutions within this space can come up with methodologies where it's not actively calling the AI for everything possible. And that's really where we are in the era of large language models. What we'll likely see in 2025 is more small language models, which do not require as much energy consumption but yet can do the same types of tasks for the specific business use case. And so it's very exciting to see that we are already going from, I almost think of it as a funnel, this huge funnel, just everybody consuming everything, to going, let's just consume what we need. And so we're starting to see that happening within some of the newer methodologies for architecting AI solutions.
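Beth's point about not "actively calling the AI for everything possible" can be sketched as a simple router with caching. This is purely illustrative: `small_model` and `large_model` are stand-in stubs, not any vendor's API, and the word-count routing rule is a made-up heuristic:

```python
# "Consume only what you need": answer repeated questions from a cache,
# route short queries to a cheaper small model, and reserve the large
# model for everything else.
from functools import lru_cache

def small_model(prompt):
    # Stand-in for an inexpensive small language model.
    return f"[small model] {prompt}"

def large_model(prompt):
    # Stand-in for an energy-hungry large language model.
    return f"[large model] {prompt}"

@lru_cache(maxsize=1024)  # identical prompts never trigger a second call
def answer(prompt):
    # Crude routing heuristic for illustration only: short prompts go to
    # the cheaper model; longer, more complex ones go to the large model.
    if len(prompt.split()) <= 12:
        return small_model(prompt)
    return large_model(prompt)
```

Production systems use smarter routers (classifiers, confidence thresholds, retrieval first), but the shape is the same: most traffic never reaches the most expensive model.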

David Turetsky:

But even though you're talking about the scale of some of the largest companies in the world that are looking at this, the scale hasn't gotten to a consumer activity base, where everybody's pouring AI into their phones, and thus they need their phones to get recharged more often, or their iPads, or their computers, and that energy suck as well. Because if you're doing that, it's probably not large language models; you're probably doing, to your point before, something smaller, like the Image Playground in Apple's new iOS. That's still going to suck down resources. So your point about, you know, one of your first concerns being sustainability: it's not going to get better, it's going to get worse, right?

Beth White:

You know, I try to study the futurists that are looking at these types of big topics, and yes, it's probably going to get worse before it gets better. So let's just all hope that we have people developing within these technologies who are being more mindful of it, and then we'll get to a better place sooner. But yes, as hardware gets, you know, more sophisticated at leveraging AI as well, it is definitely going to be another draw on it all, because you're right, there isn't the consumer adoption just yet. But it's

David Turetsky:

Not yet. It is. I mean, the newest versions of iOS still don't have, for everybody, the AI functionality that they had been promising in iOS 18, I think it is, which is getting released now, and people are signing up for it now. So I guess the next question within that is: what are your other concerns? Because I still haven't heard you talk about the machines taking over, the bots taking over, yet.

Beth White:

Well, you know, that, obviously, is the

David Turetsky:

I was just gonna say, the concept of the sentient computer, and when computers can think. You know, Sam Altman, maybe a month ago, said during a conference that we're 1000 days away from sentient computing, you know, power being released upon the world. And, you know, that is very nerve wracking, in the sense that, do we really know what we're doing with that playground that we're releasing onto the world? I see regulations in AI that were starting to be more forthcoming starting to be more pulled back now. I mean, the EU has been out putting legislation, and European countries are even adding their own addendums or different types of policies on top of the EU act that's in play right now. But in the US, you know, I live in Texas, and the whole country is the Wild West. You know, in Texas, we usually say it's the Wild West, but AI in the US is very much unguarded, besides a few of the states that have some regulations. Most of the states are either reviewing or they're drafting legislation to either put limitations on AI or at least give privacy limitations, if not copyright limitations, which is another gigantic problem right now in the world of AI. They're trying to give those kinds of at least initial pieces of legislation.

Beth White:

For sure. I've been following all the legislative acts that have come from different states. Again, I was doing research for a conversation, a presentation I gave in Hawaii, and I think at that time there were 40 states at least introducing bills, or again in conference to start drafting bills around it, like you mentioned, and many that are still being drafted and may not come for years to come. So it's going to happen at some point in the US. So what do we do now, in this middle ground, between having access to the technology without a lot of, you know, constraints? It really requires us as individual people, you know, to be smart about leveraging the technology, whether it's for our own usage as consumers or within the workplace. And that's where, you know, frankly, the role of HR has such an important part to play within these conversations as companies are looking at creating AI governance policies. Some have them. Some do not. It is complex to go through, and probably what you have today may not even be what you need six months from now. So this is kind of leaving some organizations a little hung up from a business standpoint, because there's not as many types of rules to follow.

David Turetsky:

But the problem is, in the absence of rules, you're gonna have a lot of people who have downloaded ChatGPT, or at least gotten a login, possibly downloaded it to their phones, and have started drafting requests or prompts in ChatGPT which potentially might contain confidential information, not just within the context of the things they're saying, but in the context of where they're doing it from, and what they signed up with. So, you know, if you're from XYZcompany.com, and then you put in a request in ChatGPT that you want to develop a new compensation philosophy, for example, it knows it was from you, and it knows what you're asking. So it's, you know, in that case, a little bit daunting, at least if you're IT trying to police all that.

Beth White:

For sure, because I do think that's exactly what's happening right now. If companies have said, hey, we don't know enough about AI or generative AI to give everybody carte blanche to use these large language model tools, they're saying no; well, people are just going to be doing it on their own. So in my mind, it seems smarter just to put some levels of guidance in place and give people access to testing out this type of technology within the workplace in a safe, more controlled way, so that you can, you know, ensure that any company information that may be sensitive can be at least protected, if, of course, you have your own type of subscription with the provider, etc. So I'm hoping companies are moving toward that, because people are curious. They want to test it out. It's fun, you know? I mean, once you get started, you start to see it as an instrumental, you know, tool that you want to use daily in your work.

David Turetsky:

But with all the caveats that, you know, you have to be careful about things. And to the extent of which, some companies are developing walled gardens where people can play in a safe way, without thinking that it's going to get outside those walls. It might be safe. But again, on the consumer side, I think people are just... I don't want to say they're ignorant of it, because I don't think they know enough to be ignorant, if that makes any sense at all.

Beth White:

It absolutely makes a ton of sense. It's the more we can do to educate people on, like, how does this work? Because right now, you see, any one of these large language model user interfaces is just this little prompt box, and you have no idea how the magic happens behind the scenes. It would be very cool if there were ways that people could start to understand, if sometimes this type of technology would say, I've produced this answer to this question because here's what happened behind the scenes to get you this information. Giving people a little line of sight to what the technology is doing: how it may be sourcing and scraping publicly available websites, using the human natural language cues you gave it to deduce down, how much a difference in your verbiage, prompt engineering, is going to impact your results. The AI almost needs to teach people how to use the AI and surface what's going on behind the scenes, so that we can all understand the risks and see the rewards as well.

David Turetsky:

You know what it reminds me of, Beth? Maybe you don't remember this. But if you remember fifth and sixth grades, when we were taught ibid and op. cit. creating our book reports: what were your sources? Sources? Oh, you mean, where did I copy this from? And literally, that's what you're saying. You're basically saying, where did you reference this information? Because what I found out when I was at HR Tech was, sometimes the AI makes stuff up based on things that it's been trained on, and it basically kind of reads between the lines, which it really shouldn't do.

Beth White:

That is so true. I mean, I love your example, because, you know, I was a history major in college, and I used to write a ton of reports where you know you're having to cite every little source you ever use.

David Turetsky:

Oh, my God.

Beth White:

And it's amazing, because you just magically get this information, yet you don't know the source. Sometimes I have received citations in the responses I've received back, or I ask for it as a prompt, you know: show me where you received this information from; I need to be able to cite it. Because I do think people should dig more. It's just like, you know, anything from the media or reading a news article: you want to know, where did this all come from? And getting that validation of the information is something we all should ask for as consumers of AI.

David Turetsky:

I think one of the things that we were going to talk about during this podcast is actually having people who are really kind of looking at the answers or looking at the requests, real people being able to provide input on whether or not the models are trained appropriately and the responses are accurate. So how do we get to a place where people are monitoring the AI?

Beth White:

Yeah. I mean, today, even at, you know, the companies that are hosting and managing and creating the large language models, there are people behind the scenes, right, that are seeing the results of prompts that have been sent to the engine, and they're making, you know, human-assisted guidance on some of the outliers or things that require more training. And so, with that said, the more that you can have diversity, frankly, in the people that you hire for these particular roles... That's why I'm part of a group called Women Defining AI, because only about 25% of the people involved in AI are women. And if you have more of a gender, ethnic, race, economic, societal balance in the people who are behind the scenes training these models, they'll start to not do what happened in one of the more famous cases from years ago, where Amazon had a process of scanning resumes in the early years of AI: if you were using words like "code ninja warrior" as a developer, you got surfaced to the top of the pack, you know. It may have been a male-centric development team that thought that was a good word, you know. And when you have other people who have lenses on it, who are like, well, I, as a woman developer, would never call myself a ninja code warrior, that gives people a little bit more grounding. So it's really that whole term, grounding the AI: human beings have to help ground the AI, and the diversity in the people working to ground it will help create, you know, better solutions that we can be more comfortable with, that do not have the biases and the discrepancies that we would rather not surface.

David Turetsky:

Hey, are you listening to this and thinking to yourself, man, I wish I could talk to David about this? Well, you're in luck. We have a special offer for listeners of the HR Data Labs podcast: a free half hour call with me about any of the topics we cover on the podcast, or whatever is on your mind. Go to salary.com/hrdlconsulting to schedule your free 30 minute call today. So I guess that leads to the next question, which is: what are your hopes for AI, for the future?

Beth White:

Well, what I love to see is more and more people, you know, getting involved and being part of what I call almost like the front lines: people who are seeing some of the technological risks and are out there, if, you know, needed, to prevent some of the issues that could be forthcoming, in a legislative sense. So that's great, seeing that legislative kind of arm starting to happen a bit. Being in Austin, you know, we have a community called the AI Alliance, which is a group of individuals who have all come together from both business and the education sector, as well as even, you know, public service. And so what we're attempting to do is say, hey, AI is not going away. It's going to be part of a lot of different jobs and roles in the future, and the more we can start doing education, even at the junior high and high school level, and then get programs into even the Texas Workforce Commission and into community colleges and other training centers to help educate and train people on AI today, we'll be able to have a more employable workforce for the future. Which I think is pretty powerful.

David Turetsky:

Well, it's not even just an employable workforce, it's also a more educated consumer. Because, as we both know, the AI market isn't just about getting work done, it's about buying stuff, you know, like Alexa.

Beth White:

That's right. Because you have one sitting right by! Yeah.

David Turetsky:

But the ability for us to actually have it be a part of our lives... Alexa has forever been that thing that you could ask, you could generate a prompt and ask it to put stuff in your basket. And I can't use those words together, because it will do that! ...will enable us to utilize, whether it's the Apple Watch I have, or, you know, the phone that's sitting next to me, and be able to be better consumers of the things we already have living around us!

Beth White:

And we've been using AI for a number of years now. I mean, if you think about it, when Amazon first started as a company, you know, they were a bookseller, and then they started to recommend books based on other books you read. Well, that was an algorithm, right? That's the basis! And using mapping technology: who travels anywhere without launching some type of, you know, Google Maps or Apple Maps, what have you? I mean, we just don't. And Waze is a great example of human-assisted training of AI, because when Waze came about, it's like, there's an accident, and you would report it, and that was data going into the algorithms to help you with your trip. But a lot of times it was happening so naturally around us, we didn't really know what was behind the scenes. So again, those are, you know, other areas of concern. But I do see hope in, you know, the participation level, from all different genders and ages of individuals embracing AI. And again, I just can't stop harping on: we have to have diversity. We have to have, you know, gender. We have to have the ages, from teenagers on up. I introduced my dad to, you know, Siri, because he never really learned to type well, you know. But he can talk a text like nobody's business. So

David Turetsky:

And actually, the language models that exist today are actually not terrible. They're actually pretty good at being able to hear someone's voice, whatever dialect they have, whatever intonation they have, and be able to pretty accurately transcribe what they're saying in a relatively short period of time. And that's just amazing, because I was in... Dragon, NaturallySpeaking, I think, I forget the Dragon technology's exact name, but I used that decades ago to try to write a book, and it was awful. I spent more time correcting than I did actually talking. So now I can do it easily. I can actually talk into my computer or my phone or my watch, and it does a really good job.

Beth White:

It is amazing; it has improved so much. But it improved because you kept using it, right? And the more you used it, the more it got to know your voice and your intonation and your pronunciation. So it's just a matter of: don't give up, people, if you're trying AI and it doesn't quite give you the right results the first time. You just have to try and try again. And we're not always used to experimenting, and that's also been a challenge in the HR profession. Starting my career in HR in the early days, if you released a payroll system, you'd better be exact to the penny on everyone's payroll. And so we were never taught that you could try technology and fail, because the failures were pretty risky, and what AI's

David Turetsky:

They have compliance issues.

Beth White:

You know, you don't want a million phone calls about someone's paycheck being off, right? And so you worked very hard to use technology to be exact and to be detailed and to use it as prescribed. And now we have this technology that is much more open and loose, and that's where it's a little harder sometimes for people to make that transition, because of the training that they've had in the past on how to adapt and use other types of systems inside the workplace, or even for personal use.

David Turetsky:

Yeah, if I can add on one hope that I have for AI.

Beth White:

oh, yeah! What is your hope? yeah

David Turetsky:

It's that AI becomes another tool that HR naturally gravitates to, and also that, like you mentioned, we're encouraging those schools, the community centers, the community centers where the elderly are, everybody, to try and adopt education and training courses to be able to raise the level of acumen of the populace. Because AI is surrounding them, you know. Forget about the Will Smith movies; forget about, you know, the things you watch on TV. Let's cut through the noise and educate people on what it is, why we're living with it today, and how we can utilize it, so that it becomes a partner with us at work, it becomes a partner with us at home, and we realize the benefits of it. Instead of it forcing itself on us, or legislation, which is really probably meaningless, getting passed that says, we're going to limit how it works in your world. That's never going to happen now, because it's in our technology. But that's my hope: that people become more educated and they realize what it is before it becomes too late.

Beth White:

Yep, I wholeheartedly agree. That's where we're at: a lot of need for education. And at least I've seen a number of great courses. If you want to learn, there are so many opportunities, for free too. So

David Turetsky:

Oh man, hopefully people pick up on it and actually take it. Because it's one thing to offer it; it's another thing to get the impetus to go do it.

Beth White:

For sure. And that's why, if it just becomes commonplace in things you're already doing, and the goals are always to augment the humans with AI assistants that make their lives easier, there are just a lot of different natural ways to bring that into workplaces. And as we know, it was done within our phones before we even knew it was happening, right?

David Turetsky:

Yeah, exactly. Well, at some point soon, that AI Overlord is gonna employ us to help the bots.

Beth White:

Yeah, it is forthcoming. So

David Turetsky:

That's what's happening! Beth, thank you so much for joining us. Really appreciate it. Your insights are invaluable, and because you're swimming in it on a daily basis, maybe we'll reach out and have you back on the program.

Beth White:

Sounds great. David, thanks for the opportunity, and it's good to see you again.

David Turetsky:

Good to see you too, and thank you all for listening. Take care and stay safe.

Announcer:

That was the HR Data Labs podcast. If you liked the episode, please subscribe, and if you know anyone that might like to hear it, please send it their way. Thank you for joining us this week, and stay tuned for our next episode. Stay safe.
