Insights@Questrom Podcast

Transforming Employee Development with Generative AI

Boston University Questrom School of Business Season 2 Episode 4

Unlock the secrets to revolutionizing workplace learning with insights from Chris Dellarocas and DK Lee. In an era where AI's capabilities are soaring, they illuminate how generative AI is not just enhancing but personalizing the learning and development experience for employees. Imagine a world where your training is tailored in real-time, adapting to your needs, and offering immersive coaching that's just a cut above. Our conversation delves into AI's role in evolving training scenarios across industries, from sharpening sales techniques to navigating the complexities of environmental compliance. 

As we navigate through the podcast, we confront the pressing ethical considerations of AI in learning environments, probing the safeguards necessary to combat biases and protect data privacy. Chris and DK stress the need for continual evaluation of AI systems to ensure they serve our interests rather than mislead us. Shifting gears to the dynamic landscape of education, we discuss AI's potential to customize learning at scale, providing real-time updates to curricula, and offering invaluable personalized feedback. This episode is an eye-opener to the challenges and immense possibilities that AI brings to both corporate training and academic education, urging us to strike a balance to harness its full potential.

Chris Dellarocas:

Education, education, education. I mean, it's very important and that applies to everyone, right? It applies to our students, to us, to every kind of worker. We need to be educated about what AI can do and what the limitations are.

JP Matychak:

On this episode of the Insights at Questrom podcast, we take a closer look at the impact that AI could have on organizational learning and development, including personalized learning paths. Coming up next. Hello everyone and welcome back to another episode of the Insights at Questrom podcast. I'm JP Matychak. In this episode, Insights at Questrom contributor Shannon Light sat down with Chris Dellarocas, associate provost for digital learning and innovation at Boston University and Richard C. Shipley Professor of Information Systems at the Questrom School of Business, as well as DK Lee, Kelli Questrom Associate Professor in Information Systems and Computing and Data Science. Chris and DK shared their thoughts on the rise of generative AI, its potential to transform employee learning and development, and how organizations can navigate challenges responsibly while still leveraging the advantages of AI-powered learning. Here's Shannon Light.

Shannon Light:

Thank you both so much for taking the time to be here today.

DK Lee:

Our pleasure. Great to be here.

Shannon Light:

Chris, I really enjoyed reading your article in Harvard Business Review, which was published back in December, discussing how generative AI could accelerate employee learning and development, and, based on both of your backgrounds, I really wanted to explore this topic a bit more, and I figured it might be best to begin by just discussing how generative AI is transforming the landscape of employee learning and development.

Chris Dellarocas:

Sure, I can start the conversation. I believe that generative AI will really shake things up in this space. It's like having a super smart assistant that understands each employee's unique needs and tailors the learning experience just for them. Traditional learning methods, you know, they're more like a one-size-fits-all approach, but with generative AI it's more personalized. Imagine a platform that knows exactly what skills you need to work on and serves up content that's just right for you. And there's more: it keeps everything up to date.

Chris Dellarocas:

In a lot of fields, knowledge, approaches, laws change all the time. Learning materials can get outdated pretty quickly, but with AI, it's like having an always-on editor who keeps the content fresh and relevant without any effort from us. And lastly, AI can also be very interactive. It can step in and offer guidance, answer questions, and clarify what people don't understand. It's like having a personal coach, making the whole learning process more interactive and engaging. So, in a nutshell, generative AI can make learning more personal, more up-to-date and more hands-on, and I think it's going to really shake things up in this space.

Shannon Light:

DK, do you have anything to add there?

DK Lee:

Yeah. So Chris did a great job at summarizing a lot of the potential. So if I were to talk about something that's not been talked about right now, what I'm interested in or very much fascinated by, it's not here yet, but we have all these text-to-image models. Text-to-text is the LLM, but there's text-to-video and all these text-to-multimodal models, and I'm sure some folks are working on text-to-simulated-world models. So imagine, like in Star Trek, the holodeck. If you can generate these virtual realities, you can then tailor them. You can think of situations where you can teach employees. If you're a firefighter or a police officer and you have to go through bias training or situational training, instead of it being an actual exercise, which might be slow and which you need to make time for, you could train a bunch of these models to generate this. I don't know when that will come, but that might be very interesting.

Shannon Light:

That is really interesting. I know you just touched on it a bit, but, chris, are there real-world examples of how organizations are currently using Gen AI in learning and development?

Chris Dellarocas:

There are examples of how organizations could be using it, for the most part. Let me just give you some. Suppose that you are a marketing and sales company. You want to train your sales professionals in the latest and greatest sales techniques. You could use AI to scan the work history and see what approaches and techniques each individual has used already. Then you can tailor the training material to specifically what they need, what they haven't used, what they haven't used enough, what they need to know. Similarly, if you want to train your programmers, AI can scan the code they've written and assess their proficiency in different tools and languages, then automatically tailor the training to their precise skill gaps and skill needs. That's one way in which AI can personalize training.

Chris Dellarocas:

Then again, let's consider a field where things change very rapidly. For example, environmental law. There are a lot of regulations that change all the time. You can have AI that adapts the training materials to make sure that your employees are trained on the latest set of regulations and laws. Or digital marketing, where there are always new approaches that become trendy and maybe some other approaches that become less effective as the world changes. Again, you can have AI that is adapting the materials, so that every time you train your people, you make sure that they get the latest and greatest training that corresponds to leading-edge, state-of-the-art approaches.

Shannon Light:

That's great. Yeah, thank you for elaborating on that. It's interesting to think about how this technology can also just help in general, now that the workforce has changed with the pandemic, with working from home and training employees in different settings. I don't know if this is something you have explored further, but also how the experience of onboarding employees may change and how gen AI could help in that sense. I found it interesting being somebody who was onboarded fully during the time of the pandemic. It was really interesting and impressive because, of course, people were learning as we went. But that's something that I'd love to hear both of your thoughts on.

DK Lee:

Yeah. So if I can go first, I think one of the first real-world use cases in companies was for onboarding. A lot of times they had these SharePoint sites or internal wikis or databases where companies have basically been keeping track of and recording things. Then you can train gen AI models to soak up that information and devise an initial onboarding process for new hires and whatnot. This has gotten to the next step: last year there were lots of startups and companies trying to utilize this as a co-pilot, right? Like a lot of call service centers. First it was B2C, consumer-facing firms. When they're talking on the phone or chat, they have this co-pilot, a gen-AI-trained helper, so that instead of searching through the knowledge base manually, the call center person can just type and get a real-time answer on the spot, where it's there with you, a co-pilot, right? Like Microsoft Copilot, helping you solve issues in real time. That was one of the first examples.

Chris Dellarocas:

Yeah, I fully agree. I think, to me, the big promise is just-in-time training. Training which is integrated into the daily workflow, so it's like having a learning buddy which is right there as you work. Traditionally, most companies separate work from training, right? Either you work or you go into training. Now, with AI, you can actually merge the two. So it's basically like having somebody over your shoulder, and as you do a task, when the AI decides that you need some help, whether it is a training document, a chatbot that you can actually go and interact with, or a drill, it can just give it to you right there. Right, so this makes things efficient, makes things engaging, and it's fantastic for onboarding. I mean, you know, it can really make onboarding and transitioning to a new role way smoother and faster. So learning on the job, but with a virtual safety net.

Shannon Light:

Absolutely. I'm curious, too, about specific industries. I know that you discussed the use of gen AI in creating immersive training simulations, but could you explain a bit more about how it enhances the training experience in high-stakes professions?

Chris Dellarocas:

DK, you want to take this?

DK Lee:

Oh yeah, okay. So a lot of companies have, maybe, an involved process where it's, as you mentioned, high stakes. If you make a mistake, it's going to cost the company millions of dollars. I think it's a small proportion, but they used to have these virtual reality trainings, where they would use, I think, the Unity engine to actually come up with a virtual reality, and the employees can actually go in and use it on a virtual reality headset like the Oculus, right? And it used to take a lot of time to make this world and the scenarios. It's like making a game, right, and they use the game engine to do this. So imagine a situation where gen AI can just do this on the fly.

DK Lee:

Obviously, there may be some polishing that needs to be done professionally right after, but then you can iterate, and you can think about a combinatorial explosion of all different kinds of scenarios, right? Take the same firefighter example, because it's high stakes, right, and maybe you're training a new firefighter. There are companies that provide these virtual reality training worlds and games, so to speak, but that's limited by the designers, the virtual reality designers and their time. They need to make these scenarios and then render it all and everything. If this text-to-virtual-reality kind of thing comes in, you can just generate lots of different combinations faster. Obviously it won't be instant or quick or whatever, but it'll now be more possible to go through different scenarios. That's what I meant.

Chris Dellarocas:

Yeah, I fully agree. I mean, a lot of the current generation of simulations, even those that employ virtual reality, fall short in terms of variety and realism. They usually use a small number of predetermined scenarios. With gen AI, as my colleague said, the space of scenarios becomes exponential.

Shannon Light:

And it also makes me want to bring up the point of, of course, the challenges that are associated with the use of AI, especially in learning and development. We hear a lot about data privacy and potential biases. How can organizations navigate these challenges responsibly while also leveraging the benefits of these AI-powered learning tools?

Chris Dellarocas:

Oh yeah, I mean, there are quite a lot of challenges and, yeah, let's make clear that with the tremendous promise come tremendous challenges as well. I mean, first of all, the kind of data that is needed in order for AI to deliver this personalized training, you know, addressing skill gaps, et cetera, et cetera, is highly sensitive. It really is performance data of employees and work products of employees, so this needs to be really safeguarded. Companies that attempt to go down that path need to be super careful about how they handle this data. They need to set up strong data governance policies and be very, very clear with employees about how this data is used. They need to be transparent and they need to have, you know, beefed-up security and, of course, they definitely need to comply with data protection laws. That's non-negotiable.

Chris Dellarocas:

The next set of issues is bias. Right, this is a tricky one. I mean, the thing with AI is that it learns from the data it's fed, and if that data has biases, the AI will too. So organizations need to be checking AI systems, like giving them a routine health checkup for biases, and I think it's important to have feedback mechanisms where, you know, the trainees can report any situation where they perceive that content is biased, so that the system can self-correct. It's a tricky one, of course. We cannot just set AI loose and forget about it. It works best when it's paired with human oversight. So think of AI as a super smart system that still needs a bit of human guidance; that way you can catch any weirdness that the AI might miss.

DK Lee:

I think we have more issues than we can talk about in this webinar, but I think the first thing might be on the user side. There might be overreliance: if people use it once and they get results that they like a couple of times, they keep relying on it, and in fact there has been documented research on overreliance on these gen AI tools despite their hallucinations. I mean, everybody knows about the hallucination, but there was a study comparing people's usage of ChatGPT search versus Google search, and they ran a bunch of experiments and found out that, for example, people spend less time getting similar results using ChatGPT compared to Google. But we know ChatGPT hallucinates, and these users, even though they were told about the hallucination, a large fraction of them still didn't care. So even if you tell them, they might misuse it. And the worst thing is, even if you're an expert, to go and figure out whether something is a hallucination or not takes additional time, which you might not spend.

DK Lee:

One anecdote is that I was writing a paper and I was just talking with a particular LLM. Actually, I use all three big ones: ChatGPT, Bard and Anthropic's Claude 2. And there was a book that I read that I used in one of my research projects, a book on concepts, actually, and I was forgetting some facts, so I was asking one of the tools, and it actually created a very realistic-looking citation by an existing researcher, and the concept looked really interesting. I'm like, I don't remember this, what is this? And then I saw something was wrong and I looked through the book. It's not there. It was able to synthesize a very realistic-looking thing and present it like a fact, and that's the danger. I actually had to spend a lot of time going through the book to make sure. So that's one thing.

DK Lee:

On the user side, there's hallucination, and many people might be unaware of it and over-rely on these tools. Another thing that I think is amusing: there is a paper by a colleague, Adrian Ward. It's in PNAS, titled "People Mistake the Internet's Knowledge for Their Own." Basically, when people use Google, they feel intrinsically more knowledgeable than they actually are. I'm probably butchering the results, and there are more results there. But thinking about this and combining it with the paper that I just mentioned: when people get used to ChatGPT, they might get an even greater sense of inherent knowledge, because things are right at their fingertips, even though the result that you get from ChatGPT might be totally hallucinated and false. So you might have a whole new level of the Dunning-Kruger effect going on, unless, you know, people are warned enough times.

Chris Dellarocas:

So yeah, I mean, education, education, education. It's very important and that applies to everyone. It applies to our students, to us, to every kind of worker. We need to be educated about what AI can do and what the limitations are.

Shannon Light:

Absolutely, and to that point, I'm interested to see how this may play a role in education for students who have these advanced technologies right at their fingertips, how it may affect the teaching that goes into the actual curriculum and the integration, or if some professors moving forward may not want to incorporate the technology. I'm wondering from you both: how do these ideas apply in more traditional education, and what may be offered right here at Boston University's Questrom School of Business?

Chris Dellarocas:

Okay, maybe I should take a first cut at a few things. So in my article I emphasized the potential of gen AI for personalized content, for adapted and up-to-date content, and for feedback and interactivity. All three of these aspects can play a role in traditional education. So, for example, imagine if you're teaching a class in data science and SQL, or in history, whatever, in any topic, and then you use AI to give your students personalized practice questions every week that are tailored just for them. AI can analyze students' performance, interests and even their preferred way of learning, and then it can customize the practice questions that each student is working on every week.

JP Matychak:

It's like having a personal academic advisor.

Chris Dellarocas:

You still need the professor, but you actually make it even better, right? You can make it like a personal private tutor. The second thing is, we can use those tools to make sure that our curriculum is fresh and relevant, and actually use them every semester to keep it up to date, or at least to reduce the effort of adapting it. Things change so rapidly in a lot of fields, and the effort of keeping things up-to-date manually is just staggering; sometimes things move faster than we can adapt our materials. AI can help us stay on top of things. And then the third thing is feedback and assessment. Just imagine one of our large lecture classes with 100 or 200 students, right? It's very difficult for the professor and the TA to be able to answer every student's question, whereas AI can actually have a conversation.

Chris Dellarocas:

Or take feedback. You can grade assignments or provide feedback. You get your assignment back, graded, whether by a human or by a machine, but then you can actually ask questions. You can say, can you explain this to me? Can you tell me why this is an error? Can you provide another example so I can understand it better? I mean, this is a great complement. It can really allow us to improve the quality of what we give our students, and supplement and enhance the value that professors and teaching assistants give to them. And, of course, anything we do around AI educates students about the technology, helps them understand it and become familiar with it. So we are preparing them for the world that's coming.

DK Lee:

Yeah, I think Chris covered all the great stuff. One thing that I think we need in this day and age: not accounting for these tools is a mistake, because people will be using them anyway. Instead, I think the single most important tool or skill that we need to teach students is how to verify, confirm and validate. If that's done well and you give them this tool, once they're equipped with this ability to validate, verify and confirm, each and every one of them could just progress at their own speed, right? So I think that's the single most important thing.

Chris Dellarocas:

Yeah, that's the holy grail in education. In a way, the model of having a group of people who move in lockstep, where at the beginning of every session we're all in the same place and at the end we're all in the same place, is a fallacy. Every person moves through learning at their own individual pace, but so far it was not practically and economically possible to essentially assign a personal tutor to everybody. AI can help us get close to that, right? A personal tutor that doesn't judge and doesn't get tired.

Shannon Light:

It's great to hear both of your perspectives on this, as you're researching this and really in the weeds with everything.

Chris Dellarocas:

I mean, I'd just like to reiterate what DK said. The promise is tremendous, but the challenges are also pretty substantial, and I think the trick, especially in education, is to introduce AI in a way that assists and enhances learning and avoids overreliance on it. It's the classic case. I mean, the first issue we had with gen AI is that students would use it to plagiarize, but in my opinion, plagiarism is secondary. Our role is primarily to help students develop skills and secondarily to assess them. What really worries me is that if learners rely on AI in the wrong way, they will not develop the competencies they're supposed to develop. So the question is, how do we adapt the way we teach so that we can introduce the technology, reap the benefits, and still motivate and help our students develop the competencies that they come here to develop?

DK Lee:

Yeah, just to elaborate a bit and list out the problems that we have now, other than the hallucination problem: if you have conflicting information, how do you resolve that? It's not obvious, and that's a hard problem.

DK Lee:

Another big issue is, once these LLMs and gen AI have gotten sort of mainstream, everybody is suing everybody over data and everybody is trying to protect their own data. Data has gotten another meaning after that. So in that case, how do you, going forward, sustain this? Every organization, person, country, whatever, might be pushed to protect data and make everything private, thinking this is the gold, right? The more that happens, the less training data future gen AI will have to work with. That's another worrying thing. You know, like the New York Times suing over gen AI; this has been happening with artists and authors, right? Another thing is, in a company, how do you get a contributor, like that one contractor that makes a living by being the only person who knows how to do X, Y and Z, to give that up when it means they might be replaced, right? So this kind of motivation around data is another thing.

DK Lee:

Where that equilibrium will be, and how you make it sustainable, is another big issue, I think.

Shannon Light:

All of that is really interesting, and what really stuck out is, of course, the data privacy, and data becoming not as easily available. Honestly, myself being a marketer, it's something that I think about too. You know, we're using this data to help reach our audience and help our clients reach their audiences. So all of that is definitely a big question mark as we go and see these technologies progress. It's super interesting, and I'm excited to see how these tools are integrated into learning and development, to all your great points, Chris, that you brought up about the processes, especially in education. It's all fascinating, truly. So thank you. Thank you both.

Chris Dellarocas:

Glad to be here, thank you.

JP Matychak:

Well, that's going to wrap things up for this episode of the Insights at Questrom podcast. Thank you again to our guests, Chris Dellarocas and DK Lee, and thank you to Insights at Questrom contributor Shannon Light. Remember, for more information on this episode, all of our previous episodes and additional insights from Questrom faculty, visit us at insights.bu.edu. So long.