Conversations on Applied AI

Dr. Elizabeth M Adams - Harnessing AI Responsibly

Justin Grammens Season 5 Episode 7


Today, we're talking with Dr. Elizabeth M. Adams. Elizabeth is the Chief Engagement Officer at the Minnesota Responsible AI Institute and the CEO of EMA Advisory Services, where she curates knowledge-exchange initiatives with global leaders on the critical importance of responsible AI. She believes that responsible AI equals a better Minnesota, to both preserve our state's natural resources and promote sustainable economic growth.

She's an author, TEDx speaker, and keynote speaker sharing her Leadership of Responsible AI framework, which helps organizations sustain employee engagement. And I'm super excited for our listeners here to learn more about this framework and how they can apply it within their business and professional career.

If you are interested in learning about how AI is being applied across multiple industries, be sure to join us at a future AppliedAI Monthly meetup, and help support us so we can continue to put on future Emerging Technologies North non-profit events!

Resources and Topics Mentioned in this Episode

  • Becoming Invisible in the Age of AI
    How individuals and communities risk being overlooked or "invisible" as AI systems make decisions that affect their lives.
    Learn more about algorithmic invisibility
  • Discovering AI Bias and Building Ethics Principles
    The process of uncovering bias in AI and the importance of establishing ethical guidelines for development and deployment.
    Read about AI ethics and bias
  • How Bias Creeps into AI Systems
    A discussion on the subtle ways bias can enter AI models, from data collection to algorithm design.
    Explore how bias enters AI
  • The Urgency of Responsible AI Adoption
    Why organizations and society must prioritize responsible AI practices now, not later.
    Responsible AI: Why it matters now
  • Linking Responsible AI to Natural Resources
    Drawing parallels between responsible AI stewardship and the management of natural resources.
    Responsible AI and sustainability
  • Elizabeth's Journey into Responsible AI
    Dr. Elizabeth M. Adams shares her personal path and motivations for focusing on responsible AI.
    About Dr. Elizabeth M. Adams
  • The Leadership of Responsible AI Framework
    An overview of frameworks and leadership principles guiding responsible AI implementation.
    Responsible AI leadership frameworks
  • Community Engagement and Launching EMA Advisory
    The importance of community involvement in AI and the founding of EMA Advisory Services.
    EMA Advisory Services

Elizabeth: [00:00:00] If you are behind in AI, you're not just behind, but you can become invisible. I don't want anyone in the state to be invisible. I want us all to have some sort of awareness, but the Institute's around to kinda help bridge that gap: see where we can help create jobs, see where we can help offer training, insights, or awareness.

We have frameworks that we have to kind of move the conversation forward in a way that doesn't disrupt an organization's business, but can kind of help organizations seamlessly think about adopting and operationalizing responsible AI across the organization.

AI Announcer: Welcome to the Conversations on Applied AI podcast, where Justin Grammens and the team at Emerging Technologies North talk with experts in the fields of artificial intelligence and deep learning.

In each episode, we cut through the hype and dive into how these technologies are being applied to real-world problems today. We hope that you find this episode educational and applicable to your [00:01:00] industry, and connect with us to learn more about our organization at appliedai.mn. Enjoy.

Justin: Welcome everyone to the Conversations on Applied AI podcast.

Today we're talking with Dr. Elizabeth M. Adams. Elizabeth is the Chief Engagement Officer at the Minnesota Responsible AI Institute and CEO of EMA Advisory Services, where she curates knowledge-exchange initiatives with global leaders on the critical importance of responsible AI. She believes that responsible AI equals a better Minnesota, to both preserve our state's natural resources and promote sustainable economic growth.

She's an author, TEDx speaker, and keynote speaker sharing her Leadership of Responsible AI framework, which helps organizations sustain employee engagement. And I'm super excited for our listeners here to learn more about this framework and how they can apply it within their business and professional career.

So thank you, Elizabeth, for being on the podcast today. 

Elizabeth: Thank you so much. Super excited to be here. Thank you for the invitation. 

Justin: Awesome. Well, I mentioned a little bit about, you [00:02:00] know, where you are today, but one of the first questions I'd like to ask people is, you know, how did you get to where you are today?

What was maybe the trajectory of your career? 

Elizabeth: Yeah, that's such a great question, because I have been in the technology space for over 25 years, and I'll say I got into the AI space about 10 years ago. But prior to that, I led large-scale initiatives, teams of up to 200, budgets of up to $54 million. So I really, really loved the work that I was doing as a systems integration lead for an organization.

What happened, Justin, about 10 years ago, is I started to see AI make its way into society, and I also realized that it wasn't really working for everyone, specifically people who might have my lived experience, and I wanted to explore that. So I started just basically asking questions. I started asking data scientists:

What is AI bias, and how might bias creep into the training of a model and therefore the output of an AI system? And I began to learn more and began to host these learning [00:03:00] events. I was working for an organization at the time where I was exploring that, and in one of the learning events with the chief privacy officer and general counsel, they invited me to participate in building the first AI ethics principles for that organization.

And I was like, oh, there might be a future for me here, because, one, I love to kind of think about how we as humans are impacted by advancing technology, but also there might be a way for me to help organizations think two and three steps ahead about AI. So that was part of it. And then I began to work in community, immerse myself in community, to understand how organizations, or the city of Minneapolis and residents, were dealing with AI and AI impacts.

Got a couple of Stanford fellowships, and then launched my own EMA Advisory Services to start working way at the top of the lifecycle on: what if we engaged more people in the lifecycle, people that are not necessarily technical [00:04:00] wizards, engineers, data scientists, and so forth? And so that's what I did.

And then I wanted to pursue that further, and that's why I got a doctorate in Leadership of Responsible AI, because I wanted to understand what happens in these organizations where responsible AI advances very quickly when there is a culture of responsible AI. What is the employee experience? What's the leader experience? And what's the organization experience?

And that is where I developed my Leadership of Responsible AI framework.

Justin: Awesome. We'll unpack that framework here as we get going, but even just at a very sort of grassroots level, maybe, just to kind of bring it back to 101, 'cause some of the people here maybe aren't really familiar with how bias gets into these models.

Right. And maybe we can start out there, as you said it wasn't working for everybody. Like, what were the things that you were seeing, uh, that were sort of like red flags that were going off in your head as you started exploring this?

Elizabeth: Sure. So there's a doctor, Dr. Joy Buolamwini, whose pioneering work around bias in facial recognition was what [00:05:00] I was introduced to first.

She had a YouTube video, but prior to that, I just started to see it being around facial recognition. And so several years ago, the outputs of some of the facial recognition systems did not accurately identify women and/or people of color. And so part of the way that I learned that bias creeps into these systems is, one, the data.

Where's the data coming from? How is that data being trained? Who is training the data? Are they able to catch these issues? What are the thresholds for some of these outputs? And then I also learned about data annotation. So before an organization procures or obtains data, who is associating certain images with an object in a picture?

So a gun with a darker-skinned person, or an infrared thermometer with a lighter-skinned person. These are some of the use cases that came out of my Stanford fellowship, where we were able to play with some facial recognition systems. And so it starts with [00:06:00] data. It starts with data annotation. It also starts with data procurement.

And then there are things inside the organization, such as bias testing that can be done, or applications that can test for bias. And then all of that happens through governance. So that was really what got me started and interested in it. And then I read a study from, I think it was the University of Georgia, where there were autonomous vehicles that were also unable to identify darker-skinned people.

And so all of this was very concerning to me, and I wanted to kind of, as I mentioned, explore what was happening because I understood what the results would be. 
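The kind of in-house bias testing Elizabeth alludes to can be made concrete with a small sketch. This is a hypothetical illustration, not a tool she names: it compares a model's accuracy across demographic groups on toy data, and the 0.1 disparity threshold is an arbitrary assumption.

```python
# Minimal sketch of group-disparity bias testing: slice a model's labeled
# outputs by demographic group and flag groups that lag the best performer.
# Group names, records, and the threshold are all illustrative.

def group_accuracy(records, group):
    """Accuracy of predictions for one demographic group."""
    rows = [r for r in records if r["group"] == group]
    if not rows:
        return None
    correct = sum(1 for r in rows if r["predicted"] == r["actual"])
    return correct / len(rows)

def disparity_report(records, threshold=0.1):
    """Flag groups whose accuracy trails the best-performing group."""
    groups = sorted({r["group"] for r in records})
    scores = {g: group_accuracy(records, g) for g in groups}
    best = max(scores.values())
    return {g: {"accuracy": acc, "flagged": (best - acc) > threshold}
            for g, acc in scores.items()}

# Toy labeled outputs from a hypothetical face-matching system
records = [
    {"group": "A", "predicted": 1, "actual": 1},
    {"group": "A", "predicted": 0, "actual": 0},
    {"group": "A", "predicted": 1, "actual": 1},
    {"group": "A", "predicted": 1, "actual": 1},
    {"group": "B", "predicted": 0, "actual": 1},
    {"group": "B", "predicted": 1, "actual": 1},
    {"group": "B", "predicted": 0, "actual": 1},
    {"group": "B", "predicted": 1, "actual": 1},
]
print(disparity_report(records))  # group B is flagged; group A is not
```

Real bias audits use richer metrics, such as false-positive and false-negative rates per group and intersectional breakdowns, but the shape is the same: slice outputs by group and compare.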

Justin: Which is just continuing to perpetuate these biases, right? Because if, if it's generating data and it's using that, does it kind of just snowball on itself?

Elizabeth: Yeah, it's training itself, right? But then who's involved to be able to catch these? So for instance, if I look at an image and I see that it's problematic because it mirrors my lived [00:07:00] experience, there might be someone working on the technology that does not. And so that's part of why I wanted to go research, to find out:

If you got more of the organization involved, might there be other people that catch things throughout the life cycle? So maybe you are getting data from a source where they cannot verify that it's 100% gold or accurate, right? Where else might you be able to catch some of those issues before the system makes its way into the marketplace?

Justin: Joy's, Joy's book, that is, uh, Unmasking AI, right? Dr. Joy...

Elizabeth: Buolamwini.

Justin: Yeah. And I'll be sure to put, you know, links in the podcast notes off to that book. 'Cause yeah, I listened to it on Audible; I thought it was really, really good. And that book is a number of years old, right? Isn't it? I mean, she, she discovered this issue...

No, she...

Elizabeth: She discovered this way back in 2014, I believe. However, her book is just a few years old. But she also had Coded Bias, there was a movie, and she's done several TED Talks and other types of [00:08:00] keynotes where she's really kind of leaned into that. But for me, that's where it started. But there's also other types of biases that happen.

So with predictive analytics, some people may not get credit cards, or they may not get loans, because of data and how data is being trained. They also might not get the healthcare that they need, because maybe their data hasn't been considered in how the model was trained to make predictions about healthcare.

So it's bigger than facial recognition. It impacts every aspect of our lives. And so that is why I am such a big proponent of responsible AI. And also, Justin, as you know, you and I have talked: now we need to kind of link it. Now that we know more about it, we need to link it to, specifically here in Minnesota, how are we going to protect our natural resources?

How can we boost the economy by getting more responsible AI jobs, and also make more people in the state aware of how to link AI, the use of AI [00:09:00] data, to jobs and to ensuring that we have responsible outputs, and also so that we are future-proofing our natural resources?

Justin: Yeah. Okay. Well, let's, let's dive in a little bit.

You know, as I was looking at stuff that you've written on LinkedIn: that responsible AI equals a better Minnesota. And I think most people think, okay, you know, of course we all wanna have less bias, right? At least I hope they do. And so Minnesota's gonna be a better place if we have more responsible AI and less bias.

But you've kind of taken a different, well, a parallel track, I guess, along the way to, to link it to natural resources. Could you kind of, um, explain to our listeners a little bit more about how you see that working? 

Elizabeth: Yeah, so I think part of what has happened when AI bias first started becoming real popular, we were looking at it as a technical solution, and then we started thinking about AI harm, which was a negative human experience or a negative environmental experience.

So it's not necessarily me kind of coming up with this. It's the natural [00:10:00] extension of us looking holistically across many landscapes to understand how AI is impacting us. And so for me, I'm from Minnesota, love Minnesota, love the fact that I can go to any lake or park. But we are rich in resources: farming, agriculture, fishing, water, and so forth.

And so because of that, a lot of businesses would love to do business with us: data centers because of our water, right? Mm-hmm. Agriculture, because we can help with decreasing some of the fuel with planes and so forth. But what if we were aware of how we are using AI to get to some of these milestones that we want to get to: more efficiency in business, better fueling for whatever.

Better logistics for transportation, of course, operational efficiencies across agriculture. What if we were a more aware state about the responsibilities [00:11:00] that we could have, or that we should have, as AI advances further in our society? Most people are loving AI, right? You and I, we use it for operational efficiencies and to do a number of things, but what if we really thought about how the resources that we have in the state are being used so that we can have these efficiencies?

And that's really what the Institute is about. It's beginning to start that journey of helping Minnesotans across the state learn a little bit more, and then ultimately where it's a priority for our state.

That would be a...

Justin: Gotcha. And how long has the institute been in existence? 

Elizabeth: Just a couple of months. So it's a nice parlay from the work that I've done over the past five years with EMA Advisory Services, where I've talked to a number of organizations across the globe, a number of employees, and my research has been informed by those conversations and the movement that has happened across the globe.

Justin: And you're born and raised in Minnesota, so why not? Why [00:12:00] not start it here? 

Elizabeth: Yeah, so fifth-generation, um, Minnesotan. My family has had a number of servant leaders. I absolutely want to live in a state where we are considering how we're taxing our resources, and also how do we upskill and reskill people across the state, from Angle Inlet to Albert Lea. How do we make sure that everyone gets engaged in this?

Because if you are behind in AI, you're not just behind, but you can become invisible. I don't want anyone in the state to be invisible. I want us all to have some sort of awareness, but the Institute's around to kinda help bridge that gap: see where we can help create jobs, see where we can help offer training, insights, or awareness.

We have frameworks that we have to kind of move the conversation forward in a way that doesn't disrupt an organization's business, but can kind of help organizations seamlessly think about adopting and operationalizing responsible AI across the [00:13:00] organization.

Justin: Oh, interesting. Yeah. You know, as you're kind of speaking here, talking about a couple different things.

Number one is upskilling. So obviously we wanna get involved with, you know, maybe the Department of Employment and Economic Development, you know, DEED-type stuff, right? Mm-hmm. That would be a good place, and I'm sure you already have contacts there you're kind of working through with them.

Elizabeth: Yes, I have contacts. I have been meeting with so many people across the state that are doing a number of different things.

We're in our research phase, we're in our discovery phase. We are also interested in purchasing a training company so that we can scale very quickly and reach as many people across the state as possible. So that's what this is really about: it's about creating jobs that link to responsible AI, that then link to awareness, creating efficiencies, and ultimately protecting our natural resources.

So I'm probably sounding like I'm, you know, beating this over the head, but I really wanna make sure that we're linking responsible AI, not just from a bias perspective, but how do we [00:14:00] engage everyone to ensure that they're thinking past the technology and thinking about how it's impacting our lives.

Justin: Yeah. Yeah. I think that's the one thing I really love about the group that meets for Applied AI is we're really talking about the applications of it. Yeah. We have sessions that are very technically focused, and we have a wide, wide range of people that show up. But with this podcast and the newsletters and the people we get together, I really am focused on:

How is this gonna affect your life? Mm-hmm. Just in general. And AI is this, you know, general-purpose technology, as we call it. It's like electricity. Electricity didn't just single out one specific type of job role. It affected everybody. And that's really what I think we're saying here: AI is gonna be, you know, no matter what you do, no matter what your job is, it's gonna have an impact on what you're doing.

Elizabeth: Same with cybersecurity. Same with cybersecurity. And see how we evolved from that really being something to now, every year, if you're in an organization, you need to take these courses, you need to be reminded of your responsibility, [00:15:00] right, as it relates to cybersecurity and your role in your organization.

Justin: Yeah, sure. So there's the economic side with regards to the employment. The other thing I was thinking about too was just, you know, again, saving our natural resources. So, you know, my office building that we're in right now, I don't own the building per se, but I do know, 'cause I sometimes get the mail and stuff like that:

They're sending us stuff and saying, hey, there's so much room for solar panels on the roof of your building, like, why don't you consider heating your building via solar? And I'm like, huh, the owners of this building, A, probably should. But B, it's one of these things where it's like, within a business, you should be thinking about those decisions.

What investments can I make today to help the next generation, right? And I'm focusing on energy per se, you know, energy consumption. I think you're offering up the same thing. Mm-hmm. It's just, it's knowledge, right? And being able... it's knowledge to be able to compete. It's...

Elizabeth: it's knowledge. It's how do we begin to think about it?

So part of the thing that I have to think about is: where is DEED? Where is the state? [00:16:00] Where are any of these government entities as it relates to responsible AI? And we're very, very excited about what AI can do, and I'm hoping, along with many other people, that we're not just excited about AI, but we are kind of thinking about the next steps before it's too late.

Before it's too late, and before we have to backtrack some things. And so this is gonna take time, because it is a journey. But yeah, thinking about energy consumption, thinking about, let's just say, the Science Museum, and they're at the St. Croix watershed, right? And they're studying water. How awesome would that be if we expanded the job description to think about how they might be using AI to make predictions about

what might be happening underneath the water? Is it gonna be safe to drink in the next five years? And how we might build, I don't know, just a better conversation around the use of AI and water.

Justin: Yeah. Now, one challenge you may be seeing, [00:17:00] or maybe not, is: how can you measure that? You know, some agency might be saying, I've been doing fine without having this AI initiative.

Why should I add this? 

Elizabeth: I think that's a great conversation to have. And so one of the conversations I have is, well, what's the cost of doing nothing? So you're not doing anything; what's the cost? And then we begin to have those conversations. And it's different for every organization and every industry, because they have different goals.

Right. But measurement for us is simple: it's how many people we get to train. Because we are looking at: if you didn't know anything about AI or responsible AI, and now maybe you know a little bit more, then we've met our goal. If we get one job description where we can add something about responsible AI, so now that person knows that's a part of their responsibilities, then that's a win.

If we get an employee handbook that has been expanded to include the organization's vision for responsible AI, then we've won, if that [00:18:00] makes sense. So there's 2.67 million, I think, people working in the state of Minnesota. I think that's a great goal. So yeah, that's how we would measure it: by individual, and by the number of opportunities we get to expand job descriptions or employee handbooks. Because it's gonna take longer than that.

That's a start, right? You gotta get into an organization, you gotta, you know... And just like when I started in AI ethics over the past 10 years, there's so many more people that are a part of this now, so many more people making money and contributing to the economy. With responsible AI across the state, I hope the same will occur.

It will be more than just me, right, kind of leading the charge.

Justin: Yeah. One thing that's crazy about AI, I think, is just the speed at which it's changing. Mm-hmm. And evolving so quickly. Take the idea that I mentioned, electricity. Electricity took probably more than a decade or so for it to actually infiltrate into everyone's homes, [00:19:00] right?

There were certain people that were able to afford it, and then maybe, you know, three to five years later, more people were, but there were a lot of people that didn't have electricity for a long, long time before it finally showed up. And, question back to you: do you feel like this is advancing so fast that there's a clock, that it's just, uh, we have to act now, right?

I, I know it's gonna take time, but the longer we wait, the harder it's gonna be. 

Elizabeth: I don't know if we have time. So there's something called Long Wave Theory, and it talks about how in between every industrial revolution, from the first to the second, we had like 40 years to catch up, 40 to 60 years. And then the second was maybe 40 years, the third was a little less.

Now we're at five to 10 years where we can catch up. Does that make sense? Meaning the next thing will come, and we're still back here trying to figure out what we're doing with AI. We don't have the luxury of 40 years anymore to catch up across the globe. And so that is one of the reasons why there is such a sense of urgency for me.

But just that I also [00:20:00] understand that people are doing many things and they have lives, and so I'm conscious of how to kind of easily bring this to them in a way that's digestible. 'Cause you can't just throw everything; I've learned that in my past five years. You can't just throw everything at an executive and say, oh my God, you need to be responsible AI tomorrow. You're never gonna do that. There are no checklists to get you to responsible AI tomorrow. Like, it takes time. So if we can start the process now, I think it might be easier for us to adjust when the next big thing comes, right? So there is research that says those organizations that really kind of leaned into digital transformation, they're having a better time with AI transformation.

They're having a better time with responsible AI.

Justin: Well, so you have a framework, I guess you call it the Leadership of Responsible AI framework. Mm-hmm. I think we've been touching on pieces of that. Are there things that, or is there a process or formula that you've [00:21:00] developed that you maybe wanna share?

Elizabeth: Yeah. Thank you for that. So one of the business implications of my doctorate is this Leadership of Responsible AI framework, and basically it's four parts. You start with stakeholders: who are the stakeholders that should be engaged in your AI design and development? For my particular research, I center employees, because that's who I wanted to ensure were a part of it. But it's an exercise of kind of going through who these various stakeholders are.

Are they community? Are they clients? Are they customers? Are they shareholders? Who are the stakeholders? And then, within that, prioritizing the stakeholders. So for me, again, prioritizing employee stakeholders. The next part of that is how do you get them involved in what I call shaping artifacts.

Shaping artifacts are the digital or tangible documents that indirectly or directly guide the design and development of responsible AI. So from a technical perspective, that might be [00:22:00] a technical requirements document. It might be an AI governance document, it might be an SOP, it might be a policy. And from a non-technical perspective, as we touched on, it could be an employee handbook, it could be a job description, it could be a marketing document, it could be a product management document.

All of these documents are used by someone to say, this is what we want our AI system to do. This is how we want it to perform. So if you broaden that employee stakeholder participation, then you're getting more insights. The customer care person who has to respond to customer requests around an AI bot can now be engaged in what that script should look like.

So that's the second part. First part, stakeholders. The second part, the shaping artifacts. Then the next part is the responsible AI design and development. So there are three components I researched: human-centered design, responsible research and innovation, and design science research. And I talked about that at the [00:23:00] Applied AI conference last fall.

And so all of these concepts have been modernized. Whether you're talking about information systems or whatever, they've all been modernized to think about the impacts on humans and society. So that's the next part: how do we make sure that, if I'm taking this document from someone, we also have a way to understand the impacts of our information system, our AI system, on people, on the organization, and on society? Does that make sense?

The final part is the actual output of the system: making sure that it links back to, and can demonstrate that it adheres to, responsible AI principles such as fairness, explainability, trustworthiness, transparency, auditability, and being unbiased. Now, every organization will not need all of those principles.

It depends on what their product is, or it depends on whether or not they are purchasing an AI system. [00:24:00] But the end goal is who do you start with to ensure that when your system is in the marketplace or being used, it adheres to these responsible AI principles, and my framework ensures that employees who are.

To the work, have a path to contribute to their organizations, the future of their organization and the success of their AI systems. Nice. 
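For listeners who think in code, the four parts Elizabeth walks through (stakeholders, shaping artifacts, design approaches, and outputs checked against principles) can be sketched as a simple checklist structure. All the field names and example values below are illustrative assumptions, not her published framework artifact.

```python
# Illustrative sketch of the four-part Leadership of Responsible AI framework
# as described in this conversation. Field names and examples are hypothetical.

from dataclasses import dataclass, field

@dataclass
class ResponsibleAIPlan:
    stakeholders: list        # 1. who is engaged (employees centered, per the framework)
    shaping_artifacts: list   # 2. documents that guide design (handbooks, SOPs, policies)
    design_approaches: list   # 3. e.g. human-centered design, responsible research and innovation
    principles: dict = field(default_factory=dict)  # 4. principle -> demonstrated?

    def gaps(self):
        """Principles the system's output has not yet demonstrated."""
        return [p for p, ok in self.principles.items() if not ok]

plan = ResponsibleAIPlan(
    stakeholders=["employees", "customers", "community"],
    shaping_artifacts=["employee handbook", "job descriptions", "AI governance SOP"],
    design_approaches=["human-centered design", "responsible research and innovation"],
    principles={"fairness": True, "explainability": True, "transparency": False},
)
print(plan.gaps())  # principles still to be demonstrated before release
```

As Elizabeth notes, not every organization needs every principle; the point of the structure is that stakeholders and their artifacts are decided first, and the output is checked last.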

Justin: There's a procedural thing where people see something that's wrong, you know, like there's some sort of feedback loop in this, I'm assuming.

Elizabeth: Yeah, so part of that, the feedback loop can happen within any of the pillars.

It can happen with the stakeholders. Someone says, we're missing a stakeholder. Okay, great, let's get the stakeholder involved. Or a shaping artifact: I started with several that I believed in, as well as from research, and when I met with employees, they were like, hey, a website is an artifact, we need to update our website.

The chief HR officer said, if I were engaged, if someone asked me, I could update our handbook, I could update job descriptions. Procurement people said, I could update our requirements to vendors, and so forth. So yes, [00:25:00] there are all of these ways. Once you get stakeholders involved and they're giving you more insights, then there should be a path built in for that correction. And it could happen at the end, when you realize that your system isn't performing the way that you thought it would, a way to kind of insert enhancements into the process.

Justin: Yeah, for sure. Because that's one of the things that people talk about with AI: it should be continuously learning. Absolutely. It should be continuously getting better as you put more and more information into it, right?

Elizabeth: Yes, absolutely. 

Justin: Cool.

And so does this fall back on the work that you did for your doctorate to get your PhD? Is that related to, you know, AI specifically, or the framework? Yeah. 

Elizabeth: Yeah. So my doctorate is in Leadership of Responsible AI, so it was a doctorate that I curated with my chair, because I was interested in coming out of the doctorate with not only empirical research, but a way to apply what I was learning to business.

So at the end of the dissertation, you have to have, what's [00:26:00] the next thing for research and then what's the implication for business? How can you go in and help a business today? So yeah, it was three years of immersing myself in thousands of articles, continuing my work outside across the globe, understanding what they're doing in different parts of the world.

But really my research was focused on US companies, and I wanted to find out what was happening here and how they were advancing, or how, you know, responsible AI might be stalling in some organizations. And so, based on their feedback, based on research, stakeholder theory was a huge one that I got a chance to immerse myself in.

And then I got a chance to talk to Dr. Robert Freeman, who's what they call the father of stakeholder theory, to really understand what he meant by that and what the research says. Yeah, and it was a way for me to use language that businesses use. So we all talk about stakeholders, so why not start there?

Justin: Yeah, for sure.

That's good. [00:27:00] Yeah. There was an interesting study that Ethan Mollick did. He's the author of this Co-Intelligence book that I like to talk about, but it just came out of the Wharton School of Business, and they did a thing with Harvard Business School as well. They called it "The Cybernetic Teammate," basically, I think, is the term that he had.

But the whole idea was it actually improved things. So one person by themselves is not as productive; they kind of get stuck in their own train of thought, like they can't really bust out. One person on a team, sometimes there actually can be this... And so they did a thing with Procter and Gamble where they had people that were designing products and people that were building products, who oftentimes get in a little bit of a tiff, I guess, right, between what's possible and why don't you just do it my way, and, you know what I mean? Mm-hmm. There's these different sorts of things. And so what they found was a person using an AI along with a teammate actually provided the best possible scenario.

Interesting. From the idea of, like, this is just a sounding board, kind of a neutral entity that can help bridge the gap [00:28:00] between what you're saying and what I'm saying. It can help people just sort of blue-sky different ideas, kind of help them iterate over and over again. So it was very, very interesting.

They found within Procter and Gamble that the teams were not only more efficient, but also more creative, and came up with more patentable ideas. It was a really interesting study that I'll have to, you know, link off to. But I was thinking about that, I guess, with regard to these procedures and these policies, these things that are gonna have to be built, and how they're gonna affect how the entire organization works. 

Elizabeth: Yeah, absolutely. So I have done enough work with organizations where I had the pleasure of sitting with their employees, some before they started using AI and some after, and I loved the discussions when they were open and honest in front of their leadership about how they were leveraging AI. It also gave the leaders an opportunity to see how their teams were collaborating by using AI, and what kinds of efficiencies they had, since the leaders had been kind of removed from that process. So that was one particular organization. Another organization [00:29:00] that I did some work with, a conversational AI company, wanted their employees to lead.

They actually said, we want you to come in and help our organization center our employees. Their employees came up with the vision for responsible AI, the policies. It was constant communication with the executive team, and to me it was a beautiful display. Those employees felt valued because there was a path for them, but they also trusted what their leadership was saying externally, and they also trusted the product itself, because they were engaged.

They could even connect with members across the country about what they were doing, and a couple of use cases came out of that. And so, personally, years ago, when I managed a 200-person team, there was no way that team could work in silos. And my intuition was, I've gotta figure out a way, not me alone but with the team, to figure out a way where we could cross-connect and do some things.

And they came up with some brilliant ideas. One idea was, we [00:30:00] need software developers sitting with analysts; they need to see why it's taking so many keystrokes for us to get the information we need. Modeling and simulation people need to sit with others. And I was like, this is brilliant. And I learned from them because, like I said, they're closest to the work.

They have the insights. So all of this feeds into and informs the way I think about leading, my leadership philosophy, and obviously how I would want to ensure that our organization, the institute, partners with the state and partners with organizations. Because I know it can be done, but I also know that you have to do it in a way that doesn't disrupt operations so much that the ship sinks. 

Justin: Yes. Yeah, yeah, yeah, for sure. I've tried to introduce new technology into businesses too quickly, and people just push back, because it can't be too much. They have to still be able to do their job like they normally do it. And, of course, as you know, it needs to make it better. 

Elizabeth: It needs to make it better.

You also have to help them connect the [00:31:00] dots on the why and the value, right? And I think for too long, with responsible AI, we haven't really done a good job of connecting the value. And so what happened was messaging that we're trying to slow down innovation. Not true. We just wanna make sure it works for everyone, and this is why, right?

And we can boost the economy and people can upskill and reskill like, this is a win-win. 

Justin: Yeah. No, you're a hundred percent right. I guess I hadn't thought about that, but you're right. I think the messaging around responsible AI and AI ethics gave this feeling of, we gotta slow down, we gotta stop and pause and take a lot of time.

And I think, well, people go back and forth on that. Should countries stop, or should people stop building these AI systems? It's not gonna stop. I mean, I'm of the mind that it's not gonna stop. But you're saying basically your new messaging and philosophy to people is: we can do both, right?

We can make this better, but we can also continue to evolve at the same pace that we're at right now. 

Elizabeth: Yeah. 10 years ago, that wasn't the case, [00:32:00] Justin, right? Because we didn't have companies coming to the table being a bit transparent, not necessarily telling us what their secret sauce was, but being transparent about how they were collecting data, why they were collecting it, where they were getting it from.

We didn't have that. So there was a push to stop before you continue. Now, more people are coming online. More organizations are interested in what this looks like, what responsible AI looks like. I think there are some tools out there that are helping. Microsoft's Copilot is one; their responsible AI principles are right on their page.

Not saying they're perfect, but it's a way to tell the story, build a customer story about how to bring AI in responsibly and how to think about it. So yeah, there's lots of different ways that we can, well, we have to, we have to change the messaging because like you said, people are not gonna stop. We also need to be thinking years ahead about being good stewards of our resources.

Justin: [00:33:00] Yes. Yeah. Yeah. I think it's fabulous. You've been able to take all this information that you've gathered, all your research in this space, and now really direct it here. And like I said, one of the great things about Minnesota is our natural resources. Mm-hmm. So now we can bring this all together and actually have a huge impact on the next generation, right? Absolutely. Which needs to live with these natural resources. 

Elizabeth: Yes. 

Justin: I guess, yeah, if someone is getting into this space, how do they engage with you? How do they engage with the institute? Maybe give some information about that. 

Elizabeth: Absolutely. If anyone wants more information on the institute, the email address is contact at M-N-R-A-I dot com.

So that's contact@mnrai.com. Would love the feedback, would love the engagement, would love anything that people want to share. We are open, and so that's a way. I'm on LinkedIn, I'm pretty active on LinkedIn, and would love to engage that way. So yeah, [00:34:00] reach out via email or LinkedIn, and check out our website, M-N-R-A-I.

We have use cases. We have scenarios. We have three additional frameworks that I didn't mention, where we broke the larger leadership of responsible AI framework down into more manageable pieces for an organization, depending on where they are in their responsible AI journey. And so we're just trying to help people get smarter.

That's all. 

Justin: Yeah. 

Elizabeth: And boost the economy. 

Justin: Yeah, yeah, yeah. And we need that these days for sure. 

Elizabeth: Yes, we do. 

Justin: So people can take these frameworks and apply them themselves, but you're also available to help implement them? 

Elizabeth: For the first one, we absolutely need to be a part of it. So we offer these frameworks through hands-on action-based workshops that help reinforce the learning.

We offer templates that we give away afterwards. One of them happens to be something that we call Paths, and it's a framework that really looks at [00:35:00] job descriptions: how do we think about jobs? Might we need to create new jobs? And so we take organizations through what that looks like, thinking about how they might incorporate responsible AI.

So once you do it, yeah, we can come back again, or you can be on your own. 

Justin: Sure. That's great. Well, are there any other topics that we didn't touch on today that you wanna discuss?

Elizabeth: Yeah, you know, Justin, we didn't really get into this, but I do think it's important for people to know that whatever you're doing, there's not always a happy path, right?

Sometimes we get disappointed, but it doesn't mean that the work stops. And so this is what I wanna share. A lot of people reach out to me because they haven't found success, or the same success that I have, in AI, and everyone's path and journey is different. I did not start out to win a ton of awards. I just started out because I was following my curiosity.

So my [00:36:00] message is really: find out what you're interested in, find out what you're passionate about, and find out how AI might be impacting that area. If it's sports, if it's health, if it's fashion, beauty, education, whatever it might be, trust and believe that AI is impacting it somehow. And then figure out your zone of genius, what you're really, really good at, and see if you can't put those together and find your own unique path.

But I see a lot of people getting disappointed sometimes because they're not getting the job they think they're qualified for. But there are many roads to getting into AI. Clearly no one said to me, go study AI ethics, this is what we need. It was an opportunity that I saw out there, and I wanted to explore it.

So that's the message that I wanna share. 

Justin: That's beautiful. No, I love that. I love that for sure. I think I had quoted this in a previous podcast: I think Oscar Wilde said, "Be yourself; everyone else is already taken." 

Elizabeth: That's right. 

Justin: Right. And so I, [00:37:00] I like that saying, that you just gotta march to your own beat.

Elizabeth: To your own beat. Yeah. And when I started 10 years ago, some of the colleagues I had then, we don't speak as much now, but it doesn't mean they're not continuing to do great work. And then I have new colleagues, yeah, based on my passions and interests. So yeah, we're going to evolve, as we should. 

Justin: Yeah, yeah, absolutely.

That's what makes life fun, right?

Elizabeth: Yes, absolutely.

Justin: I love that message. That's great. That's great, Elizabeth. Well, thank you so much for being on the podcast today, and I wish you the best. I will put links to the website and the email address you mentioned, and your LinkedIn page and everything like that.

Awesome. Yeah, people definitely need to check it out. 

Elizabeth: Thank you so much for this conversation today, Justin. Appreciate it.

AI Announcer: You've listened to another episode of the Conversations on Applied AI podcast. We hope you are eager to learn more about applying artificial intelligence and deep learning within your organization.[00:38:00]

You can visit us at appliedai.mn to keep up to date on our events and connect with our amazing community. Please don't hesitate to reach out to Justin at Applied AI if you are interested in participating in a future episode. Thank you for listening.