AI Made Simple

Mila Devenport on AI Literacy, Digital Wellbeing, and the Cognitive Offloading Crisis

Valeriya Pilkevich Season 1 Episode 3


Your employees can use AI tools, but do they understand their relationship with them? Most companies focus on "button-pressing skills" while ignoring the deeper question: are we building human capacity or accidentally outsourcing it?

In this episode, I'm joined by Mila Devenport - AI ethicist, founder of Kigumi Group, and certified AI ethics assessor - who has spent over a decade working with digital natives, schools, and enterprises on digital wellbeing and responsible AI use.

We discuss:

  • Why soft skills and human intelligence matter more than ever in the age of AI
  • What companies get wrong when recruiting and onboarding AI-native employees
  • How to prevent cognitive offloading from turning your workforce into algorithm-dependent thinkers
  • The AI process journal framework that helps people understand where their thinking stops and AI's begins (a rough sketch of one journal entry follows below)
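
A rough, hypothetical sketch of what a single process journal entry might look like (the fields are illustrative, pieced together from the conversation, not an official Kigumi Group template):

  Task: what I set out to produce (essay, research paper, slide deck)
  My starting point: what I thought or drafted before opening any AI tool
  Prompts I used: the actual prompts, copied in verbatim
  What the AI produced: a short summary of the output
  What I checked: the claims and sources I verified, and what I found
  Mine vs. the AI's: which ideas and sentences are mine, and which I kept from the AI
  Next time: what I would do differently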

Connect with Mila: 
LinkedIn: https://www.linkedin.com/in/mila-devenport/
Kigumi Group: https://www.kigumigroup.com

Connect with Valeriya: 
LinkedIn: https://www.linkedin.com/in/valeriya-pilkevich
YouTube: https://www.youtube.com/@aimadesimpletalks 
Podcast: https://aimadesimple.buzzsprout.com

Need help building AI capability in your organization? Book a call. 

Valeriya Pilkevich (00:00)
Welcome to AI Made Simple, the transformation series. I'm Valeriya Pilkevich, and I talk with global leaders, innovators, and practitioners who are shaping the future of work in the age of AI. Today I'm joined by Mila Devenport, AI ethicist, certified AI ethics assessor, and founder of Kigumi Group, a social enterprise in Hong Kong delivering digital wellbeing and AI ethics training to schools and enterprises.

We explore what children's AI behavior reveals about adult blind spots, why soft skills matter more than ever in the age of automation, the concept of an AI process journal for tracking human-AI thinking, and what companies get wrong when onboarding AI-native employees.

Valeriya Pilkevich (00:42)
Hi, Mila, thank you for joining me today.

Mila Devenport (00:44)
Thanks so much, Valeriya, for having me.

Valeriya Pilkevich (00:45)
Mila, before we dive deeper, I'd love to begin with your work at Kigumi Group. You've spent years building digital literacy and digital wellbeing programs for teens and children. Can you share why this mission is so important?

Mila Devenport (00:58)
I actually started out building a character education company. It was really about instilling ethical reasoning skills in children so that they could grow up to become ethically sound leaders, essentially. And what I found was that when I started the company, that was exactly when commercial generative AI tools started to hit the market, and kids started using them, adults started using them. And the conversations about ethics and technology were converging for

maybe the first time. So as a person who had already come from an ethics background and a philosophy background, and from teaching kids philosophy, to me it made complete sense that generative AI and technology was now this new topic to unpack from an ethical perspective. And we needed to give kids, especially, the space to make sense of their relationships with AI and with emerging technologies. So that's really what we started to do

because it was frankly a no-brainer, and there also was a real need for it. We saw that the companies creating these emerging technologies were private sector companies driven by private sector interests, and that's fine, but they didn't really think about the fact that children were also using these products, and policymakers couldn't keep up. So this

initial mandate that our company had, about teaching kids ethical reasoning, actually branched off into child safeguarding and keeping kids safe online, cyber safety, as well as building their ability to self-regulate and essentially use technology to become the people that they want to become, versus having the technology use them. So something that we say at our company is:

If you don't know why you're using a piece of technology, then the technology will end up using you. And we find that still to be true with most of the devices that we give to our kids. So that's what we do.

Valeriya Pilkevich (02:54)
I loved how in one of your podcasts you mentioned that if kids want to learn math, they go to school and there is a math lesson. But there is such an important elephant in the room right now, AI, and before AI, social media and all of digital technology. So where do they learn? There is no place to learn how to interact with these tools, how to keep their critical thinking. So I believe the work that you do is crucial.

Mila Devenport (03:20)
Thank you.

Valeriya Pilkevich (03:20)
Mila, from your work with parents, teachers, corporates, and children, what behavioral patterns do you see repeating across all ages? Are adults really that different from kids when it comes to using AI?

Mila Devenport (03:32)
I think you know the answer to that question, which is why you asked. We are by no means the people who hold all the answers; we partner with universities, pro bono academic advisors, and school teachers and leaders to do our own focus groups and surveys. So we're creating this research as we go, and we share a lot of it to make sure that other people can also leverage it. But to answer your question, we find that this is a societal

Valeriya Pilkevich (03:35)
Yeah.

Mila Devenport (04:01)
issue about AI literacy, and it's not only about upskilling. It's not only about being able to use AI tools to move faster, to do the work better than your colleague, or to get a better score so you get into the right university. Those are tactical skills, and of course a lot of K-12 education is focusing on them, and so are employers, and they have good reason to. But what we find is that the deeper risk

for enterprises and for schools and for families lies in the fact that this is experimental technology, and it has very few safeguards on it. So to answer your question: when technology of that nature comes into contact with people, whether they're little humans or medium-sized humans or mature humans, we find that a lot of strange things start to happen, because again, there are no spaces and vocabularies developed yet

to keep our kids safe, to keep them emotionally grounded when they develop a relationship with an AI companion. And the same goes for when adults in enterprise situations interact with LLMs. We see that there are a lot of psychological ramifications that then snowball into privacy and regulatory problems, compliance risks, or simply bad work.

So I think there's a lot in the AI-human interaction, in that relationship we are starting to see as a society, that we should be concerned about. Not because AI should never be used, but simply because, again, we should be building the relationship between the human and the generative AI tools in a meaningful and intentional way, one that grounds our

work frameworks and our educational frameworks in humanity and human intelligence, versus trying to run as fast as we can with the technology. So a lot of the work we do, not only for schools but also for universities and enterprises, is trying to help them understand the shared vocabulary that your humans, whether you're an enterprise or a school, can use to understand their own

relationship with this emerging technology, so that they don't walk out of that relationship with a more diluted version of themselves. Because in the mid to long term, that's the real risk. It's not actually getting hit with a compliance fine by the regulators; that's a short-term risk, right? You can get over that. It's more that if every person in your company, hypothetically, doesn't know how to use AI ethically and responsibly in a given context,

you could end up with a workforce, an entire company full of thousands of people who are simply hiding behind a tool versus actually having the social and the human capital to contribute to your company's vision and your mission.

Valeriya Pilkevich (06:52)
What strikes me is: how can we expect children to interact with these tools ethically and responsibly if many adults still lack those skills themselves? I work with companies on AI upskilling, and I see that many adults are overwhelmed by the pace of change. We don't know what's coming a month from now, let alone a year. In Europe we now have regulation, the AI Act, which requires companies to provide AI literacy training for employees. It has been a real improvement, but there is still a long way to go.

Valeriya Pilkevich (07:25)
Mila, you emphasize teaching metacognition and ethical reasoning over what you call button-pressing skills. What do you think leaders are still missing when it comes to helping employees, or anyone really, build the critical judgment that lets AI enhance their thinking rather than replace it?

Mila Devenport (07:44)
For us, a lot of it boils down to building decision-making skills in certain contexts.

You don't want to get into a situation where you're upskilling your employees with frameworks that are binary, that are rules-based.

There's a list of rules, these are the ten commandments, you never ever do these things. From a compliance perspective, there is a time and place for those compliance-based rules. But from an ethical decision-making perspective, which is really what we're trying to build for responsible AI usage in an enterprise situation, you want to invest in the employees' understanding of their own capacity, and then their understanding of the tool's capacity, a lot of which is just literal

technical knowledge. What is this AI tool? What is the algorithm that's driving these things? How does it interact with me as a user? What is the interface and the UI/UX that's happening here? And then how do I react in response to that? So to even understand that dynamic, you do have to invest in the human first, because if the person who is using the tool

doesn't understand their own capacity, doesn't have a sense of their own self-efficacy within a work situation or an interpersonal situation, they really are not going to be able to reach the second step, which is understanding the tool's capacity. Of course, you could do a YouTube video where you say, this is algorithmic bias, don't do this, and so on. Still, you can't skip ahead; you would have to go back to the first step, which is: who is the human using it? What is the use case they're using it for? What's the...

Valeriya Pilkevich (08:53)
Mm-hmm.

Mila Devenport (09:15)
context of this AI tool being deployed, and what is the interaction happening there, so that we can fully frame, for the user of the AI, whether this is proportionate and responsible use of the AI. And what you'd want to get to at the end of the day, from the business leadership perspective, is to be able to say: our upskilling program has not just allowed us to move faster, better, be more competitive,

but it also has upskilled our individual employees to grow and flourish as professionals, so that they know how to be safe, they know how to be responsible, and they're able to work better together instead of, again, diluting their human capacity. Because in the long term, the businesses that don't do this are simply not going to make it; they're going to be essentially run by algorithms,

even if you're paying salaries, because the people you're paying salaries to will be deferring to algorithms without any proportionate understanding of whether they're outsourcing too much to an algorithm.

Valeriya Pilkevich (10:17)
Related to this question: so many people are talking right now about AI skills and how important they are, but less about AI-era human skills. So apart from what we've discussed already, like critical thinking and the ethical use of AI tools, what do you think are the future skills that each of us will need in the AI era?

Mila Devenport (10:35)
So, Valeriya, I really don't think hate is too strong a word to use here. I really hate when people say, these are the future-ready skills. We have our own internal set of digital competencies that we use, which we work with social-emotional counselors and AI ethicists to essentially create on our own.

But I really hate when people put it that blankly, because to me, none of us know the future. However, what we are doing right now as a company is keeping our finger very closely on the pulse of what kids in middle school and high school are telling us. Because if we understand what adolescents are experiencing now, we have a better foundation to be able to say

what is the workforce going to look like in maybe six to eight years, right? And so studying adolescents and their relationship to technology, so these are lower Gen Z, upper Gen Alpha kids, is really important to us. And we do see that starting to translate into workforce insights. As for what the kids are showing us with their relationship to technology:

We watch them, we observe them, we have our own interpretation, but we also talk directly to them a lot. We do focus groups, we do anonymous interviews, because they're embarrassed about a lot of this stuff if it's too emotional. And when we put all of this together, what we see, and what we believe is going to be the most important skill set of the future, is continuing soft skills. I don't think there's a lot of change from the other frameworks that have been saying

communication skills, empathy, being able to navigate interpersonal environments effectively, being able to understand the perspective of another person. We see that those skills really are what a lot of the kids are struggling with, and it's not through lack of resources. We work a lot with very highly competitive international schools and private schools in Asia as well as abroad.

Valeriya Pilkevich (12:09)
Mm-hmm.

Mila Devenport (12:30)
And we still see, even with all of the resources that they have, all of the access to fancy platforms and internship programs and trips abroad and things like that, the kids are really struggling with maintaining human contact and communication, because they've grown up too much on screens, or interacting with Gen AI in some cases. And so, yeah, keeping your finger on the pulse of that. Another thing I'd say is we see that communication and

these soft skills are really lacking with the kids who are starting to enter internships at university age. We work with partners who are trying to place a lot of these kids, university age or maybe 18 and up, into internships now, and we've heard from a lot of internship providers that the kids can't even get placed into interviews because they don't know how to talk

to a person. And it's not public speaking skills, right? It's not that you have to ace the interview every time; all of us have bombed interviews and we'll continue to bomb them, right? But the point is that they really have issues with even eye contact, with being comfortable with another presence in the room. And what we're seeing from our company is that this reflects back on the research that we've done on the COVID generation. There is a generation of kids globally, particularly in certain parts of the world where there was a zero-COVID

policy, like mainland China or other parts of Asia, where things were really shut down and kids moved to remote learning at pretty developmentally significant ages. So let's say your kid was seven years old: they were starting to learn how to navigate conflict on the playground, they were learning how to share scissors in the classroom, whatever it was, right? All of a sudden they hit maybe two to three years of remote learning inside,

in a constrained environment, usually very high stress, with parents working at home. We see a lot of the research coming out of, again, our university partners who study this in mainland China or in Singapore or in Hong Kong. And these localized communities are seeing that the kids who were seven when that happened are now 14 or 18, and there are real problems with the social-emotional learning, with the soft skills.

So to answer your question: social-emotional learning. We need to meet the younger employees where they're at. We shouldn't see them as damaged goods or something, but we do need to prepare them with workforce prep programs that get them ready for AI, with a very, very strong human intelligence component.

Valeriya Pilkevich (15:01)
I completely agree. And I see this with the business leaders I work with too. When we talk about AI agents working alongside humans, which is happening faster than most people realize, the skills that matter are fundamentally human: communication, delegation, orchestration. Managing a digital coworker isn't that different from managing a team.

Valeriya Pilkevich (15:22)
Mila, you've actually observed children's AI usage inside schools. Given what you've learned, what should companies stop doing, or start doing differently, when onboarding young AI-native employees?

Mila Devenport (15:34)
I'd say there are a few things. Number one,

try and define the AI and human intelligence skills that you are recruiting for that fit your company's culture.

If you're a company that's very much pro-AI tools and you're happy with people pressing buttons and outsourcing their cognitive tasks, then design your recruitment framework to ask questions about that, right? But if you're a company that wants a more balanced, proportionate use of AI, you still want to preserve creativity, you want to preserve interpersonal communication, whatever it is,

Valeriya Pilkevich (16:01)
Mm-hmm.

Mila Devenport (16:11)
working backwards from your mission and your vision, try and define the AI-human intelligence skill set that you're looking for. I know that's very abstract, but one of the tools that we've worked on and just released recently to help schools, and potentially universities and enterprises, do this is an AI process journal. An AI process journal is essentially just a document, a piece of scratch paper,

for kids who are using AI to complete an essay or a research paper or make a PowerPoint presentation. Kids are already doing all of those things, but only some of the schools are actually teaching them how to track how they're doing it and to demonstrate it through a piece of scratch paper called a process journal: this is how I used AI; I went through this reasoning process; I thought the AI would do this, but then I checked the AI. So it literally is a thinking-process journal. That type of

process journal can easily be made into a demonstration of critical thinking and human intelligence with AI for any enterprise, right? And there are free templates out there on the internet. One thing a company could do is use an AI process journal template, whether it's a Word document or a Google form, whatever it is, and try to find the people coming into your company who were already taught those skills earlier on.

Because those people are going to be the young professionals who are already thinking critically with AI at a fairly mature level. So you want to recruit people who know where their thinking stops and where the AI's thinking begins. Because what's happening a lot right now is that when kids use AI, their thinking and the AI's thinking are getting mashed together automatically from the age of eight. And then they grow up to the age of 15, and then they're like,

wait, was that my thought or was that the AI's thought? And if the AI's thought was literally misinformation, you're going to have a big problem. So even just at a fundamental level, I guess my last piece of advice to companies would be: screen for the kids who know where their thinking stops and where the AI's thinking begins. Whether that takes the form of an AI process journal is up to your enterprise. But it's very important to recruit for human intelligence with AI.

Valeriya Pilkevich (18:29)
I love the idea of a process journal. There is research I recently read on cognitive offloading showing that all of us, adults, people who already knew how to think critically, who could write a presentation or a LinkedIn post from scratch, are now transferring so much to AI that our own cognitive abilities are declining. So the process journal concept, understanding when AI is augmenting your work versus automating your thinking.

That's going to be crucial for everyone, not just children.

Valeriya Pilkevich (18:58)
Mila, you recently launched a gamified AI education program for enterprises. And this is close to my heart because I do a lot of corporate upskilling. What I often see is that AI training gets handed to IT or data science teams, and they're brilliant in their domain but not always great educators. I've talked to some employees who said: I thought I understood AI before the training; now I'm afraid to use it at all.

What would be your advice for L&D and HR teams who want to build real, lasting AI literacy, not just another compliance checkbox?

Mila Devenport (19:33)
I'd say definitely look into frameworks, and if you need to, create your own based on your company's values: what are the human intelligences that matter to you, whether that's soft skills, or creativity, or making time for people to interact with each other so that they can cross-pollinate?

What are the values and the principles that are really most important to you as a company? So go back to the vision and mission. And then I'd say really define an AI literacy program and set of skills that span and integrate, without any separation, emotional competencies, soft skills, and AI literacy skills. That would be my first starting point. And then, only after you've defined

a framework for yourself about what is right for your company in terms of AI upskilling, can you go to the curriculum development stage and say: all right, as a company, this AI framework, which is partially compliance but not only compliance, not just IT, is directly derived from our vision and mission.

Now we're going to plug it into the curriculum development side, and we're going to create upskilling modules with people like Valeriya, right? But we're going to give her the homework that we've already done, to say, this is really what's right for us, because that's the fingerprint of your company. Anyone who subscribes to any AI tools is going to be able to run as fast as you to do all these things. But where we see the magic really happening with humans and AI is

human capacity to think creatively about the application and the proportionate use of an AI tool. So you're never going to be able to get around it. I mean, you can probably throw money at it and get all of the AI tools that your company needs to remain competitive for like a year or two years. But the way that things are going,

you need to really invest in the human capacity to make decisions about the tools because that's going to be what's most important.

Valeriya Pilkevich (21:30)
So no one-size-fits-all approach; instead, something customized to the company's values, mission, and people. Human-centered. Mila, to close: across all your work with children, parents, teachers, NGOs, and corporates, you've seen a wide range of AI behaviors. What is one principle, habit, or mindset you hope every adult can carry into their relationship with AI going forward?

Mila Devenport (21:55)
I think it's our company's subtitle, which is: if you don't know why you're using the technology, the technology will end up using you. And that is true of every device. You could say that about a smartphone. You can say that about an educational app.

It's simply the way that our society has developed relationships with technology. It's been allowed to develop in a very experimental way, where tech products are released into the commercial environment for consumers without a lot of regulatory oversight. And those things are touching our kids immediately. They're touching our workplaces. They're touching our schools. And we need to be more conscientious about the fact that a tech device, especially an emerging technology like generative AI,

and its different use cases or its different forms, those are not like buying a mug from Ikea. This mug has probably been tested quite a lot, but the average generative AI tool has really not been tested. And the black box problem of AI is something that other products simply don't have.

We know the molecular composition of this cup. We can go and look it up. We can talk to the product development team that made it. We can know what happens if we apply different conditions, maybe fire it at a high temperature. But with generative AI, even the people who make it don't know what's going on. This is an immense problem for enterprises, and even just for the credibility and validity of the information that you're using to run your companies.

And it's definitely a child safeguarding problem. So I'd say really think before you open a new tab or click on an app or do whatever you're doing with an AI tool. Think: do I know what I'm using this for? To what degree do I need to use it? Is there a different way to be doing this, or can I stop my use after a certain amount of time? I think having those safeguards in place, as a critical thinking framework,

is going to be more and more important.

Valeriya Pilkevich (23:53)
Thank you so much, Mila. The work that you're doing is genuinely important and I've learned a lot from this conversation.

Valeriya Pilkevich (24:00)
You can find Mila Devenport on LinkedIn and learn more about her work at Kigumi Group.

All links are in the show notes. If you enjoyed this episode, follow AI Made Simple, the transformation series for more conversations with practitioners shaping how AI is actually adopted in schools, enterprises, and everywhere in between. Thanks for listening.