Edalex Podcast

Navigating AI’s Impact on Education, Workforce, and the Future of Skills Recognition

Edalex


In this wide-ranging conversation, Dan McFadyen (Edalex & SkillsAware) sits down with Ray Fleming (Founder of Stratentia) to explore AI’s impact on education, workforce readiness, and the future of skills recognition.

Ray shares his “two-speed” analogy – where AI evolves at lightning pace while education moves more cautiously – and examines the growing tension between institutional change and employer expectations. The discussion unpacks why AI fluency is becoming essential for graduates, why durable human skills like critical thinking and communication matter more than ever, and how AI should serve as a tutor – not a teacher.

They also explore the urgent need to rethink credentialing beyond traditional degrees, highlighting the rise of STARs (Skilled Through Alternative Routes) and the shift toward employer-aligned skill validation. For organisations, Ray outlines the three phases of AI productivity – personal, process, and paradigm – emphasising governance, experimentation, and leadership communication as critical enablers.

Looking ahead, the message is clear: the winners won’t be those who use AI just for efficiency – but those who use it to drive growth, innovation, and deeper human connection.

Credentials just got personal - Unleash the power of your skills data and personal credentials

Credentialate is the world’s first Credential Evidence Platform that helps discover and share evidence of workplace skills.

Find out more at: edalex.com/credentialate
Follow on LinkedIn: https://www.linkedin.com/company/edalex
Follow on YouTube: https://www.youtube.com/channel/UCjTzWgfUthHonR7Z5LatKGQ

Chapter 1: The Evolution of AI and the “Two-Speed” Challenge

In this insightful interview, Dan McFadyen (Co-founder & Managing Director at Edalex and SkillsAware) and Ray Fleming (Founder of Stratentia) dive deep into the massive implications of Artificial Intelligence on education, the workforce, and skills recognition. Ray uses the powerful “ski lift” analogy to illustrate the tension between fast-moving AI technology and the slower pace of change in educational institutions. They discuss why essential human skills – like critical thinking and communication – are more critical than ever, and how organisations must adopt governance and a growth mindset to navigate this change successfully.

Dan McFadyen: Hi, I’m Dan McFadyen, Co-founder and Managing Director at Edalex and also Director at SkillsAware. It is my great pleasure to be joined today by Ray Fleming. Ray is the Founder of Stratentia. He says that’s not the most interesting bit of his background—and he does have other interesting bits—but we’ll come back to that. Ray has been the Higher Education lead for Microsoft Australia and Google Cloud’s Global Solutions lead for education, and in the last few years, he has been very focused specifically on AI. After four decades of being fixated on technology and education, Ray compares it to sitting on a ski lift where your legs and body are moving at two very different speeds: the lightning-fast leaps into the unknown with technology alongside the careful and thoughtful pace of change in education. I love that comparison and we’ll come back to that as well, Ray. But thank you so much for joining me today.

Ray Fleming: Yeah, thanks for the invite, Dan. It’s always great having a chat with you.

Dan McFadyen: Yes. Yes, we do go back a number of years and had the chance to collaborate on a few things. So, yeah, really looking forward to diving into our discussion. And again, we will talk a bit more about Stratentia. But having that great breadth and depth and length of experience certainly informs the perspectives that you’re able to share with us today. And yes, you have been exploring AI for quite some time—more than six years, nearly six and a half years with your “AI in Education” podcast with Dan Bowen—and you claim to have read 250 research papers. I’m curious whether or not AI did the reading for you. But yeah, why did you do that? Give us some context: what have you learned from 250 research papers?

Ray Fleming: Yeah. Okay. I’ll fess up about the research papers a bit later. So first of all, the context for the podcast: six and a half years ago, I was at Microsoft, and one of my colleagues, Dan Bowen, was at Microsoft and we got into this conversation. We said, “Hey, we should talk about something with a microphone.” I mean, typical middle-aged bloke. But would you believe it? We picked the most boring topic in the world to talk about: AI and education. At least six and a half years ago it really was, because it was super geeky. It was all about how AI algorithms were able to do things like predict student dropout—it was that kind of stuff.
Neither of us knew that, halfway through its lifetime to date, this thing would appear where AI suddenly escaped from being the geeky thing that people knew about but nobody knew how to do. You’d sit at a meeting with people and say, “Surely we can build personalised learning plans.” In fact, I went back to the OECD report from 2021, and they were talking about adaptive learning systems, student retention systems, and using blockchain for credentials. I’m pretty sure that when you were having those conversations, you’d be in a room with six or eight people, and down the other end of the table was somebody in a hoodie, and you didn’t know which department they were from. They didn’t say very much, but they were the person that went and did things—the one person who understood what you were talking about. They were the geeks or the nerds, whichever is the polite way to go.
That was the world of AI as it was six and a half years ago. Even if I think back to my first AI course, which was maybe ten or twelve years ago, I was learning about statistics and linear regression and all that kind of stuff; all of that is forgotten now. Because we’re now in this age of human AI where the skills you need to interact with it are the same as you need to be able to interact with humans. I haven’t done my check to confirm that you’re a real Dan rather than a digital Dan, but you know, we’re kind of getting to that stage now where everyone can get their head around it.
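For a sense of what that earlier, “geeky” era looked like, the dropout-prediction systems Ray mentions were typically small statistical classifiers rather than chatbots. A minimal sketch, assuming scikit-learn, with entirely made-up features and data:

```python
# The pre-ChatGPT era of "AI in education": a small statistical classifier
# predicting dropout risk. Features and data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per student: attendance rate, average grade, LMS logins/week
X = np.array([
    [0.95, 78, 12],
    [0.60, 52, 2],
    [0.88, 65, 7],
    [0.40, 45, 1],
    [0.99, 90, 15],
    [0.55, 50, 3],
])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = the student dropped out

model = LogisticRegression().fit(X, y)

# Estimated dropout probability for a new student
print(model.predict_proba([[0.70, 55, 4]])[0][1])
```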

I think we’re still in a stage where there is a lot of misinformation because of a misunderstanding about things. Everybody has got a different perspective about what AI is. Let’s go back 2,000 years to the story from India of the blind men and the elephant. If you take a blind man to the front of the elephant, he says it’s a tree trunk; if he’s taken to the middle, he says it’s a wall; and if you take him to the back and you’re lucky, he says it’s a rope. That’s AI for me. Everybody has a different perspective on what it is. It’s an answer machine; it’s the equivalent of Google search; it’s a lifelong companion; it’s a romantic partner; it’s a thing that summarises research papers for me. It’s got all of these different use cases and people have different perspectives. So we’re just like any community or organisation: there are lots of blind men touching one part of AI, not aware that there’s lots of other stuff to it.

Dan McFadyen: Fascinating. Yes. Well, and I think if you can extend that analogy and think about how quickly that elephant is changing, right? So, suddenly, is it a camel in the middle and you feel that hump on the top? Let’s talk about that in the context of the research papers.

Ray Fleming: So yeah, you’re right. I’ve read 250 research papers. What I’m trying to do is keep abreast of the research coming out about the impact of AI in education, what’s good practice, what’s bad practice, and what case studies are out there that are informative and evidence-based. So I read a stream of research reports. Now, my reading of research reports is not the way that you’re taught to read them academically. I once read a guide about how you’re supposed to read research reports: you read them once, then you go back and read them again with a new understanding, and then you go back with questions and read them again. It basically takes four hours to read a research report properly. So, I think the maths tells you I have not read 250 research reports that way.

But what I’ve done is I’ve trained AI to read research reports for me the proper way. Then it does two things. One is pulling out the key information, but then the second is making it relatable to human beings, because I’m not an academic language person. I find that research papers tend to hide the interesting stuff. There might be interesting stuff on page 14 rather than in the abstract. They also hide it behind complex academic language. When we talk about this stuff on the podcast, I’m talking to, you know, a teacher in a classroom who’s got 20 minutes to listen to a podcast on the way to work. So it’s got to be pretty clear.

I use AI to help me in that translation process. But I read way more papers than I probably want to because I’ll look at that information I get from my analysis and then think, “That doesn’t seem quite right,” and then I dig into them. Often I’m reading the research paper before it’s been published in a journal. The reason for that is this technology is moving so fast and research publication processes are so slow that it’s out of date by the time it’s been published. If I can see the preprint, then we can use that straight away. I’m still seeing research papers published today that tell me AI can’t do something based on AI from 18 months ago. That’s like talking about a toddler not being able to walk when they are now much older, because this stuff is changing so fast.
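Ray’s two-pass workflow (pull out the key information first, then translate it for a time-poor teacher) maps naturally onto a couple of chained prompts. A minimal sketch, assuming the OpenAI Python client, a paper already converted to plain text, and illustrative prompts and model name:

```python
# Two-pass "read it properly" sketch: extract findings, then translate them
# into plain language. Prompts, model name and file path are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # any capable model works here
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

paper_text = open("preprint.txt").read()  # paper already extracted to plain text

# Pass 1: pull out the key information, including findings buried on page 14
findings = ask(
    "Extract the key findings, method, sample size and limitations of this "
    "education research paper, noting the page each point comes from:\n\n"
    + paper_text
)

# Pass 2: make it relatable -- a 20-minute-commute summary for a teacher
summary = ask(
    "Rewrite these findings in plain language for a classroom teacher, "
    "focusing on what they might do differently:\n\n" + findings
)
print(summary)
```

As Ray notes, the output still needs a human check: when something “doesn’t seem quite right,” he digs into the paper itself.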
Dan McFadyen: That’s right. We were actually thinking about having this interview in December last year, and even waiting from December to January is too late, right? Things will have shifted.

Dan McFadyen: So let’s talk about that analogy of riding that ski lift, where technology is heading one way and educational institutions are heading the same way, but at a much slower pace. I’m sure your answer to this question 18 months ago would have been very different in terms of how institutions are handling and leveraging AI. On your podcast last year, you interviewed and referenced the great example of the University of New England and what they’ve been doing with AI. Are they the future and everyone will follow that model? Are they the exception? And for those that aren’t following that model, what’s holding them back?

Ray Fleming: So I think we’re seeing two or three things happening. One is we are seeing big, important educational institutions making a decision that they have to put AI into the hands of their staff and students. Take one of the top universities in the world, Oxford University. Oxford said they are going to put ChatGPT Education into the hands of all of their staff and students. I’m not sure if they know how they’re going to use it yet, but they know that it’s important to do that in order to start making change. Then a few weeks ago, they said they’re going to put Harvey into the hands of all of their law students. That’s really interesting because it starts to show how it is affecting different professions.

There is an expectation from employers that our students will have the skills they need for the workplace. And let me be frank, for a lot of employers, AI skills are now something that are needed for the workplace. In fact, I know of employers where, if your answer to how to use AI is “I don’t,” you won’t get a job there. If you’re a lawyer in the future—and for many law practices now—you will need to be using AI systems. Harvey AI is the biggest legal AI system, and they’re using that in the law school. They’re doing it in some Australian universities as well. That is driving the connection between academics and learning and the workplace, which we know has always been there. It’s been stronger in vocational education than it has been in higher education, but I think that tension from the two-speed change is going to pull the elastic so tight that it’s going to spring back really fast.

Dan McFadyen: Right, right. Well, and it’s an inevitable disconnect if you suddenly have been taught that this tool is cheating, right? You’re not allowed to use this tool, yet it is a fundamental tool for your job. You can’t do your job without it. This is an incredible dichotomy for education and the purpose of learning.

Ray Fleming: Well, learning is supposed to be difficult—it’s like going to the gym. You go to the gym in order to lift weights, not to watch somebody else lift weights. Learning is a bit like that. You’ve got to go through the hard work of working out how to structure an essay and how to get your ideas expressed well. But then when you go out to the workplace to do exactly the same tasks, you probably use AI to help you draft the report and all those things. So you’ve got to differentiate between how AI comes into the picture so that people have the right skills for employment, without killing their ability to go through the hard work of learning.

I think there’s a bit of a technology rule here: if you’re under the age of 16, AI should never give you the answer. It should always be Socratic with you. But then the other part of it is how do we motivate learners to realise that they have to go through this difficult process rather than switching off the cognition and letting AI do the work? We’re in Australia; maybe a 15-year-old rugby player would understand the gym analogy more than they would understand the AI analogy: “don’t go to the gym and have your personal trainer do the lifting for you.”
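Ray’s rule of thumb (never give the answer, always be Socratic) is exactly the kind of behaviour that tutoring tools encode as a system prompt. A minimal sketch, again assuming the OpenAI Python client; the prompt wording and model name are illustrative, not from any real product:

```python
# A Socratic-tutor guardrail expressed as a system prompt.
# Prompt text and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

SOCRATIC_TUTOR = (
    "You are a tutor for a student under 16. Never state the final answer. "
    "Respond only with guiding questions, hints, and requests for the "
    "student's own reasoning, one step at a time."
)

def tutor_reply(student_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": SOCRATIC_TUTOR},
            {"role": "user", "content": student_message},
        ],
    )
    return response.choices[0].message.content

print(tutor_reply("What's the area of a triangle with base 6 and height 4?"))
```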

Chapter 2: Addressing the Disconnect Between Education and AI Readiness for the Workforce

Dan McFadyen: Yes, great context. So from your experience working with a number of different universities and seeing the space, are our universities getting it? Are they becoming innovative, or are they falling hopelessly behind in some cases?

Ray Fleming: I don’t think that anybody is falling behind. It’s just that perhaps the pace of change has moved up. I only think about AI every morning and try to find out what’s happened, and I can barely keep up. So how can a system or an institution keep up? We’re still in that stage where it’s emerging technology and new use cases are being discovered every day. It is difficult to keep up, but I think that we’re starting to see some areas where it is becoming clear that it’s got a genuine benefit for the education process.

I think one of the most important first use cases agreed across education will be AI as a tutor, not as a teacher. I honestly do not believe that AI is ever going to be able to replace a teacher because a teacher does way, way more than give you information. But AI as a tutor—I think we’re going to start seeing a lot of focus on those use cases. Other use cases will come along and that will help us to say, “This is the positive role it can play in education,” and we can start to nudge out all of the negative roles or the ways that it can cause us to feel like we’re moving forward when we’re not. The challenge for the education system is how do you keep up with all of that? It’s probably easier when you’ve got an 18-plus audience, but if you have vocational education where students are starting at 14, 15, or 16, it’s a very different world regarding how you provide resources and facilities for them. That governance piece is really important.

Dan McFadyen: Yeah. And obviously there’s a lot to dig into on that, but that’s encouraging, your broad view that institutions aren’t falling behind. I do wonder about the urgency and immediacy of consumer expectations, right? Who has the time to commit to a three- or four-year undergraduate degree when they can do a bootcamp or an accelerated, blended, work-integrated learning model? It will be interesting to see how that evolves—just these expectations of “Well, I can learn all that” or “I don’t have to learn, I can just use my assistant here.”

Ray Fleming: Yeah, I think it puts some real pressure on the education system because of that potential mismatch between the learner, why they are coming to university, and the employer. The number one reason people go to university is to get a better job. I know that there are lots of other reasons and socioeconomic factors, but the number one reason is to get better employment chances. Part of that then has to link to the skills they need to be able to sit in front of an employer and show what they can do. And it’s not all about technology: there’s a very basic set of skills that employers are now looking for.

I believe in a couple of years’ time, the perfect recruit into graduate programmes for employers is the student that really, really understands technology and is the smart one that knows how to use AI. If I’m a law firm, I’ll pair them up with my crusty 60-year-old lawyer that thinks technology is a waste of time, because that lawyer has deep experience in the industry and the use cases. I’ll pair them up with a 21-year-old that knows no barriers and has no limits to what they can imagine is possible, and they’ve got this AI tool to help them do it. That will be the brilliant combo. It may not matter that they don’t have much experience in law today. In the last few years, those people might have been hired just to go and do research, but maybe now they’ll be hired to build the new model of what a law practice, an accounting practice, or a management consultancy will look like.

The graduate in a year or two’s time will have learned so much about AI, either because the education institution helped them or because they’ve helped themselves. There was a case a couple of years ago of a student in Turkey who was found cheating in their university exams. They had integrated a camera into their glasses, an earpiece, and an internet hub in the heel of their shoe. The camera was looking at the exam paper, sending it to the internet hub, then out to an AI system that was reading the question and giving them guidance in their ear. That student got caught and expelled. I reckon there would have been a pile of employers waiting at the exit door to hire that person because that’s the kind of innovation and efficiency that you want as an employer! It doesn’t match with the education experience because we’re trying to do something different, but those are the two worlds.

Chapter 3: Assessing and Credentialing Durable Skills in the Age of AI

Dan McFadyen: That’s great. I’m not sure if that’s the modern-day Maxwell Smart or Mission Impossible, but that concept of not having boundaries is key. You’ve talked about the transition to the workplace, so I’d love to pull that thread a little bit more and talk about skills in an AI future—what skills are important today and moving forward, other than being able to put an internet hub in your shoe?

Ray Fleming: I think the essential human skills become more important: collaboration, communication, critical thinking. Those skills become much more important because, as I’ve experienced frequently, AI will take my dumbest suggestion, tell me it’s brilliant, and tell me how to implement it. Or I’ll ask a question and it will give me a well-reasoned, five-page paper full of rubbish because I’ve asked the wrong question or it’s misinterpreted things. All of those critical thinking and communication skills come back as being really important because I need to be able to dig further into the support it is giving to me, in the same way you do when you work with an intern. The intern scuttles off and does something for you, but they come back and you go, “I know you interpreted my request like that, but actually I didn’t want a spreadsheet, I wanted this.”

The challenge for the education system is that we have maybe focused a lot on fact retention and recall, and we’ve drifted away from essential human skills because they are a lot more difficult to assess. I’d argue we probably worried that the assessment has been subjective rather than objective. I’m going back about 25 years to a conversation with the chief examiner of exams in the UK. They were very driven around objective assessment. Then I explained how recruitment happens in the commercial world. At Microsoft in the UK, we used to interview 1500 potential graduate hires in one day with a panel of maybe 25 people. In 20 minutes, I would subjectively make up my mind whether that person was the right fit. Probably in the first 20 seconds, actually! We’ve all done that, haven’t we, Dan? You’re interviewing somebody and you know you’ve got to make the interview last 20 minutes, but you already have an indication.

We’ve lost a bit of trust in the education system and educators to be able to make those judgments for us, so we’ve added big, complicated assessment systems. How do we get back to being able to assess what are now critical skills? They’re not “soft” skills; they’re actually critical skills because those are the skills that are going to survive the AI apocalypse when AI starts doing other parts of our jobs. How do you assess them, certify them, and communicate them? Today, the only way that is communicated is in a job interview where somebody talks about what they’ve done before with passion, not just from a piece of paper.

Dan McFadyen: Right. I love where you’re taking this. There was a really interesting report that the World Economic Forum released last month called “New Economy Skills: Unlocking the Human Advantage.” They talked about “durable skills”—human skills. They made a fascinating point that human skills aren’t necessarily durable by default. They have research charting a decline in those skills during COVID when we weren’t interacting or working with people, and then they’ve come back up since. It raises the question: not only how should educational institutions be assessing this, but how should they be training learners on this? Should that come from formal education, or is the reality that we gather that from life—non-formal and informal learning, work, hobbies, and passion projects?

Ray Fleming: If you’ll excuse me, I’m going to relate this back to AI because I only think about it when I wake up in the morning!

Dan McFadyen: I’m guessing you’re running this whole discussion through ChatGPT and it’s coming up on a teleprompter in front of you, right?

Ray Fleming: Oh, I’m not sure even the worst model could be as dumb as me with some of the things I say! There are two ways to think about using AI. One is: “I’ve got this task, help me with this task.” The other is to see it as a true thinking partner. It’s not a tool like a pencil; it’s something that I can use to help me increase my cognitive ability and think about things I wouldn’t have thought about, or get past brainstorming blocks. The skillset isn’t as easily teachable because it’s not just a specific technique; it’s having another form of intelligence alongside you. Often people ask me how to get it to help with a specific task, but actually, it’s generalisable. If you know how to do a critical assessment of something, you could apply that to a research paper, your shopping list, or a colleague’s performance. AI skills are about having the ability to generalise rather than just knowing one technical task. It’s not like a spanner where if you’ve got a nut-shaped thing, you know you need a spanner. It’s about how do I use this to help me get somewhere?

Dan McFadyen: Fascinating. And that report I mentioned has a call to action around how to develop, assess, and ultimately recognise or credential these 21st-century durable skills. They identify three phases: assessment, development, and credentialing—trying to get a sense of the whole human beyond what they can do in terms of a paper test. This is something that’s near and dear to my heart, and to both Edalex and SkillsAware. Do you see AI in all of that, or is there some of it that doesn’t belong? You’ve talked about tutoring, not teaching, but the tutor role.

Ray Fleming: As an employer, we often use one signal as an indicator of a whole lot of other things. For example, many job ads require a degree. It’s not because the job specifically requires those academic facts; it’s because we want to filter out a bunch of people so we only have 100 applicants to look through instead of a thousand. It gives us a signal that they probably have the tenacity to get through a university course. There are jobs where it is required—accountants or lawyers—but if you want to be a Chief AI Officer, a piece of paper isn’t what you need.

Let me demonstrate with a personal example. In all of the roles you talked about that I have done, a basic requirement has been an MBA. Every job I’ve done in the last 15 to 25 years has had an MBA as a requirement. I don’t have an MBA. I don’t have a degree. So I don’t meet the criteria on paper, but I very clearly met the criteria to be able to do the job, because I’ve done those jobs. I was doing some work with a large financial institution that said they were having difficulty finding candidates. I put myself into their candidate system for a job that I could do tomorrow. It took less than 20 seconds for the system to say, “Oh, we don’t have anything for you.” I know it’s because they checked some boxes and I didn’t tick those boxes.

The credential system, as opposed to the qualification system, is how an employer and employee get matched. Education certifications have become a proxy for those skills. I would argue 80% of the people my age do not have a degree. If you’re an indigenous person my age, the proportion is less than 1%. So if you are recruiting somebody 50 plus, the fact that they didn’t do this thing 35 years ago isn’t relevant. But asking for it automatically disqualifies the people you say you want to hire. If you’re not attracted to a new way of credentialing people, then perhaps the risk of using the old way might be an incentive to think about the new way. To answer your question about AI’s role: AI could have a role in performing more objective assessments of things like communication, critical thinking, and collaboration in a way that is much more cost-effective.
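The screening Ray describes is, at heart, a hard-requirement filter: any unticked box rejects the candidate before a human ever sees their experience. A minimal sketch of that logic, with entirely hypothetical requirements and candidate data:

```python
# A caricature of degree-as-proxy screening: any missing checkbox is a hard
# rejection, and demonstrated experience is never consulted. All data invented.
REQUIRED_CREDENTIALS = {"bachelor_degree", "mba"}

def screen(candidate: dict) -> str:
    missing = REQUIRED_CREDENTIALS - candidate["credentials"]
    if missing:
        return f"Rejected in seconds: missing {sorted(missing)}"
    return "Passed to human review"

star_candidate = {
    "name": "Hypothetical STAR",
    "years_relevant_experience": 25,  # the filter never looks at this field
    "credentials": {"vendor_certifications", "industry_track_record"},
}

print(screen(star_candidate))
# -> Rejected in seconds: missing ['bachelor_degree', 'mba']
```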

Dan McFadyen: Thank you for sharing your own story. I think your tenacity is an example of someone who has been able to succeed and get around obstacles that employers unknowingly put into place. You’re part of what is known in the industry as a STAR—someone who is “Skilled Through Alternative Routes.” In America, that’s more than 70 million people. The challenge is, if you have 1500 candidates and you winnow them down to a more manageable list through applicant tracking systems, you’re screening out great candidates. We’re moving into a post-technology economy where your ability to know facts isn’t as important as your ability to apply them. Does that change the role of education and certification? Are there different skills or pathways STARs should explore, or does their real-world experience give them the advantage so that this becomes less of an issue over time?

Ray Fleming: I think the answer is going to become employer-led. The opportunity for the education system is how they get aligned with employers. In the tech world, one of the most valuable qualifications to get a job is a Microsoft or Cisco certification. That has traditionally been done outside the education system, but because it’s linked into the employer model, an employer knows that person definitely has those skills.

For durable skills, do the education system and employers have to come together to say, “Okay, we can agree on what good looks like”? Because today, I suspect an educational institution’s view of what good communication looks like and an employer’s view are in different places. I’d even contextualise it—is it in a medical setting, retail, or business? I remember working with a childcare provider who said the standards haven’t changed, but the employees coming through the door with certifications don’t meet the same standards people did five years ago, so they have to train them directly. You’ve got to close that gap. Maybe instead of focusing on things that have been around forever, we do that for skills that haven’t been credentialed in the past, like critical thinking. Who can create that certification that has the confidence of employers?

Dan McFadyen: I know organisations like the Council for Aid to Education out of the U.S. have performance-based assessments to surface evidence of those durable skills, and it’s not your typical multiple choice. It’s interesting to think about how AI might help. When the first crisis wave hit and we couldn’t trust the words a student wrote, there was discussion about whether AI could do the reverse and perform conversational assessments.

Ray Fleming: Yeah, which education organisation has the ability to make that happen? Or does it have to be, I don’t know, the “KPMG standard for communication” that then becomes adopted more broadly? If I was a credentialer, how could I get a bunch of employers to validate this? That is exactly how the legal, accounting, and engineering professions work. University credentials in those fields have all been done with industry. Someone is going to come up with the critical thinking credential.
Dan McFadyen: Coming back to technical skills—if I see someone has AWS or Azure certification, I trust that. But if someone just graduated and they’re going to become a junior programmer, they’ve likely been taught computer languages that are five years old or older. We have less trust in that as an indicator of someone who is ready for the team.

Ray Fleming: So we have a double challenge: how we recognise those skills, and how we apply AI to help recognise them when we don’t know exactly what AI can do or how it’s changing. A wicked problem! But I think it’s the obvious way to go. In five years’ time, we hope to have a solution. You can’t solve it all today, but you can move in the right direction. That’s very different from the past where we pinned everything down before we started. It’s a compass, not a map. I can’t show you exactly which roads you’re going to go on, but I can tell you the direction of the destination. We will discover infrastructure as we go.

Chapter 4: Strategic AI Implementation – Governance and the Three Productivity Phases

Dan McFadyen: Perfect. Now, let’s pivot the discussion a little bit. We’ve talked about AI in education and the workforce. I’d love to get your perspectives through your work at Stratentia and RockMouse. You help organisations integrate AI into their business and workflows. What is the single largest mistake you see organisations making when they’re trying to jump into AI?

Ray Fleming: Most organisations are currently at the “personal productivity” level—everyone using AI, but no longer having to hide it like they did 18 months ago. Those big glowing case studies we read, like Klarna replacing 700 people in call centres with AI, are the exception rather than the rule. Don’t worry that you’re missing out, but do worry if your organisation hasn’t got a culture of learning and a set of governance rules so your staff can be confident they are doing the right thing. I’ve been doing a lot of work around governance, including helping create a governance pack for RTOs (Registered Training Organisations) in Australia.

Getting governance in place is important because currently, staff are individually carrying the stress of using these tools without knowing if they’re allowed to. The answer isn’t to ban it, because everyone will just do it at home and the data will leak out anyway. Unless you’re a bank with a super controlled environment, everyone is doing AI in some way. Give them safety and security, but don’t make it ownerless. The first mistake is not having governance; the second is not recognising that we’re all on a learning journey. IT teams have a lot to learn because AI projects are very different from conventional IT projects. I’ll often engage with a leadership team, and then the board, the risk team, and all the staff need to know. It’s not onerous; it’s just about giving two signals to your staff: “We want to enable you to be innovative” and “We want you to use AI responsibly.”

Dan McFadyen: That’s great. I was reflecting on our own journey at Edalex. In any company, you have early adopters and those who are resistant. We’ve encouraged people to show examples of AI innovation in their own roles during our monthly all-hands meetings. We’ve had some say AI doesn’t help them, but more and more are saying there’s no way they could do their job without it. You mentioned most people think about it as productivity versus transformation. Where do you see that balance shifting? Are we transforming how we work, or are people just using it to catch up on their inboxes?

Ray Fleming: I think of it in three phases: Personal Productivity, Process Productivity (teams and organisations), and Paradigm Productivity (completely new business models). Uber is paradigm productivity—starting from what the customer wants and designing a new process. In organisations, the “process” bit can be tricky because many have suboptimal processes grown over years with manual tweaks where the team provides the “glue” between gaps. If you don’t understand that, you either automate a terrible process with AI or the process breaks because you didn’t realise that human glue was happening.

Don’t just automate your existing crappy processes! If you do, your project will likely fail. Regarding the “paradigm” model—which organisation is going to vote for themselves to become redundant? A new model of higher education probably isn’t going to come from an existing faculty; it will come from a startup or an innovation unit that doesn’t have to follow all the old rules. In Australia, we have regulators like TEQSA and ASQA, but the rules can be interpreted in more than one way while remaining compliant. People just need the space to do that. A year ago, I said I wanted to help 100,000 people work out how AI can change their industry. Helping them make a living out of it is more of a challenge because it’s not yet very solid. I think about “AI fluency” rather than just “literacy.” For a leader, fluency is knowing what AI can do now and in the future to adjust how an organisation is structured.

If we can use AI to handle the tasks that usually get pushed to next week’s action list—the important but not urgent things—that is awesome. Those organisations will grow. This isn’t just about needing fewer humans; it’s about organisational growth and doing the important stuff we never get to.

Chapter 5: Practical Advice for Navigating the AI Future

Dan McFadyen: I love that multi-tiered model. I’ve taken a lot of your time, but I have two questions remaining. You mentioned the relentless pace of change, yet people are largely resistant to change. How do we make sure our legs are not going off in one direction while the rest of our body and our education is somewhere else?

Ray Fleming: Two things: Experimentation and Communication. Instead of asking if AI can do something, have a crack. I got a data file from my electricity company—800 lines and 100 data points on each line from my smart meter. I wanted to work out my energy usage, so I gave it to Gemini and said, “This is my file, tell me the story.” AI gave me an amazing story that let me work out that a couple of devices were running overnight that I had no idea about. It gave me the economics of a battery and everything. I saved about $500 on my electric bill in three days! It cost me 30 seconds to ask. So, just have a crack.
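The same analysis can be sanity-checked without AI, which is a good habit given Ray’s earlier point about confidently argued rubbish. A minimal pandas sketch, assuming a hypothetical export with “timestamp” and “kwh” columns (real utility files vary):

```python
# Spot the overnight "phantom load" in a smart-meter export.
# File name and column names are assumptions -- real utility CSVs differ.
import pandas as pd

df = pd.read_csv("smart_meter.csv", parse_dates=["timestamp"])
df["hour"] = df["timestamp"].dt.hour

# Overnight (1am-4am) usage should be near zero in most homes;
# a steady baseline here suggests devices left running.
overnight = df[df["hour"].between(1, 4)]["kwh"].mean()
daytime = df[df["hour"].between(9, 17)]["kwh"].mean()

print(f"Average overnight interval usage: {overnight:.3f} kWh")
print(f"Average daytime interval usage:  {daytime:.3f} kWh")
if overnight > 0.5 * daytime:
    print("High overnight baseline: something is running while you sleep.")
```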

Second, for people at the top of organisations: talk about how you are using AI. It gives people ideas, but more importantly, it gives a clear signal that you think it’s a good thing. They will then open up to possibilities. Talk about it to inspire others. That’s the way to get the group that hates AI to diminish. It’s important to respect their opinions, but there’s too much untapped opportunity.

Dan McFadyen: My final question is the “crystal ball.” Looking 3 to 5 years down the road, what is the impact of AI? I say this in the context that overnight AWS announced they are laying off a large number of people, and graduates are not being hired in some parts of the legal profession. Is it going to transform our economy or trash it?

Ray Fleming: It could go either way. When I sit with a board, I’m looking in their eyes to see if they see this as a growth lever or just an efficiency lever. If you’re an education institution, can you deliver learning in a different way? Could you refocus your team on interpersonal delivery rather than content development? Could you deliver transnational education in seven languages at once? That’s the growth opportunity. The other is the economic opportunity: “Do we need fewer people? Can we replace teachers with bots?”

Over the next five years, I think the survivors will be the ones who go for the growth opportunity. Content isn’t king; human connection is king. Human connection is what delivers a brilliant educational experience. Double down on that. AI can enable you to do that by removing the obsession with backend processes or manual curriculum plans. You get brilliant plans from AI, but the brilliance is in the delivery excellence.

Dan McFadyen: Excellent. I was at an EduGrowth Clearpath CEO Syndicate recently and we had a vibrant debate on this—some say government will step in to make it illegal to fire people, others say the world as we know it is done in five years. I like your optimistic view that individuals and organisations can make a difference. What is your final piece of advice?

Ray Fleming: Don’t let AI switch your brain off. Use it to extend your brain rather than replace it. It’s a bit like social media—it shouldn’t replace all your personal connections, though unfortunately for some, it has. AI could be the same. There will be people that use it to extend themselves and others that use it to replace themselves.

Dan McFadyen: Wonderful. So many thought-provoking perspectives today, Ray. Thank you so much for your time. How can people find your podcast?

Ray Fleming: The podcast is easy to find—just search for “AI in Education.” You’ll either find an episode where we interview someone fascinating or a conversation between Dan Bowen and me about the latest news. We alternate weeks. Some say the interviews are great, some say the news and research conversations are great; I’ll let the listeners be the judge. The easiest way to find me is on LinkedIn—just look for Ray Fleming.

Dan McFadyen: Wonderful. Thank you so much for your time today, Ray.