The Canberra Business Podcast

Where Do We Draw The Line Between Creepy And Caring In AI?

Canberra Business Chamber Season 4 Episode 18

A hotel door that scans your face sounds futuristic, until it won’t let you back in after breakfast. That story sets the stage for a candid tour through AI’s promise and pitfalls with Dr. Ryan Payne of CanBe Lab at the University of Canberra. We dig into how convenience can collide with bias, what inclusive design looks like in practice, and why Canberra is a powerful proving ground for responsible, human-centered innovation.

Ryan bridges global AI governance and local impact, sharing how behavioral science helps people actually use technology wisely. We unpack everyday risks like pasting sensitive text into public models and demystify model behavior, prompt tactics, and memory limits. From there, we dive into the lab’s hands-on projects: robot camps and computational thinking for kids, high school programs on cyber resilience and scam risk, and health research that uses better data to reach underserved communities with meaningful prevention. Along the way, we examine the hidden signals in our devices, including gendered voice assistants and the norms they reinforce.

One standout initiative aims to expand children’s aspirations by securely visualizing them in a wide range of future careers, traditional and non-traditional, powered by on-site infrastructure and strict privacy. We talk partnerships with startups, energy providers, and global tech firms, and we share pragmatic advice for founders: show up, help first, and make the complex simple. The throughline is clear: move faster than harm, measure inclusion, and design for dignity. If you care about AI ethics, behavioral insights, or building products people trust, this conversation offers practical examples and a roadmap for action.

Enjoyed the conversation? Follow Canberra Business Podcast, share this episode with a colleague, and leave a quick review to help others find it. Got thoughts on the Mission Impossible vs Bond debate or how to make AI more inclusive? Drop us a message at infocanberbusiness.com.

SPEAKER_00:

Hello and welcome to the Canberra Business Podcast. I'm Greg Harford, your host from the Canberra Business Chamber, and today I'm joined by Dr. Ryan Payne from the CanBe Lab at the University of Canberra. Ryan, welcome to the podcast. Hey, thanks for having me. Now, the CanBe Lab launched a few months back. But tell us, what is it that you're doing?

SPEAKER_01:

Well, first of all, thank you so much. You actually came and spoke at our launch. So it's good to be back here and to be able to talk to you a little bit more in a different setting. The CanBe Lab, or Can-Be Lab, I have to make sure with my Canadian accent I say CanBe. Otherwise, it sounds like candy. And everyone loves a good sugary substance, but uh it's what society can be with technology, or CanBe as in Canberra Behavioral Lab.

SPEAKER_00:

Yeah. And you're at the university. How did you come to be? What are you doing?

SPEAKER_01:

The University of Canberra actually reached out and said, hey, you know, we recognize some of the things that are happening with artificial intelligence and with the growth of technology. And my background, I work as the United Nations delegate on AI governance. So talking about that around the world, and we just joked off the air, I literally just got off a plane. I've been in Shanghai, Bhutan, Singapore, Hong Kong, Sydney, and then little towns along the way at retreats before arriving just here. And tonight I'm speaking about dementia and AI. And I think one of the things that you see with the CanBe Lab is it's kind of, I want to say, a response to what we're seeing with technology. You know, we kind of don't necessarily understand how artificial intelligence is going to change the world, but we're seeing robotics, we're seeing biometrics, you know, personalization. When I was in China, my hotel room scanned my face. Really? I didn't have to have a key card, which in some ways was handy because I didn't have to worry about losing a key card. And if you travel lots, you know the pain of trying to figure out which hotel key card and where did I put it. But it also meant that if it didn't recognize my face, it didn't let me in my room. And sometimes first thing in the morning, you're a little, you know, smudgy, and I won't say the bags under my eyes are a little bit longer than they should be. And it's like, is that really you? Is that really you? I came back from breakfast one time and it didn't recognize my face.

SPEAKER_00:

Right. That's a little bit creepy, really, getting locked out by technology because you're not your bright-eyed, bushy-tailed self.

SPEAKER_01:

Yeah. Well, and also I think most people recognize that when biometrics came out, it was biased against everyone who wasn't a white guy, right? In the 1970s you had the Ivy League schools, Harvard, Yale, Stanford, and they did artificial intelligence and they trained it on their students. That's where they got the data from. And at that time, it was primarily white guys. And so you fast forward 30 years, and now we have issues with recognizing, you know, women, people of any kind of ethnicity, anything around the world. And so we kind of are trying to compensate for that. And you see the United Nations coming up with representative training data. And then you see other countries such as China, and they are literally saying, here's some olive oil if we can scan your face, let's create our own training data set. But then, like, I was in Shanghai for this, and as a Canadian who definitely comes from a Scottish-Irish, pasty-white background, the trained data didn't recognize me. And so I got to experience that reverse, or just plain, discrimination from training data, because it wasn't based on Western ideals. So, how did that make you feel? You know what? It actually made me smile. I recognize I come from so much privilege. I'm literally a white guy from a North American Ivy League educated school, and I'm now a university professor who gets to travel around the world and talk about AI. So I fully recognize some of the privilege that comes with that. So it was kind of nice to be like, oh my gosh, this is something so silly. I mean, I'm coming back from breakfast and then, oh yeah, I can't get in my room. And thinking, that's a minor inconvenience, how many other people every day have to go through that? And it's kind of a good reflection.
And actually, if you go to the CanBe Lab website, it's c-a-n-b-e-lab dot org, you can see that we've started looking at inclusivity and technology. And that was one of the really big things I started pushing for when I started working at the lab: to say, okay, how do we actually include everyone and make sure that technology doesn't keep perpetuating those same stereotypes or same ideals? And how do we actually use technology to empower people? More so than let's set something up and then go back and fix these quote-unquote imperfections.

SPEAKER_00:

So it's a really interesting bit of work. And how far along that journey are you?

SPEAKER_01:

Yeah, you know, at the lab, if you ask my frequent flyer miles, they would say it was a long time ago. If you ask my mom, she commented that apparently my face has aged a thousand years in the last three months. But we only launched recently; we're not even at the three-month mark. But we've been really lucky. We've had Andrew Leigh, the assistant minister for treasury, we've had the education minister, and the prime minister has even come out to the lab or talked about the lab, and we've seen it on the website, and I've been able to hear about it from other people around the world. Somebody in France just a few days ago was like, oh yeah, yeah, the Lego thing and the CanBe Lab and stuff, I've heard about this in Australia. And it kind of caught me off guard to be in an airport and have somebody talking about this, but people are starting to recognize really quickly the need for it. And I think one thing that's exciting is that now Canberra is at the forefront of trying to actually look at this.

SPEAKER_00:

Yeah, absolutely. And that's got to be good for Canberra, but also, of course, for the university where you're located. Now, tell me, Ryan, the CanBe Lab, I've got your brochure in front of me. It says you apply behavioral science and advanced tools to design programs that drive societal change. What are the current trends that you're seeing in behavioral science?

SPEAKER_01:

Yeah, you know, one of the things that's interesting is actually the University of Canberra, or UC, I don't know what the proper acronym is, I call it UC, or UCAN. You're starting to see that the executive dean is a behavioral scientist, he's an economist. And that was one of the things that we recognized going into this. In terms of trends, I think one of the biggest ones is recognizing who uses technology and how do you actually teach people to use it? I think everyone's sort of used a ChatGPT or Copilot or something. I've met a lot of government employees who put, you know, very sensitive things into Copilot or into ChatGPT and say, oh yeah, it fixes the grammar. And I kind of have to be like, you realize that you've just put something in that now everybody who uses ChatGPT potentially could access. Famously, Otter used Copilot, and it was in a privacy-safe, contained system. And then their competitor, Zoom, asked a whole bunch of questions of Copilot, and Copilot went to its training data and said, oh, we're not going to say the company name, but who do we know that does this kind of thing? And gave away a whole bunch of secrets of Otter to Zoom unintentionally. Right? So from a behavioral science perspective, it's teaching people that it's like a five-year-old. It knows the secret, but if you ask a few questions, you can find out the secret yourself. Yeah. And I think teaching that.
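The risk Ryan describes, sensitive text pasted straight into a public model, is something a team can partially guard against before the text ever leaves the building. A minimal sketch, assuming a couple of illustrative regex patterns and a hypothetical redact() helper, not any agency's real tooling:

```python
import re

# Illustrative sketch only: scrub obvious identifiers out of text *before*
# it is pasted into a public model. The patterns and the redact() helper
# are assumptions for demonstration, not a complete PII scrubber.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a [TYPE] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact jo.bloggs@example.gov.au or +61 2 6123 4567"))
# -> Contact [EMAIL] or [PHONE]
```

A real deployment would need far broader patterns (names, addresses, project codewords), but the principle is the same: strip identifiers client-side, before the prompt can reach anyone's training data.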

SPEAKER_00:

So that's a big issue for anyone who's using this kind of tech, right? How do you go about navigating your way through that?

SPEAKER_01:

That is the ultimate million-dollar question: how do you navigate what you say and what you don't say? And I think you still see technology companies themselves trying to figure that out. Copilot, when it first came to market, sold itself as a contained system: great, we'll store your data, it's contained, we can guarantee it's safe and it's private. But they didn't recognize that by taking that data and using it to train itself to make it better, it also made itself vulnerable to these kinds of questions and inquiries. So I think in terms of behavioral science on AI, you actually see now a lot of people trying to figure out how the algorithm processes information, and how, as a society, we use, I don't want to say snake oil, but the things that worked in the 1920s and 30s to figure out information. I'm sure all the spy people of Canberra, did you know that Canberra has some of the highest concentrations of, like, intelligence officers and spy people? I didn't know that, but that's very cool. Yeah, one of my Uber drivers told me that the other day, and apparently there's books you can read all about it, and I was like, oof. And so I started thinking, when I meet people now, asking them, are you a James Bond or a Mission Impossible kind of person? But they're probably just as much as everyone else trying to figure out how people do logic. Because those guys do the logic in verbal conversations to figure out how to gain information without making it seem obvious. But with ChatGPT, Gemini, whatever kind of large language model you're using, it's the same strategies: how do I figure out how you're processing this information, and then how do I ask questions to get around the biases, or the what-you-think-I-want, and get clarity on what I actually want, right?
So don't give me a table, give me a paragraph; or don't give me a generic answer, give me something very specific. The other question I get asked a lot is, why does ChatGPT forget after three prompts what I just told it? And I actually just learned this the other day: it's a quad processing system. You have three processing systems holding the information, and the fourth one stitching it together. So as soon as you ask a fourth prompt past the three it's holding, it has to remove the first prompt's processing. Right. It shifts them along, and the first one gets bumped to the background, so it doesn't consider it high priority; it considers only the three most recent prompts. And that's actually why it has that forgetful nature.
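The "quad processing" description above is Ryan's shorthand; vendors don't publish their context management. But the forgetful behavior he describes can be pictured as a fixed-size rolling buffer of recent turns. A toy sketch, with the class name and turn limit as assumptions:

```python
from collections import deque

# Toy analogue of the forgetfulness described above, not how any real LLM
# actually manages its context: keep only the N most recent prompts
# "in focus", so older turns silently fall out of consideration.
class RollingContext:
    def __init__(self, max_turns: int = 3):
        # deque with maxlen drops the oldest entry automatically
        self.turns = deque(maxlen=max_turns)

    def add(self, prompt: str) -> None:
        self.turns.append(prompt)

    def in_focus(self) -> list[str]:
        return list(self.turns)

ctx = RollingContext(max_turns=3)
for p in ["my name is Ryan", "I like Lego", "explain bias", "write a summary"]:
    ctx.add(p)

print(ctx.in_focus())  # the first prompt has been bumped out
```

Real models work over a token-limited context window rather than a fixed prompt count, but the effect is the same: once the buffer is full, the oldest material stops influencing answers.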

SPEAKER_00:

Just like many people, I guess, really. Yes. Tell me, what does an average day look like at the CanBe Innovation Lab? Oh, a lot of people wanting to come see technology.

SPEAKER_01:

We haven't even talked about the robots.

SPEAKER_00:

So what have you got in there? You got robots?

SPEAKER_01:

Oh, we've got everything. And in front of you right now, we have Lego. For young kids, we do robot camp; it happens during school break, and we teach them how to work with robots. We do what's called computational thinking. So we use Lego and we use crayons and other things. We're actually, on the down low, in talks to work with Crayola and with Lego, but shh, that's on the down low.

SPEAKER_00:

And who would think who would think that those sorts of brands would want to be associated with uh a behavioral science lab at a university?

SPEAKER_01:

They loved it. When we did the duck exercise, we showed how people take the same six blocks, and we ask people to make a duck with them. I took a photo, and I'll send it to you so you can put it up as the picture for this podcast. The idea is that we've got these same blocks, so that's the training data; then the task of putting them together, the idea of making a duck, which is the algorithm; and then personalizing information. And that makes real sense, right? We've got training data, algorithm, personalization, and that's how a large language model works. And I can show you a little diagram of that. But if I then show you a duck and we go through and do that, or we use crayons, or we use something like that, little kids understand it. So we've got camps for kids, and we've got Bright Minds, which is our high school initiative. We interact with high school students and we use behavioral economics and behavioral theory, so biases. Last year we were able to look at loneliness amongst teenagers. This year we're looking at maybe a cyber resilience or a financial resilience kind of situation. People don't realize, but 18-to-24-year-old guys are the most likely to fall for bank scams. And it's not because they're not capable and competent, it's the opposite: it's overconfidence bias. So you teach an 18-year-old guy and say, hey, look, yes, you're the smartest, probably most tech-savvy guy, you're on it, you and your buds are great, but you're the most likely to click on a scam because you think you know better, right? Whereas the older person who might not have any knowledge of technology is going to be a lot more cautious, right? So we have a high school outreach, and then we have the CanBe Lab itself, which covers university research. We engage with all sorts of different ideas, from inclusion in technology to the genderization of technology.
We've got, if you go on our website, you can help name one of our robots; we've got a research study there. We're looking at inclusion in how you collect data. So, what kind of data do the health and the stats departments collect? Can you actually do intersectionality of data and inclusion of data? So recognizing that somebody who might be part of the LGBTQIA+ community actually is much more susceptible to certain conditions. Then if you know that, you can actually create preventative measures and do behavioral nudging to make them more proactive. One of the other ones we've done is actually put a radiopaque additive into Lego, and that's just something that came out this year. It doesn't sound like a lot, but when you scan a Lego piece inside a child that swallowed it, it now shows up on X-rays. And that doesn't seem like a big thing, but a lot of kids swallow Lego, and then you'd have to play the guessing game of where it is inside the child. Recognizing that, okay, a lot of kids swallow Lego, great, how do we then find it? Well, we can actually just put in this little chemical, and Lego now does that internationally. And so every doctor now, when they do an X-ray, can find the Lego piece and figure out what the best decision is. Taking that to a university perspective, if we recognize who's vulnerable for learning or inclusion or engagement or those kinds of things, we can actually create programs at universities, or at health programs, to say, hey, look, you're susceptible to these diseases, you're susceptible to these kinds of things, let's help you with that. And then the last group we want to reach is actually kind of our greenhorns, and that's people who are 55, 65, up till 80. My mom asked me the other day about using ChatGPT to generate images for her Bible study, and she's going to be 80. And I thought, right, you're an 80-year-old woman who wants to learn, who wants to use the new technologies.
My grandma, the microwave for her was revolutionary. But she didn't want to get one because she was worried about how to use a microwave. And I'm seeing my mom now going, yeah, I know I need to use AI, it's the future, but how do I learn? Where do I go to learn? Right? YouTube University is one, but there's so many videos. So we started thinking, can we create courses? As a university, we're an authority, or we have some credibility as the University of Canberra, and inside the CanBe Lab, can we actually create education? So you ask what the average day is: it's covering all of those things. We've got different robots. We actually just got a new one. We've got recent partners that have given us stuff. People are now starting to give us new technologies and say, hey, can we give this to you? Because we recognize you're doing stuff. In Shanghai, we got to see the new DISO robot, an RPG one. It cloned my voice, and then everything came back in my voice. It also showed photos of me on its chest, and then tried to figure out my soulmate and said, here's people that potentially could be your soulmate. I haven't seen them yet, but you never know.

SPEAKER_00:

Yeah, and do you think we're ready for that kind of technology?

SPEAKER_01:

Yeah, you know what? Having my voice cloned back to me, I thought about consent, and where do you draw the line between surveillance and personalization? It's great that it could recognize my voice, recognize I was Canadian. It asked if I wanted maple syrup and, you know, pancakes. And I was like, that'd be nice. But I also thought, where's my anonymity? Right? If I'm walking down the street and I jaywalk, am I now going to get a fine? If I, you know, stop to smell the flowers, am I going to get flower ads for the next 25 minutes? Where do you draw the line between creepy and caring? Yeah. In terms of robots, I think we've seen the doomsday movies, like I, Robot and all the movies that came out. And it was kind of good that they were there, because I think they're sort of the canary in the coal mine, to get people thinking, hey, don't just think about whether we could make this technology, but whether we actually should. So in some ways I think we're kind of balancing that, but I also wonder if trade policies and politics are saying let's make as much money as possible, so keep producing innovation, whether we should or not.

SPEAKER_00:

Yeah, and I guess if you don't do stuff at UC or in the CanBe Lab, there'll be someone else somewhere around the world who might be doing stuff.

SPEAKER_01:

Yeah, and that was actually one of the questions that we had: for some of the things that we're looking at, we're like, do we want to foster this or do we want to further it? And then we start thinking, but if we don't, we know other people already are. So if we get in the conversation, we can then say, hey, what about the ethics? What are the privacy implications? What are these things? Rather than waiting until it happens and then trying to react to it.

SPEAKER_00:

Yeah. Now it's it's early days, obviously you've been going three or four months. Um, what kind of partnerships have been most fruitful for you to establish so far? Apart from Lego and Crayola?

SPEAKER_01:

Everyone. You know what? I've gotten to meet some really unique Canberra businesses. One of them is actually through the Chamber of Commerce. We had Christmas in July at the University of Canberra, and you very kindly invited Stride Scooters. And I met these two guys, and at that event I had a service robot following me around, kind of like a dog or a lost little child, a novelty, and lots of people took selfies with it. And then we were talking, and we recognized that we could actually do profiling of customer bases based on behavioral segmentation or psychographics. And I talked about that a little with them and sort of carried on, and they came back and we chit-chatted on and on. And they now have Google funding and NVIDIA funding; not through the lab, they went and got that on their own. But because of that, we're able to say, hey, if you sponsor one of our researchers, we can actually create personalized segmentation, and not only for scooters, but for EVs and for the adoption rates: sort of, who do we need to nudge to encourage to adopt an EV, and how do we target them? But also, what kind of customers should we say, look, we know you're going to adopt this, let's not put our marketing money there. One of the things we also see is with a lot of energy providers. One of my PhD students is looking at trust and distrust, and we've got RACE for 2030 and some of the retailers of energy in Canberra. And we look at that and say, okay, we're trying to get a lot more power being sold back to the grid. Is that a good thing? And, you know, some countries or some places are giving three hours of free power, right, to try and encourage consumption of power.
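The behavioral segmentation Ryan describes for scooter and EV adoption can be pictured as a simple scoring rule. Everything here, the traits, weights, thresholds, and labels, is an illustrative assumption, not the lab's actual model:

```python
# Toy sketch of behavioral segmentation for adoption nudging: score each
# customer on a couple of assumed psychographic traits and bucket them,
# so marketing spend skips the sure adopters. Traits, weights, thresholds
# and labels are all illustrative assumptions.
def segment(openness: float, risk_tolerance: float) -> str:
    # weighted composite score in [0, 1]; the 0.6/0.4 weights are made up
    score = 0.6 * openness + 0.4 * risk_tolerance
    if score >= 0.7:
        return "likely adopter: no nudge needed"
    if score >= 0.4:
        return "persuadable: target with nudges"
    return "resistant: low marketing priority"

customers = {"A": (0.9, 0.8), "B": (0.5, 0.4), "C": (0.2, 0.1)}
for name, (o, r) in customers.items():
    print(name, "->", segment(o, r))
```

In practice such weights would presumably be fitted from survey or usage data rather than hand-picked; the point is that segmentation tells you where nudges are worth spending and where they are wasted.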
So, you know, we've had energy providers, we've had scooter companies, we've had creative toy companies, we've even had technology companies reach out. NVIDIA is now one of the people we're talking to. We've had Cisco and Oracle for the infrastructure capacities. One of the projects we're working on, we need to make sure we have a secure server so it can't process things off-site. And it's part of our AI visualization, showing what the future can be. And the first project was actually for little kids. We recognize that a lot of boys and girls, when they're seven to nine years old, already have gendered perceptions of who does what. So firemen are men, right? Police officers are men, teachers are women, nurses are women, and doctors are boys, those kinds of stereotypes. But we also see that if you ask a little kid, they only say the jobs they've seen. So, what do mom and dad do? And then what basic jobs are on TV? So one of the projects we're talking to Oracle and Cisco Systems about is, could we take a photo of a little kid, age them 20 years into the future, and then show them in 20 traditional and 20 non-traditional job roles for their gender. Now, for those who are thinking, oh my gosh, you're genderizing kids from a young age, we actually thought, let's just take 40 jobs. So you can see yourself as an astronaut, as a stay-at-home dad or stay-at-home mom, as a nurse, as a doctor, as a researcher, as an artist. And let these kids see themselves in those job roles. And it's also important for somebody who might be from a rural community or from a, maybe, segmented community. You typically go to the Indigenous community, but I think it's just any segmented community, saying, okay, I've never seen anyone outside of my parents do this job role, but could I? One of the people we talked about was Nichelle Nichols, the first black woman on TV who wasn't a slave.
And there's a really good clip where she talked about getting a letter from a young woman, Caryn. And I was like, okay, where's this story going? And this Caryn said, oh, I saw you on TV and you inspired me to be an actor. And I thought, yeah, that's great. Except Caryn changed her name to become Whoopi Goldberg. And at the height of her fame, she did Star Trek: The Next Generation, so more women could see black women on TV who were equals, who weren't slaves. It was the power of visual representation. So we're trying to get sponsors, and if anybody listening says, hey, we like that project, we're happy to have them jump in. We need to pay for servers and pay for security systems, because to do that, we can't use a lot of the existing infrastructure; it would require processing overseas or processing off-site. And if that happens, then you can't secure the safety of a child's image. So we're sort of figuring out the projects and then trying to find partners to go forward with that.

SPEAKER_00:

Yeah, excellent. So, what advice, Ryan, would you give to Canberra entrepreneurs who are looking to build human-centered products or services? Ooh.

SPEAKER_01:

What kind of advice would I give to Canberra entrepreneurs? I think getting involved in a lot of different groups and clubs. And I think also there's a very North American concept of having your business card at the ready. And I think Canberra doesn't want to do that, because I think it comes across grimy or sleazy, you know, like a car sales person.

SPEAKER_00:

Um I love my business cards.

SPEAKER_01:

Yeah, yeah. My parents had a printing business, and so I've had business cards since I was about seven. Fantastic. Yeah. And for a long time, as the AI guy, I had a QR code. You could scan a card and it would automatically bring it up, or I had QR codes and all those kinds of things. And then I realized the power of the physical in the digital space. In the digital world, physical things matter enormously. And I've written about that for the New York Times. And it's funny, I always say to people, look, you find Coles receipts in your pocket and you pull them out, and you're like, oh yeah, I remember I bought that, that's when I was buying that stuff. And it's the same with a business card. So I think the easiest low-hanging fruit is literally give out physical business cards. And you get them in stacks of 500, and you think, I'm never going to use these; give them out every day. And if you give a card out, give out two so they can pass one on to someone. And I know that's common sense for business networking, I've heard this a thousand times in university and in school. I think the other thing in Canberra is going to some of the offbeat things. Like, I went to slam poetry the other day, and we were talking about how AI generates poetry, and they had some slam poetry people there. And so I said, look, if any of you have any questions, and I made a point to go talk to each of the people afterwards to say, hey, if you have any questions, let me help you. I don't know anything about slam poetry, but I know about technology. And I didn't even say my title, or that I'm from a lab, or I'm a researcher or a professor or anything. I think sometimes in Canberra we're worried about selling ourselves, and I think just say, you know what, just go out and help everyone. Don't sell your business, just try and help with solutions.
And I think that if you have a good product and you can help people with it, it'll automatically do really well.

SPEAKER_00:

And hopefully that's true of the Canberra Behavioral Lab, which I'm sure it is. So, where are you headed? What's the direction of travel? What are you aiming to have achieved through the lab over the next couple of years?

SPEAKER_01:

Yeah, you know, over the next sort of six months, we're still trying to find some partners and get things going. We've listed a whole bunch of projects; we've got, I think, about nine or ten under way. The academic side of me, and the University of Canberra will love this answer, says to publish that research and disseminate it out to the academic world. But if you're not an academic, or if you understand academia, you know that that's a PDF document on a website, behind a paywall potentially, so it's not going to make an impact. So we actually are trying to think of some impact things. We would love to get the AI visualization of kids up off the ground. We recognize that's a year-long, two-year-long process. So invite me back in two years and we can talk about whether it's launched, or I can tell you all the obstacles and all the reasons why it hasn't. The other one is about showing some of the results that are really easy. One of the things people don't think about is the genderization of technology. Siri, Alexa, everything's a female voice. And I ask a lot of kids, sort of curious, what does that mean to you? And they all kind of give a 1950s-school answer. And it's funny but true. Mom knows everything, but she's subservient, right? And that's true every time. Let's be honest, women are smarter than men. We both smile, we know it's true. Yep. Ladies, we know you're brilliant. I'm just happy to be invited to the party. But because these kids are growing up with all these chatbots or all these services being female voices, you know, Alexa on demand giving you an answer, hey Siri, give me what I want when I want it, that actually reinforces women as being subservient. And so I'm kind of curious if we can change that, and I tried to see about creating policy and then about trying to do nudges and behavioral changes.
And then I realized that no one's actually proven that that's actually taking place. So that took us back a step to say, okay, let's prove that it happens, it's not just a hunch. And then once we've proven that, trying to come up with these behavioral nudges to say, hey, how do we make sure that we don't just reinforce this 1950s, 1960s stereotype going forward? So people don't have to revert back and fight the same fights, you know, just in a different form 20 or 30 years later.

SPEAKER_00:

Yeah, I mean you could say, of course, that Alexa and Siri are busy telling you what to do, often more than necessarily being subservient, I guess.

SPEAKER_01:

Yeah, yeah. No, I was at an elementary school the other day, and that's kind of what came out. And I laugh because every kid had the same answer: you know, dad might make money and mom might make money, but mom's the smart one. I'm like, yes, that's every household. I'm sure that's right.

SPEAKER_00:

Um, I have completely lost my train of thought at 27 minutes and 45 seconds, so we're just going to whip that out.

SPEAKER_01:

That means my answers are long-winded, which, I mean, I thought I gave short answers, so I got a lot of people.

SPEAKER_00:

Yeah, no, that's all good. So moving to a more inclusive approach for technology is obviously really, really important. What does success look like in that space over time?

SPEAKER_01:

You know what? I think the short-term measure is just to prove why it matters: identifying what's happening with technology and why we need to actually talk about these things. Because sometimes it's not until we actually see the impact that we change it. There were a few famous professors who came out when social media took off, around the Time Person of the Year 2006 "You" moment, and they said, I don't think this is good. And everyone said, what do you mean? And it's, well, you're now making people personify themselves, and they become brands, they become larger-than-life personalities. But Dolly Parton even said, you know, I can take my wig off and go home, right? Whitney Houston said, yeah, I kick my heels off, I'm no longer Whitney, and she used a different name. And I think that with social media you go, oh wait, we're always on. It's the panopticon, always being watched. And it wasn't until about 2016, and even now in Australia trying to get the under-16 regulations through, you know, 20 years later. It's 20 years, right? I think with AI, the ability to cause harm is a lot quicker, and so we need to be quicker in recognizing the problems. So success for me right now is just showing, hey, here are the problems, we need to fix these things, and then letting other people, maybe subject experts, or people who care, or who just want to try ideas, have a go. I think sometimes that's the other part: letting everyone try and saying, let's fix this together, as a team, as a country.

SPEAKER_00:

Excellent. Well, look, we wish you luck with progress over the next little while. There might be some people listening to this, coming up to school holidays, whose ears lit up at the idea of robot camp for the kids and bright minds for the high school kids. If you do want to find out more about those, or indeed about the CanBe Lab more generally, where do you go for that information?

SPEAKER_01:

You know what? I always tell everyone: go to the website, because you can find my email, you can find everyone's emails from that. It's canbelab.org. People can email me, but we're all still figuring out all the details. And if you know me, you know it's jump off the cliff and assemble the plane as quick as possible, and hope you do it before you reach the bottom. And students laugh, and then they see it and they're like, oh wow, this really is happening. We're two and a half, three months in, and it's exploding. It's an incredible opportunity.

SPEAKER_00:

Yeah, fantastic. Well, we'll look forward to seeing how it goes over the the next period. Uh finally, Ryan, I guess just to close, you you raised at the start of our conversation here um James Bond or Mission Impossible. What kind of spy would you be?

SPEAKER_01:

I mean, for me, I always think the Mission Impossible theme song is easier to sing. So: dun dun dun, dun dun dun. And when I make my gun, I clasp my fingers together, whereas apparently James Bond makes the fist and then the hand around it. That's the James Bond gun.

SPEAKER_00:

Ah, yeah. I'll have to go back and have a closer look. So you're a Mission Impossible.

SPEAKER_01:

I feel like, yeah, I don't know if I like Tom Cruise, but I think I can sing the theme song better. How about you? What are you?

SPEAKER_00:

Uh, look, I don't know. I need to give that quite a lot more thought, but I'm quite partial to a nice martini. So we'll see.

SPEAKER_01:

You could ask the people that message you in.

SPEAKER_00:

Yeah, absolutely.

SPEAKER_01:

Yeah, does the podcast have an email address?

SPEAKER_00:

Uh, yeah, well, for the podcast you can reach us at infocanberbusiness.com. So if you've got thoughts on whether I'd be better as a Mission Impossible spy or a James Bond spy, certainly let us know.

SPEAKER_01:

I'm seeing, in the near future, we'll do mock-ups of you and we can have people vote.

SPEAKER_00:

Excellent. Look forward to that. Ryan Payne from the CanBe Behavioural Lab at the University of Canberra, thank you so much for joining me today here on the Canberra Business Podcast. It's been great having a chat. Awesome, thanks so much for having me. Don't forget to follow us on your favourite podcast platform for future episodes of the Canberra Business Podcast. I'm Greg Harford from the Business Chamber. It's been great having Ryan here today. And don't forget, if you want more information about his work and the work of the Behavioural Lab, check out their website at canbelab.org. We'll catch you next time.