
SafeTEA Podcast with Nicola and Deborah
Join hosts Nicola Knobel and Deborah Pitout on 'SafeTea,' a podcast where the conversation about safety gets personal, powerful, and a bit of 'tea' is always spilled! In each episode, Nicola and Deborah dive deep into the world of safety leadership, viewed through the lens of inspiring women in New Zealand and beyond.
At 'SafeTea,' it's not just about policies and procedures; it's about people. Our hosts bring their unique perspectives and experiences to the table, engaging in candid conversations with remarkable women who are reshaping the landscape of safety in their fields. From trailblazing leaders to unsung heroes, each guest brings a wealth of knowledge, experience, and inspiring stories to share.
But 'SafeTea' is more than just interviews; it’s a movement. Nicola and Deborah are here to empower and uplift, turning the spotlight on the achievements, challenges, and insights of women in safety. They delve into topics ranging from overcoming workplace obstacles to the importance of mental health and wellbeing, all while fostering a sense of community and connection.
Whether you're a safety professional, aspiring leader, or simply someone who believes in the power of women's voices, 'SafeTea' is your go-to podcast. So grab a cup of tea and join us for empowering conversations that aim to make a difference, one story at a time.
S1E5: Advancing Health and Safety through AI with Chief Data Scientist Melissa Ingle
Join the ranks of informed listeners by tuning into our latest episode, featuring Melissa Ingle, a trailblazer in the realm of data science with over 15 years of innovative experience. Currently at the helm as a Chief Data Scientist, Melissa shares her inspiring journey from a mathematics graduate to spearheading the content moderation team at Twitter, and further pursuing a PhD. Dive into the evolution of AI and machine learning, tracing their journey from emerging technologies to foundational elements that are revolutionizing industries, including enhancing training methodologies and risk mitigation, especially in health and safety sectors.
Melissa offers deep insights into the transformative impact of AI on global decision-making processes. Her firsthand experiences provide a rare glimpse into the dynamic capabilities of these technologies.
Our in-depth conversation explores the intricate challenges AI presents in enterprise risk management, focusing on critical aspects such as data governance, the appropriateness of models, and the imperative for transparency within AI models. We delve into the significant issue of unconscious bias in AI systems, emphasizing the vital role of workforce diversity and the application of tools like Fairlearn in creating more equitable technologies. Through real-life stories, we highlight the importance of diversity in addressing biases and enhancing the well-being of those leading the charge in this technological revolution—our invaluable data scientists.
Furthermore, we tackle the pressing issue of gender diversity within the data science and broader tech industry. Sharing personal experiences, we shine a light on the challenges and achievements of women in these fields. By discussing the disparity in salaries between genders and advocating for effective policies to promote equity, we aim to pave the way for a more inclusive tech environment. Additionally, we celebrate how AI is transforming health and safety strategies, envisioning a future where technology empowers professionals to foster safer, more efficient workplace cultures.
Don't miss out on this episode for a comprehensive exploration of AI's significant role in creating a diverse, inclusive, and safer world.
Looking for our LinkedIn Page? Find it here: https://www.linkedin.com/company/safetea-podcast
Want to sign up for our newsletter or get freebies? Grab those right here: https://jolly-mode-586.myflodesk.com/safetea
Please do leave us a review! It helps us spread the word and empower others!
So joining us today on our next episode is Melissa Ingle, and she is a data scientist with a remarkable 15-year journey in the field, recently promoted to Chief Data Scientist. Melissa's work is shaping the landscape of data-driven decision making, with a skill set that spans many fancy data-related goodies and machine learning.
Speaker 1:Melissa has a profound understanding of the data life cycle and its implications across various domains. Her journey includes pivotal roles such as senior data scientist at Twitter, where she worked closely with the content moderation team to analyze trends and develop solutions for policy violations. And you're also a PhD candidate, bringing that academic rigor and practical expertise into your work. Her skills in communication, public speaking and presentation have made her a respected voice in the field of data science. Welcome to the episode today, Melissa.
Speaker 3:Thank you so much. I'm so happy to be here and thank you for that generous introduction.
Speaker 1:How did you get into this?
Speaker 3:How? Absolutely. I'm so glad you asked me, because I was thinking about this in preparation. So I got my master's degree in 2017, a Master of Science in Data Science. At the time, there were only eight universities offering master's degrees. I started working towards my degree in 2014 or so because, like you said, I've been in data my whole career, and if you look with the right kind of eyes, you could think, oh my goodness, this is going to change the world. This is amazing. It has so much power.
Speaker 3:Now, I'm not here to say I predicted GPT or anything like that. That's not what I'm saying. But the idea of these predictive analytics that can help you make decisions using these advanced statistical tools was so fascinating to me, and once I started researching it more and more, I had to do it, and I've just been on this absolute journey ever since. I completed my degree in 2017. I teach data science; I just recently taught at San Jose State University and, as you mentioned, I'm getting my PhD right now. It's just this potent mix of computer programming and statistics, but also real-life results. This has actual effects that you can touch and feel, and it really makes an impact on people's lives, and that's what I really, really love about it.
Speaker 1:So, before you got into this, because we've spoken about this before on a different podcast and I nerded out on it because it was so fascinating, what were some of the things that you were doing before you started nerding out on data science?
Speaker 3:Sure, so I got my bachelor's degree in mathematics. So I've always been interested in the math and statistics portion of it, and I began working as a data analyst, which is a great field in and of itself, you know. I really think it helps to support the core mission of many different organizations. We're a data-driven world, and I thought, that's really exciting, I can help to drive the mission and the vision of the company. So I was working before that as a data analyst and sometimes a programmer. You work in that field long enough, you pick up coding, you know, sort of by default. You have to. So tell us a little bit about what machine learning is.
Speaker 3:Absolutely. So machine learning and AI have been around a lot longer than people think. The term itself was first coined in 1955 by a researcher at Stanford University. So, in general, AI is this discipline that attempts to make machines think like human beings, that lets machines mimic human behaviors. A subset of that is machine learning, which includes the specific tools and techniques, so all machine learning is AI. That's things as old as linear regression, which has been around since the 1840s, through these tree-based models, all the way up to these large language models like ChatGPT. The application of AI, that's machine learning. A subset of machine learning is deep learning; that's where you have things like neural nets, and that actually includes these large language models. That is their specific domain. So when people say, I'm using AI today, you're not wrong. ChatGPT is AI, but AI is so much more than just ChatGPT; it has all these other applications.
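The layering Melissa describes, with machine learning as a subset of AI, can be made concrete with the oldest technique she names. The snippet below is a minimal sketch, not from the episode: an ordinary least-squares line fit in plain Python, the kind of "machine learning" that predates the term AI itself.

```python
# Simple linear regression fit by least squares: one of the oldest
# "machine learning" techniques, far older than the 1955 coining of "AI".
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: covariance of x and y divided by the variance of x.
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
            / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

slope, intercept = fit_line([1, 2, 3, 4], [2.1, 3.9, 6.0, 8.1])
print(round(slope, 2))  # slope is roughly 2.01: the data roughly doubles x
```

The same "fit a model to past data, then predict" shape underlies everything up the stack, from this one-liner to the large language models discussed later in the episode.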
Speaker 2:Well, it sounds so interesting. I can hear why you wanted to learn more, Nicola. I'm just thinking about, in layman's terms, for those of us who are not data scientists or analysts, what are some of the practical things that we, as health and safety professionals, could use machine learning or AI or ChatGPT for? Do you think we could use it for the way that we train people across the world? We think about how workplaces have evolved and people working from home and all over the world. And if I think about my environment, where I worked in steel, we had lots of machines, but it was very difficult to get engineers from across the ditch, from Australia, to here, because it was quite expensive, or there was COVID. I'm just trying to think about how we could have used AI to train people to use specific machinery rather than a person, in other words. Yeah, so there's a real wide range of applications of AI.
Speaker 3:The very, very first commercial application of AI was in 1980, XCON. I don't know, some acronym, but it was to help companies order computer systems based upon customer needs. So you fed it the customer needs and it said, hey, I think, based on your needs, this would help you. Going forward, we have an enormous variety of tools that can help you make decisions about: what should I do now, what should I do in the future, what are potential risks that I can mitigate?
Speaker 3:I think this is a really big thing, a growing area in AI, because it's this tremendously powerful tool that, like I said, I firmly believe has the potential to really change the world. But because of that, there's so much that goes into it that you want to make sure it's not having adverse effects on any one group, and this concept in AI is called explainability. It's had a lot of growth in recent years. So, as far as specific effects in steel or any one particular industry: it can help to identify areas where there might be safety risks. It can help identify staffing. We've used it in hospital settings to help predict duration of patient stay, or what factors might indicate customer churn, or what factors make people subscribe and be interested in your product. So you're not just operating in the darkness. It's hard to give a specific answer, but I hope that helps. No, it does.
Speaker 2:Because I also think about the ways of working, right, and the future ways of working. We have so many different generations and people at different times, and understanding how they work and how to set up a sense of belonging. In other words, some people like to work from home, some people like to work in the office at different stages in their life. If I think about our younger generation coming into the workforce, you've got all that neurodiversity, right, and people on the spectrum, and I kind of see it being able to analyze: what do our people really need to be able to feel a sense of belonging in the workplace?
Speaker 3:Can I tell you a couple of quick stories? I want to keep this brief, but maybe in 2015, 2016, 2017, when I got my degree, the idea of quote-unquote bias in AI was seen as somewhat ridiculous. I remember in 2019, we have a US politician named AOC, Alexandria Ocasio-Cortez; she's a US representative. She mentioned this, and she was roundly ridiculed by her opponents in Congress. How could AI have bias? It's computer code.
Speaker 3:In 2018, a researcher at MIT, Joy Buolamwini, performed this study called Gender Shades. There's a really slick video that MIT put out; everybody should watch it. It's five minutes long. She's Ghanaian and she's a Black woman.
Speaker 3:She looked at three commercially available facial recognition software products, and she looked at one specific axis: how good are these models at identifying the gender of the subject? So she fed them millions of faces. Now, when you just looked at the faces of white men, they correctly identified the gender 99.2% of the time. Not bad, right? That's a 0.8% error rate. But when you looked at women, and particularly women of color, the error rate was 20% for two of the models and 34% for the third model. That means it only got it right two-thirds of the time.
Speaker 3:And so what's going on there? Were the programmers explicitly racist? No, that's not what was happening. What was happening is, for the most part, the people who programmed these models were white men. It's still a field overwhelmingly dominated by white men and so they tested the product, they trained it on themselves, on their colleagues, on their coworkers.
Speaker 3:They just didn't think, hey, I should include women and Black people and people of color. It just never occurred to them, because, and it's easy to make fun of them, of course, but we all have these blinders on where we don't consider other sources. And so I think that's one of those things when you're talking about the younger workforce. I have two children, 12 and 15. They're not that far away from the workforce. They'll come in being all the things that young people are: they're non-binary and neurodiverse and all these great, amazing things. We're going to need to make sure we're able to accommodate these needs, to maximize their health and their wellbeing, to make sure they're able to function to the top of their potential. Because don't get confused, we're not doing it because we're woke. We're doing it because a fully healthy, functioning employee can do the best output for the company.
Speaker 2:100% agree. If I think about my family dynamics, I have five children, right? One of them is ADHD, the other is autistic, and they're all coming into their teenage years, all two years apart. And then I have a young daughter who's really, really young. What's that going to look like? And we talk a lot at the organization that I work for about, you know, diversity and inclusion, and what does that look like for our people? Because everybody is different, right? And how do we make our people feel that sense of belonging? So I kind of see this machine learning or AI or ChatGPT as a way to understand the workforce.
Speaker 3:Yeah, absolutely, 100%. So, if I can talk about this concept of explainability, I mean, it's really this umbrella term. It's looking at, you know, risk and bias and fairness and all these things that fall into this umbrella of explainability in AI.
Speaker 3:Depending on who you look at, any of these major researchers or think tanks in AI kind of has their own sort of definitions; IBM, Amazon AWS, Microsoft's Azure, Google Cloud, all these people have their own kind of guidelines here, but they're all the same sort of idea, right? So you want to make sure that you are accountable. You want to make sure that you understand: what exactly is this going to do? What's the impact assessment of this AI? What are the potential negative impacts of this AI? And if this tool allocates any resources, I'm talking finance, healthcare, education, if it's responsible for allocating resources, you need to test the accuracy of that model on these different groups, to make sure that all groups are receiving those resources equally. And I think this is a really important field, a really important topic that we can't ignore.
Speaker 1:You talk about that sense of belonging, and I think, for those people that maybe are at the start of their AI journey, those that are maybe AI-nervous, AI-anxious, you always hear that narrative of, oh, AI is coming, you're going to lose all these jobs, right? Versus harnessing it or leveraging it to better create jobs. What is your alternative narrative to that concept?
Speaker 3:Yeah, first of all, I'm completely sympathetic to this. It's this unknown thing that a lot of people don't fully understand. I have a friend who works in automation, not in AI, and he encounters this a lot as well, and I think whenever you have a new tool, there's a lot of fear around it. I would say that it is most equivalent to the computer, to a calculator, to something that's going to be able to extend your capabilities. The real truth is, these models have great press, right? You have Sam Altman, who's the CEO of OpenAI, saying, oh, it's going to reach AGI, artificial general intelligence, they're going to think like human beings, and everybody's kind of scared about it. But in reality they're not. They always need human input, because of what's called model drift: over time, the things that the model is predicting shift.
Speaker 3:Now, again, I used this example on your other podcast, I hope you don't mind me reusing it. I worked in political misinformation at Twitter, and the political misinformation that was current in, say, 2015 is not the same thing that's out there today. If I build a model in 2015 and it's still, you know, in the US hyper-focused on, let's say, the Clintons, for example, it's not going to pick up the new techniques and tools that are out there now. So I'm not going to sugarcoat it. I do think there's going to be some workforce disruption, but I think companies that are tempted to make these huge, massive layoffs need to understand: you still need a large workforce to regulate these tools and to be able to understand and assess how they work.
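Melissa's model-drift point, that a 2015 model quietly goes stale as the world shifts, can be sketched as a toy monitoring check. This is a minimal illustration with invented numbers, not anything Twitter actually ran; production systems would use proper statistical tests (KS, PSI) rather than a bare mean comparison.

```python
# A crude drift check: compare the mean of a tracked signal in live data
# against the training-time baseline, and flag the model for retraining
# when the relative shift exceeds a threshold.
def drifted(train_values, live_values, threshold=0.25):
    train_mean = sum(train_values) / len(train_values)
    live_mean = sum(live_values) / len(live_values)
    scale = max(abs(train_mean), 1e-9)  # avoid dividing by zero
    return abs(live_mean - train_mean) / scale > threshold

baseline = [0.10, 0.12, 0.11, 0.09]  # e.g. prevalence of a topic in training data
today = [0.30, 0.28, 0.33, 0.29]     # the topic mix has shifted since then
print(drifted(baseline, today))      # a shift this large should flag as drift
```

The point of even a check this simple is the human loop Melissa describes: someone has to notice the flag, diagnose the shift, and retrain.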
Speaker 1:You know, kind of, if you were to take a mirror and look round the corner, what is it?
Speaker 3:it. I love it.
Speaker 1:Take a mirror, look around the corner, see what you can see. Just click a loo. I'm curious to know, like where do you see, you know, where do you see organizations focusing their enterprise risk stuff? Where do you see them looking at that in relation to machine learning and AI and these new technologies coming through? What are some of the things at an enterprise risk level that they should potentially consider or think about?
Speaker 3:Yeah. So I think that right now, you know the big player in the game. Of course, the new kid on the block is this chat GPT. Right, and everybody wants to use it for everything. But you can't use it without doing these impact assessments, without understanding really where the data is coming from, without having data governance and oversight, without doing like, hey, how is this really? Is this model even fit for the purpose that it's intended to use, is intended to be used for? You can't? I just think they need to be really, really careful.
Speaker 3:A thing that I like to think about is the business users should be able to explain the effects of the model. Now, this isn't. You don't need to explain the code or the statistics. That's so about four or five years ago I gave this highly. I thought it was so sophisticated. I gave this presentation and I explained it to my boss and it was the output of this model, and he went to explain it to the higher ups and he didn't. I didn't really do a good enough job of explaining it to them and he missed several important parts, not his fault.
Speaker 3:These things are complex, so as data scientists, we need to make sure that we are able to explain. That comes with the job domain knowledge. We need to make sure that we're able to explain to the people who are using these tools downstream what exactly does this do? What's the effect of this?
Speaker 3:Something that Microsoft does with that also that I think is really really nice is they will let you know whenever you, the end user, are interacting with an AI. You're speaking with a chatbot. They'll just let you know, because too often you get down the line. You're like I just want to talk to you, who I'm being, or I don't understand why I'm getting the run around. It's because you might have a chatbot that's programmed with this kind of limited functionality, and so, at the very least, you need to make sure that everybody is aware of what's happening. You need to allocate resources to training people, to understanding the effects and, again, the biases. We all have biases and we have unconscious biases, things we're not even aware of, and a diverse workforce is really one of the best ways to combat those.
Speaker 1:Talking about a diverse workforce. You mentioned quite a lot of this information has, like the AI almost has an unconscious bias because it's being programmed by these guys, right, sitting in a I imagine them sitting in a room, in a dark room, just with breakfast, right. What do you think, if we look at diversity and inclusiveness, what are some things organizations can do to try and prevent that unconscious bias, not only from, like, a machine learning perspective, but from an organizational perspective?
Speaker 3:Yeah, it's really interesting. So I recently dealt with a client who was trying to hire for a new position. He fed the resumes. He was feeding the resumes through chat GPT and he said, hey, can you rank these in order of who you think is most suitable for the job? And it ranked them in a certain order. Yeah, it ranked them in a certain order.
Speaker 3:And the issue is Google, that chat GPT is trained on public AI is trained on the public Google corpus of works. So whenever you're going to, you should not use a tool like that, especially not to make personnel decisions. You can definitely chest the accuracy of your tool against various and diverse groups. There's actually tools out there that help you to do this, that are written, that are maintained by a diverse set of people. If you're, if you are, an AI person, I want to hit you to, to, to, to fair learn, fair learn. It's a great tool that's out there that can help measure it's free and help measure the effect of your model on various subgroups and pull out those biases that are really hard for us to figure out.
Speaker 3:So I think just being like super conscious and aware of these aren't machines, these aren't. You can't wave a magic wand. It doesn't work that way. You need to be really, really aware of the downstream effects of any kind of use of AI. I'm not negative on AI. I'm very positive on AI. I I think it can again, I think it can change the world, but we just need to be really careful and thoughtful about it, and I don't. I think that's that's missing somewhat these days. I think people are thinking about it more, but not, not, we're really.
Speaker 2:Yeah, I think you're right in terms of thinking about it as from a risk perspective. Thinking about you know if you, if you were to implement it, what would be? So, what would be the inherent risk, what would be your controls that you would have in place, and then what does that residual risk look like afterwards, and thinking about how do we effectively manage those controls to make sure that we don't have any issues going down. I think there could be an unintended consequence about that for the people that need to do that. So, for the data scientists, thinking about their wellbeing and how we can make sure that they, you know, cause it could add extra stress of new systems and understanding it, and what are some of the things that you think that our people who, like we, have data scientists, what do you think that we could do from a wellbeing perspective to help them make sure that they are not getting too stressed or, you know, taking it in their stride and cause. This is all new right, and it does come with some some sort of stress.
Speaker 3:Yeah, that's a great question. I think what I see too often with companies is they put the whole source of the decision-making, almost from top to bottom for data-driven decisions on data scientists, and I don't know that data scientists should be out there running projects unless they have specific training in that. Of course, there are many data scientists too, but unless they have specific training in that, and I really again, not to mention Twitter, but I really liked they had this I worked in political misinformation. We worked directly with this group of people who understood the laws and they could. We would meet with them and they would help us understand what to look for. We worked with the site trust, the trust council, the trust and safety council, and they would help us to understand hey, these are like academics, we see these emerging trends you might want to look out for, and just people that are able and willing to take on some of the oversight of these projects.
Speaker 3:Again, some data scientists can and want to do it, but it's a lot of pressure on any one person and again it goes it's hard to understand. Obviously, it's really really difficult to understand, but I think I do think anybody can understand the effects this is going to this model says that the top three effects, the top three factors for increasing student success are, you know, access to teachers office hours and these three things. And then you want to say, okay, well, how can we help to implement those? And so I think that that's something that we found in one of our studies, so I just mentioned that up, but, like, I think, like that's just like one of the things that we a greater partnership between business and data. I guess that was a really long way to answer, but I hope that answered your question.
Speaker 2:Yeah, for sure, it's just. It just brings me back to everything we do in health and safety around collaboration. I don't feel that we can do it alone, right, so it's best to collaborate with others across the business, and I'm thinking in my head how I can collaborate better with data scientists and understand the work that they do and understand how I can support them from a well-being perspective.
Speaker 3:So, yeah, I love to, maybe I love to talk about AI and data science. I love to to if we are forced to talk about the effects and forced to talk about the reality, everything we do. There's this model called CRISP-DM and it's this model that is the life cycle of the data science model Number one. The number one thing is the business understanding. So you need to understand what exactly is the business goal, and only then do you go to the data understanding, and so every data scientist who's built these models should have a very intimate familiarity with the business goals, and you need to loop around at the end to talk about how those, the model that you built, meets those goals, and then everybody can understand oh, this model says this and it allocates this or it does this, and everybody can understand that and help to interpret that and propagate that forward.
Speaker 1:So you mentioned earlier that data science is quite a male dominated field. It really is. How can more one get into it? But how do we make this more exciting for the ladies?
Speaker 3:Yeah, I know, right, I'm a trans woman and I mainly work at startups here in I'm in San Francisco, silicon Valley, and I'm frequently the only woman on my team and it's a big issue, right. So what do we do? So it's a topic I'm really interested in. I was at a company once and they said, oh, they gave a little presentation about their, the breakdown, the demographic breakdown of their employees, and they said, well, 75% are men, 25% are women, and that 25% was usually in non-technical roles or if they were tech, they were a tech PM. We only have a few coders who are women at our company. So you're like what do you? What do you? What do you do with this?
Speaker 3:Right, I think that data scientists as a profession as a whole, we need to make sure that we are welcoming to women. We need to recruit women. I think it's absolutely okay to say we need more diversity on this team and to go look for that diversity. We all know here, right, I'm preaching to the choir, but when you people the answer is, oh, I need to look for the best candidate for the job, well, it sure is such a coincidence that the best candidate for the job looks exactly like you right. I mean, that just happens time and time again and we need to be really, really careful of our own biases in who's keeping women out.
Speaker 3:Is it women? I don't think so. If the people who don't want to hire and nurture women I've taught at coding boot camps where there's a lot of women, there's justice egress anybody else to get in there and get their hands dirty, especially if you're able to say to somebody who's not technical hey, we're going to work with this and it can actually affect. You have to have this real effect and they get justice. I mean, they're human beings, everybody's human being. We get excited when we hear about these things. So I think that the way it's kind of marketed is often as this it's a tech thing and for some reason we keep women out of check and it's a real problem. So you know.
Speaker 1:Also, it's the chat GPT CV checker that's only checking for by CVs.
Speaker 2:That's crazy. I guess it's in any industry, right? We see it in health and safety all the time.
Speaker 3:Is that right?
Speaker 2:Yeah.
Speaker 1:Yeah, it's a very limited industry Is it real male dominant Very very. It's really interesting, and those top positions as well. We're starting to see more women come through and really take those strong seats at the table.
Speaker 2:But I said the other day to Deb's I was like stuff the table, I am the table, let's go yeah and I think a woman brings another perspective right and just a softer perspective and a more empathetic perspective that can really help an organisation. But I'm just not seeing as many women in those top tier, and I think you're right, it's like I don't know. The world, whatever it is, has put men at the top of the table for many, many years and it's kind of it is changing. I am seeing it change. However, we're still expected to be the admin queens, as I feel. Yeah.
Speaker 1:Let's use a statistic here for a second.
Speaker 3:Right.
Speaker 1:So I I hadn't. This is not the point of what I'm going to say, but I had an article published in quite a well-renowned magazine in New Zealand for occupational health and safety, and in that particular episode or version or whatever of the magazine they had the annual health and safety survey. So I was a little obsessed with the data because, I don't know, I met someone called Melissa not so long ago and now a data-obsessed psychopatist, and so I looked at it and I was really surprised that in New Zealand, more women responded to the survey than men did. Okay, so 53% women. The rest were men or undefined. Okay, cool, I was. I was like okay.
Speaker 1:So I pulled out the rest of the magazines, because I've got tons of them from many years, and looked at this annual salary surveys from every single year that they had published this data, and the year before also, quite a lot of women had responded. The year before quite a lot of women had responded. The roles were differing. It was really interesting. However, the thing that stood out for me probably the most was over the last six years, without change, even though the women are responding more to the survey than the men, the male salary was still $20 to $25,000 more than what the average woman was earning.
Speaker 2:Okay, well, that just explains it right there.
Speaker 1:And I was like, well, that kind of sucks.
Speaker 3:Yeah, it does. It's unfair.
Speaker 1:So there's my little rant for today. If anyone's listening to this episode, pay women appropriately.
Speaker 3:Stop that right now. Absolutely, hear, hear.
Speaker 2:So, getting back to data. I just got a question around how you think organisations can effectively integrate insights into health and safety policies, or other policies or procedures that they have. So, just thinking about taking that data and then making it into something tangible, or guidelines: how do we do that as an organisation?
Speaker 3:Yeah, I mean, I feel like we have a lot of guidelines and we don't always follow our own guidelines, and you need to have data around those guidelines. I want to be careful that data and tech are not the solutions for everything, but I really do think that we need to be accountable to ourselves and to our own standards that we set, and make sure that we're following them. Because we all think we're following them, right? We all have the best of intentions; it's just that we all fall prey to the same things. So I really think that this kind of awareness and accountability is a big part of it.
Speaker 2:Yeah, definitely. When I'm doing my insights and trying to look at trends, et cetera, I always think about what the data is telling me and how I analyse that data to back up what I'm saying.
Speaker 3:Okay.
Speaker 2:I think that's really, really important from a health and safety perspective, because we have so much data on different mechanisms of injury, so much, right? And it's hard to get your mind around; it's hard to know what's the most important thing.
Speaker 3:What do I pick out of this? Yeah, and I mean, again, that's one of the things that data science can do: feature importance can help you figure out, hey, these are the most important factors in injuries, these are the most important factors in that type of thing. Really interesting, yeah.
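Melissa's point about feature importance can be sketched in a few lines of Python. This is a hypothetical illustration: the feature names and incident records are invented, and ranking features by their standalone predictive accuracy is only a crude stand-in for the importance scores a tree model like XGBoost would report.

```python
# Hypothetical incident records: ({feature: present?}, injury occurred?)
# Feature names and data are invented for illustration.
records = [
    ({"night_shift": 1, "wet_floor": 1, "solo_worker": 0}, 1),
    ({"night_shift": 1, "wet_floor": 0, "solo_worker": 1}, 1),
    ({"night_shift": 0, "wet_floor": 1, "solo_worker": 0}, 1),
    ({"night_shift": 0, "wet_floor": 0, "solo_worker": 1}, 0),
    ({"night_shift": 0, "wet_floor": 0, "solo_worker": 0}, 0),
    ({"night_shift": 1, "wet_floor": 1, "solo_worker": 1}, 1),
    ({"night_shift": 0, "wet_floor": 1, "solo_worker": 1}, 1),
    ({"night_shift": 0, "wet_floor": 0, "solo_worker": 0}, 0),
]

def single_feature_importance(records):
    """Score each feature by how often 'injury iff feature present'
    matches reality -- a crude proxy for tree-model importance."""
    features = records[0][0].keys()
    scores = {f: sum(x[f] == y for x, y in records) / len(records)
              for f in features}
    return sorted(scores.items(), key=lambda kv: -kv[1])

for feature, score in single_feature_importance(records):
    print(f"{feature}: {score:.2f}")
```

On this toy data, "wet_floor" comes out as the most predictive single factor; a real analysis would use far more records and a proper model.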
Speaker 2:And then how do we action it, right? So often we get a whole lot of data and then we present it in our reporting. But then, so what? Right, so what are we going to do about it?
Speaker 3:So what? Yeah, yeah. Again, that I don't have the prescription for, but I do think that we have the ideas; we just don't have the willpower to execute on those ideas, and that's the really hard part. Yeah. Nicola, when you mentioned the salary information: this is a few years ago, but I took a bunch of United States labor data, just from the Bureau of Labor, and I did a bar graph of women's and men's salaries, and you can just see men's salaries are consistently like 20% higher over the years. It's a really fascinating thing.
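The bar graph Melissa describes can be mocked up as a quick text chart. The salary figures below are invented placeholders for illustration, not actual Bureau of Labor numbers; the roughly 20% gap is hard-coded into the fake data to mirror her point.

```python
# Hypothetical figures only -- not actual Bureau of Labor numbers.
salaries = {
    2019: {"men": 60000, "women": 50000},
    2020: {"men": 62000, "women": 51500},
    2021: {"men": 64000, "women": 53000},
    2022: {"men": 66500, "women": 55000},
}

def gap_percent(men, women):
    """How much higher the men's figure is, as a percentage of the women's."""
    return round((men - women) / women * 100, 1)

for year, s in salaries.items():
    for group in ("men", "women"):
        bar = "#" * (s[group] // 2000)  # one '#' per $2,000
        print(f"{year} {group:5} {bar} ${s[group]:,}")
    print(f"      gap: {gap_percent(s['men'], s['women'])}%")
```

Even a throwaway chart like this makes a persistent gap visible at a glance, which is the point Melissa was making about plotting the data.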
Speaker 1:It is though right.
Speaker 2:Yeah.
Speaker 1:It is really fascinating. It's like, when did we get to a point where this was a disconnect? You know, it's 2024, when all of this shit should be normalised. We're doing an equal-to, or sometimes better, job than others, and it's like, oh okay, well, we're still worth $20K less. Twenty to twenty-five. Yes, absolutely.
Speaker 2:But who's allowing that? That's the thing, that's the question, right yeah.
Speaker 1:I find it really interesting, because now I'm in an organisation that, you know, only has 2% men in it.
Speaker 3:Oh, wow.
Speaker 1:Which is awesome, like it's a totally different dynamic.
Speaker 3:That's so interesting.
Speaker 1:It's fascinating, but understandably so, right? It's all around women that work with maternal mental health, or maternal health and children and babies, so it's all these amazing nurses. So most of the people that join the organisation are women that want to support this kind of stuff. Yeah. So we've got, like, a handful of men in the office, and I have to be honest, we made a joke about it the other day because there were three men in the kitchen at the same time, oh no, and we were like, actually, I think we need a policy that stops this from happening. Yeah, yeah.
Speaker 3:No more than two. No more than two.
Speaker 1:No more than two at a time, which was really lovely. And, ironically, you look at our data team or you look at our IT team: guess where all the guys are. Yeah, of course. Data and facilities and properties and, you know... So how do we, and this is a question for both of you, how do we shift this mindset that there are specific jobs tailored to guys? You know, properties, IT, construction, that stuff. How do we shift this? Where is this narrative coming from?
Speaker 3:Yeah, it's really an interesting question, because, you know, I think when I was young we were dealing with this too. People were talking about it. There were, of course, women in the workplace, I'm older than either one of you, but there were women in the workplace, and people were like, well, you know what, women aren't aggressive, they're not asking for raises, they're not asking for promotions. And I guess that's not really true. Women are asking for these things, they are pushing for these things, but when it comes to that judgment, they're saying, no, sorry, the judgment of your salary is only this, and the judgment of the male salary is that.
Speaker 3:You know, "I really want somebody with a little more experience for this job." One thing that I did hear that was so fascinating, about job searching, was after Twitter, when I had to look for jobs quite frantically: when people put out job listings, that's just a wish list. I wish somebody had this, and I wish they had this, and ten years in this, whatever. And men will apply to a job even if they have, I don't know, a third of the qualifications, whereas women will not apply to a job unless they have nearly 100% of the qualifications met. It doesn't cost anything to apply. Just apply to the job. Just apply to the job.
Speaker 1:That's so true. It's very true.
Speaker 2:Very, very true. It's almost like we need to be bolder as women, if I think about sitting at the top table and challenging the status quo. I think we need to be bolder, because we have so much to contribute to decision making or risk analytics, et cetera. So I really think that we as women have a responsibility to up our game. Really, we need to be more bold.
Speaker 3:Honestly, I do think so. I want to agree with your point 100% and support it. But one thing I think companies can do that's very concrete is put women who are in tech where people can see them, in these public-facing roles. My company sponsored this series of lectures and presentations and panels, I'm going to just plug my company, and they put me on the panel. I looked: there were 34 speakers, and only four of them were women. Thirty-four; four women. So we need women out there to be representatives of tech. And I tried to tell my boss, I was like, you have no idea, I'm so grateful that you put me out there. And he was like, I put you out there because you're talented. And not every leader is like that. Yes, yeah. And I think companies need to work to put women out there, so that other women can see and other men can see.
Speaker 3:Right yeah, anyway.
Speaker 1:Agreed. Well, okay, let's circle back into machine learning, and I know we're circling all the way back. The more we talk about it, I feel like it can have a massive impact on health and safety careers and how we develop health and safety. And, well, I know for America it's OSHA, right. But I'm really curious to hear what your thoughts are around risk prediction or prevention, you know, that hazard identification, like preventing accidents. What are your thoughts in that space?
Speaker 3:Yeah, I think it's tremendously useful in that space, and here, you know, I always say this as my lecture at the beginning: AI is so much more than just ChatGPT.
Speaker 3:So we have models that are really, really good at doing this kind of risk assessment and prediction, things like XGBoost and other tree-based models that are really, really interesting. What they do is split it out at every single decision point, and they game out a thousand iterations and say: out of a thousand iterations, this is where we found the danger could be, this is where the risk is, and these are the steps you should take to help ameliorate or reduce that risk. We have these tools, they are in use, and I think they can absolutely help. You get data from your government agency or your own company; you just need to make sure it's a big enough data set. Where are the risks actually occurring, where are the accidents really occurring? And you try to generate these risk assessment models that incorporate AI and data science and machine learning, and I think it can be tremendously beneficial.
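Melissa's "game out a thousand iterations" idea can be sketched as a simple Monte Carlo simulation. This is not an actual XGBoost or tree-based model, and the area names and per-area incident probabilities are invented for illustration:

```python
import random
from collections import Counter

# Hypothetical per-area incident probabilities -- invented for illustration,
# not taken from any real model or data set.
AREA_RISK = {"loading_dock": 0.08, "warehouse": 0.03, "office": 0.005}

def simulate(iterations=1000, seed=42):
    """Game out many iterations and count where incidents land."""
    rng = random.Random(seed)
    incidents = Counter()
    for _ in range(iterations):
        for area, p in AREA_RISK.items():
            if rng.random() < p:
                incidents[area] += 1
    return incidents

counts = simulate()
for area, n in counts.most_common():
    print(f"{area}: {n} incidents in 1000 iterations")
```

The output ranks areas by simulated incident count, which is the "this is where the risk is" summary Melissa describes; a real system would learn those probabilities from historical data rather than hard-code them.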
Speaker 1:Do you think it could streamline our incident reporting processes as well? Because when you look at, like, OSHA stuff... you know, we're health and safety professionals. We know that the majority of people hate health and safety admin, like reporting a workplace incident. You always get the narrative of, oh, I'm not going to report my paper cut. It's like, that's not the point. So we know it's not the most loved topic or thing, professionally. Yeah, not the most loved. We're trying to bring a little bit of fun back to it. But realistically it's very compliance heavy, and there's a lot of stuff in it that makes it unfun. So I'm curious to know, again, using our mirror to look around the corner: do you foresee any automated incident reporting? Do you see that there are potentially things that could make it easier for our frontline staff to have more engagement with helping us collect that data?
Speaker 3:Yeah. So my first data science job, when I was still in school, was at this online insurance company here in the United States. They do auto insurance, and we were evaluating these risk assessment models at the state level and even at the zip code level. And one of the things we needed to be really careful about is that, historically in the United States, insurance rates just happen to be higher in certain zip codes, and those happen to be where the economically disadvantaged people live. You know, what a surprise. Well, you need to do these kinds of assessments, just a ground-based assessment of, like, are there higher rates? That's not AI, but what AI can do is look at that data and say, this is where we're trending, this is where we're headed.
Speaker 3:But again, when I was first at this company, I'd press a button, it would run on my laptop, and then I'd have to, like, carry it around to show the different teams. But now we have this wonderful cloud technology, Amazon and Azure, you know, and GCP, Google. Everything can be automated. You publish your report, the data is fed in in a batch process or automatically, and then you have this live dashboard that can show you incident reports and risk assessments and all these really interesting tools. It takes time. I do think we're getting there; it just takes time to get there.
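In miniature, the batch-fed dashboard Melissa describes might look like this. The site names and incident types are made up, and the in-memory string stands in for a real cloud data feed (a scheduled job, a storage bucket, and so on):

```python
import csv
import io
from collections import Counter

# Hypothetical batch feed -- in practice this would arrive automatically
# from a cloud pipeline rather than a hard-coded string.
batch = io.StringIO("""\
site,incident_type
plant_a,slip
plant_a,cut
plant_b,slip
plant_a,slip
plant_b,fall
""")

def summarise(feed):
    """Aggregate a batch of incident rows into dashboard-ready counts."""
    by_site, by_type = Counter(), Counter()
    for row in csv.DictReader(feed):
        by_site[row["site"]] += 1
        by_type[row["incident_type"]] += 1
    return by_site, by_type

by_site, by_type = summarise(batch)
print(dict(by_site))  # {'plant_a': 3, 'plant_b': 2}
print(dict(by_type))  # {'slip': 3, 'cut': 1, 'fall': 1}
```

Point a charting layer at those counts and refresh them on each batch, and you have the skeleton of the live incident dashboard she mentions.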
Speaker 2:Wow, that sounds so much easier than spending, like, three, four hours writing a report, right?
Speaker 2:Right, which is, yeah... if I think about health and safety professionals, that's not what we should be doing.
Speaker 2:We should be on the ground, coaching, talking to people and building relationships and doing all that stuff. So I really do see technology as taking away some of that admin that we think is important. And yes, it is important to trend, and it's important for us to look at what our risks are and how we mitigate those risks. However, I think there are easier ways of doing that through tech or AI or data analytics, and if that can be solved in that way, it frees up our time to work on strategy and building the capabilities of our teams, et cetera. So I just think we as health and safety professionals need to look at it in a positive light and say, right, our role in the past was very admin heavy; we're now moving into that space where we need to build relationships. If we want to change culture, and we want to analyse how different people react to safety, within different generations, or women or men or genders or whatever that is, I think that frees us up to be able to do that.
Speaker 3:That's really it, and I think it goes toward what we were talking about earlier: AI is not here to replace all these jobs. It's here to free you to really do your jobs.
Speaker 2:Yes, I agree, I agree. But you'll get people who still think in the old way, that, you know, I have to do admin. But I really don't think that admin is going to change culture. It's not going to change the way that we mitigate risks in the future.
Speaker 1:Yeah, yeah, I agree, and I also think you know you talk about that admin stuff that we could potentially just slice out right.
Speaker 1:That report writing, you've got things that can automate that sort of stuff. It is really important to have human eyes go over it, because you've got that experience, right? It's like, oh, this is actually important; whereas, oh, I've seen that this is actually not as important, it might look important, but it's not something that's going to have credible relevance later down the track. But you know, things like compliance monitoring, looking for those new legislative changes, or something simple like getting that admin stuff done and dusted. Have you seen this kind of technology being used in emergency management?
Speaker 3:anyway, I'm going to admit I'm not super familiar with that particular field, so I don't know if I'm the best person to answer that question.
Speaker 2:I think I've seen it work.
Speaker 2:I've seen it, you know, looking after the fire department and the medical center at a steel company. For me, using technology to pinpoint where you have your fire extinguishers or your emergency equipment or your exits, it was kind of like using an online plan.
Speaker 2:At any stage you could see if there was an issue in those areas, and then send your fire department to those specific areas if there was a fire, right? So I've seen it work in that way. I've also seen it work with asbestos: thinking about buildings that have asbestos, and marking out where the asbestos is before a contractor comes onto site. Having that kind of digital plan, in other words, is quite helpful, rather than having little stickers all over. QR codes for risk management, I've seen that work too, and it can be very, very helpful because it becomes easier. Rather than saying, right, I've now got to sit down with my contractor and go through the plan to tell them where the asbestos is before they get on there, they can just scan a QR code and know exactly where it is at that time. So again, I've seen it work in those spaces before.
Speaker 3:Yeah, that jogged my memory. I did read a study of a factory floor where they used machine learning to identify potential risk cases and risk assessments, and they said, okay, well, there's a 30% risk in this certain area if these things occur. And it can kind of tease apart the confounding factors that can happen together, where each thing seems safe on its own.
Speaker 3:But if you have these two, three or four things together, you can begin to pry those apart and implement, you know, kind of safety procedures that make sure those do not happen in the same time and space.
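The "safe alone, risky together" pattern from that factory study can be illustrated by comparing incident rates for single factors against factor combinations. The observations below are invented for illustration; a real analysis would mine combinations out of actual incident records.

```python
# Hypothetical observations: (factors present, did an incident occur?)
observations = [
    ({"noise"}, False),
    ({"fatigue"}, False),
    ({"noise", "fatigue"}, True),
    ({"noise", "fatigue"}, True),
    ({"noise"}, False),
    ({"fatigue"}, False),
    ({"noise", "fatigue", "rush"}, True),
    (set(), False),
]

def incident_rate(condition, observations):
    """Incident rate among observations matching the given condition."""
    rows = [incident for present, incident in observations if condition(present)]
    return sum(rows) / len(rows) if rows else 0.0

# Each factor looks safe on its own...
alone_noise = incident_rate(lambda p: p == {"noise"}, observations)
alone_fatigue = incident_rate(lambda p: p == {"fatigue"}, observations)
# ...but the combination is where the risk shows up.
together = incident_rate(lambda p: {"noise", "fatigue"} <= p, observations)

print(alone_noise, alone_fatigue, together)  # 0.0 0.0 1.0
```

That gap between the single-factor rates and the combined rate is exactly the kind of interaction Melissa says the models can pry apart, so procedures can keep the risky combination from occurring in the same time and space.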
Speaker 1:Yeah, for sure, makes sense. I was seeing that in construction as well: you can hold it up like that and you can see, like, the pipes and shit.
Speaker 2:Yeah.
Speaker 1:I think that's genius, because that can just solve a whole bunch of stuff. And I'm pretty sure I saw someone the other day that was talking about first aid kits, and how the first aid kit, like, blinks when stuff is out of date, and I was like, don't hate that.
Speaker 2:Yeah, that was Christine at Restaurant Brands. They've just done that, and that's where I saw it. Yeah. Another company that I talk to a lot is Keepsake and Immersive, and they do a lot of work with AI, where they actually get their contractors going on site to do an induction, but through their phone. So they scan a code and it comes up and it's real life: they're actually on a construction site, on their phone, and it can pinpoint where those risks are and gives them a full induction at the time. So there's a lot out there that I think sometimes we as professionals get scared to use, but, you know, innovation and data and tech, we can use it to our benefit, 100%.
Speaker 3:It's just about getting the word out, getting people familiar with these things, you know. Demystifying it, I think, is a big part of it.
Speaker 2:And analysing the risk, right? Yeah.
Speaker 1:It's all about risk, it's all that risk. So, okay, kind of wrapping up: tell us, what are your leaving thoughts on machine learning, and what would you love for people to know or understand the most about the concept?
Speaker 3:Yeah. You don't have to be an expert to understand what's going on in machine learning. It's designed to affect the real world, and the best models should be fully explainable and understandable to everybody: oh, this model does this, this model tells me this. And if you have a data scientist and they can't explain what they're doing, you need to ask them to go back and explain some more; they're not doing their job fully. Business and data should be connected as seamlessly as possible. It's just another tool that business can use to achieve its goals. In one sense it's new and exotic, but in another sense, it's just a tool that everybody can use.
Speaker 1:Fair enough. So, we have a bit of a tradition over here on our new podcast. We ask you to describe what is currently in your satchel, handbag, purse, whatever. You describe what's in there, and we will guess what it is.
Speaker 3:Oh, wait, wait. I describe it, but don't tell you what it is?
Speaker 2:Yep, we need to guess.
Speaker 3:Okay. So it's sort of a rough cloth case. It's blue, it opens up, and I store something important in it that I need as I'm getting older.
Speaker 1:Glasses case! I feel like I should say, don't be fooled by Debs' sunny disposition. If anyone's getting old, it's Debs. Straight under the bus. Do you love that?
Speaker 2:I just know, that's okay, I'm hitting the big five-oh in three years' time.
Speaker 1:All right, what else have you got in there?
Speaker 3:Okay, let me see, what else do I have in there? Okay, I have a long thin tube with a top that pops off.
Speaker 1:Water?
Speaker 3:No, it's metallic. I sometimes use it before I go into meetings. Good, good guess, though. The bottom can rotate.
Speaker 1:I want to say... my brain at this point, I'm like, what's going on here?
Speaker 3:Sometimes! But that's not what I'm talking about today. It's about this big, the top comes off, I twist it... Oh, it's lipstick!
Speaker 2:Yeah.
Speaker 3:I love the answers, though. This is fun.
Speaker 1:I feel like that's quite fitting for the stuff that you're doing. Did you have any questions for us?
Speaker 3:I'm so fascinated. I came in here not knowing. I mean, I've never worked in health and safety and it's a really fascinating topic and I'm just so glad that you invited me on and we're able to share kind of your world with me. I guess it's not a question. I just really want to thank you for having me on. This is really fascinating.
Speaker 1:Oh, we find you really fascinating, thank you. I'm so glad that we could get you on.
Speaker 3:Me too, me too. This was great.
Speaker 1:Well, it has been great having you. Did you have any closing remarks or questions?
Speaker 2:No, I just think that what I've taken from this conversation, and it's been really, really interesting... I must say I was a bit anxious about ChatGPT and AI, but now I feel less stressed. I feel like, let's be bolder and understand it, and understand the risks that surround it, just as we do risk analysis in health and safety and enterprise risk: know what controls we need to put in place to manage those risks, and then continue to learn. Never stop learning. I think this has been a really interesting opportunity to chat to you, because we're out there to make a difference, but if we don't learn new things and we don't try new things, we're never going to know whether they work or not. So, yeah, that's my closing thought.
Speaker 1:I'm curious to know... I'm assuming you've given ChatGPT a go, Melissa?
Speaker 3:Oh yeah, I use it quite a bit.
Speaker 1:Tell us what has been your most frustrating, because I know I have sworn at it a couple of times and it doesn't like that. What's been your most frustrating answer?
Speaker 3:Yeah, it's really interesting. I was working with my boss, and we have ChatGPT Plus, right? So it's got DALL·E, the generative art component. We were developing this new sort of vision for our company, and we tried to get DALL·E to generate art for it. So we said, well, we have three tiers and it's in the cloud, and it designed this awful-looking thing, like a ziggurat with an actual cloud over the top of it. And we're like, no, no, no, that's not what we want. It just takes your inputs as prompts and draws the literal thing. So the complete lack of figurative thinking in ChatGPT can be quite frustrating to me.
Speaker 1:Yeah, how about you?
Speaker 3:Do you have anything in particular?
Speaker 1:Oh, the one thing that really, like, gets my goat, and I've noticed this a couple of times with it, is if you ask it for a specific number of words, it doesn't give you that number of words. No, no. So I'm like, give me 400 words, and it won't. It'll give me 120, and then I'll be like, can you fucking count?
Speaker 2:Yeah.
Speaker 1:Can you count? And then it'll be like, yes, I can count, this is 120 words. I said, well, is this 400 words? No, my apologies, let me try again. And then it gives you 200 words, and you're like, are you kidding me? I said 400, give me 400.
Speaker 2:And then it goes to 102.
Speaker 3:It's still learning. It will always give you an answer, even if it doesn't know, and I find that really kind of frustrating and a little dangerous. Coders use it quite a bit; I use it for code: I can't figure this out, do you have any ideas? It will give you an answer, but it won't always be the right answer. So you need to be careful with it, just like anything else.
Speaker 2:I guess it will evolve right.
Speaker 1:Yeah, of course, like anything, it will get better.
Speaker 3:Yeah, it will get better. Yeah, we're still in the infancy of this. It's a brand new thing.
Speaker 1:Yeah, the thing that I've really enjoyed it for recently: I've started listening to a new podcast that I'm really enjoying, but sometimes the episodes are really long, like maybe two, two and a half hours long, and I'm like, I ain't got no time for that. So I take the transcript and I say, pull out the key messages and the key data from this transcript. And it usually gives me a pretty good summary, and I'm like, oh okay, cool, this was actually a really interesting podcast. Thanks for that.
Speaker 2:Yeah, that's good. Wow, I loved it. Cool.
Speaker 1:I'm going to save a whole two and a half hours of listening.
Speaker 3:There you go.
Speaker 1:Anywho, thank you for coming on today.
Speaker 2:Melissa.
Speaker 1:To all our wonderful listeners. If you haven't clicked like, follow, share, subscribe, find us on LinkedIn, find us on all the really good places. Where can we find you, melissa?
Speaker 3:So I'm on LinkedIn. I am a social media addict, so I'm on all of them. I'm on X and Blue Sky and Threads and Instagram. You can find me on all of them.
Speaker 1:You're on Instagram. How are we not following each other on?
Speaker 3:Instagram. Oh yeah, we have to. That's right, we will have to fix that. I will come hunt you down later. Be ready for dog pictures.
Speaker 2:Oh, I can see it in the background.
Speaker 1:So cute. So, yes, thank you so much for coming along. Thank you. We'll catch you next time.
Speaker 3:Yeah, yeah, I'd love to chat again. Bye, bye-bye.