
Auditing with data: for Performance Auditors and Internal Auditors that use (or want to use) data
The podcast for performance auditors and internal auditors that use (or want to use) data. Produced by Risk Insights.
58. The final episode
This is the final episode of this podcast.
It includes:
- an explanation of why the podcast is coming to a close
- snippets from a selection of prior episodes
- a brief introduction to Algorithm Integrity Matters - the new Risk Insights podcast.
About this podcast
The podcast for performance auditors and internal auditors that use (or want to use) data.
Hosted by Conor McGarrity and Yusuf Moolla.
Produced by Risk Insights (riskinsights.com.au).
Thanks for tuning in to what will be the final episode of Auditing with Data, previously known as The Assurance Show. After much thought, we've decided to wrap up this podcast. I wanted to take a moment to explain why and share some exciting news. So, a couple of reasons. First, my co-host has moved on to an exciting new role. Conor has joined the partnership at BDO in Australia. While we are thrilled for his success, it does mean he can no longer co-host the podcast. Secondly, our focus at Risk Insights has been shifting for a while now. We've been moving more toward independent audits and away from internal audits, which ended up comprising a fairly substantial portion of this podcast. This, however, has led to a new project that I'm excited to tell you about. I'm launching a new podcast called Algorithm Integrity Matters. The focus of that podcast will be ensuring fairness and accuracy in algorithmic systems, especially in financial services. If you've enjoyed Auditing with Data and want to stay up to date with these topics, I'd love for you to check out Algorithm Integrity Matters. You can find it on all major podcast platforms or via the Risk Insights website, and I'll put some details in the show notes.

Before we sign off, I want to thank you, our listeners; your engagement, questions and support have made producing this worthwhile. It would also be remiss of me not to thank Conor for the almost four years of recording together and just over 50 episodes. I really enjoyed those conversations, planning the podcast, doing all of those weird and crazy things that go along with it, talking to guests, et cetera. So Conor, thank you. A special thanks also goes out to all our guests who shared their expertise and insights. Thank you for being part of Auditing with Data. I hope you'll join me on the new adventure with Algorithm Integrity Matters. Before I sign off: the reason this has come through as a lengthy-ish episode is that I've compiled a range of snippets from prior episodes that I thought were quite interesting. Those will follow. But until next time, this is Yusuf signing off.

So this is a collection of highlights from Auditing with Data, previously named The Assurance Show. It ran for about four years, from February 2020, which was right before the pandemic hit all of us. Over that time, we've had various guests and listeners from 487-odd cities in just over 90 countries. The episode highlights you'll listen to today are from 10 to 15 of the more popular and memorable episodes. About half of them are guest episodes and the other half were the co-hosts, Conor and I, chatting or, as they say here in Australia, having a yarn.
Narrator:Welcome to The Assurance Show. This podcast is for internal auditors and performance auditors. We discuss risk and data focused ideas that are relevant to assurance professionals. Your hosts are Conor McGarrity and Yusuf Moolla.
Yusuf:So we start with the very first episode, and the introduction to that really. That episode discussed quality in the use of data and analytics as part of audits.
Conor:Morning Yusuf. How are you today?
Yusuf:Good morning Conor. Good thanks and yourself?
Conor:Good, thank you. Looking forward to our chat this morning. What are we going to be talking about?
Yusuf:Today we're going to talk about three things to consider in ensuring quality in the use of data and analytics as part of audits. In episode 16, Conor and I spoke about the power of small data for audits.
Conor:What's the second thing we should think about?
Yusuf:So the first one was the easy part: get a bit more data. The second one is really interesting, and this is probably where most of our time ends up getting spent. This is where we augment the data that we have by combining it with other proprietary data sets. Proprietary, meaning data sets that aren't available in the open domain, or that you have within the organization that you're auditing, or within your own organization if you're in internal audit. Often, if you're just looking at a particular domain, you may not get the full picture as to what is going on with a particular topic, subject, domain or audit area until you start combining your data with other proprietary data sets from adjacent subject matter. So, for example, the easiest example within internal audit that we always talk about, because it's been done so many times, is where we're doing a payroll audit and we bring in procurement data. So we're augmenting the data that we have with other data that can provide us with a view of the original process, but also a view a bit more broadly across the organization. That augmentation is really useful. Augmentation could also be where we combine master data with transactional data, and then transactional data with audit logs: how have the transactions changed, if at all? That augmentation opens up a whole range of possibilities. If we were looking at, for example, something like sales data, and the sales data was reasonably small, we could combine sales data with marketing data to understand what happened before the sales actually occurred, what led up to those sales. And then, if you want to go further down the track, you combine sales data with support data, so complaints data and the like. We'll talk about free text data in a minute, but combining proprietary data sets does give you a much broader perspective than just the individual data for the individual subject matter that you're looking at. In episode 17, we spoke about auditors and artificial intelligence.
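Before the episode 17 excerpt, here is a rough sketch of the augmentation join from the episode 16 discussion above, not taken from the episode itself: joining a payroll extract to procurement data to see which employees also approve purchase orders. The file names and column names are hypothetical, for illustration only.

```python
# Minimal sketch of the payroll/procurement augmentation described above.
# File names and column names are hypothetical, for illustration only.
import pandas as pd

payroll = pd.read_csv("payroll_extract.csv")          # employee_id, pay_period, amount
procurement = pd.read_csv("procurement_extract.csv")  # po_number, vendor_id, approver_employee_id, po_amount

# Join the two data sets: which employees on the payroll also approve purchase orders?
augmented = payroll.merge(
    procurement,
    left_on="employee_id",
    right_on="approver_employee_id",
    how="inner",
)
print(augmented[["employee_id", "pay_period", "amount", "po_number", "vendor_id", "po_amount"]].head())
```

The same pattern, a key-based join followed by a targeted question, extends to combining master data with transactional data, or transactions with audit logs.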
Conor:To summarize, we had three main objectives. The first one was bias and ethics and how important that is, as with all of our audits: making sure that, in developing these machine learning models, there is no bias inherent in how they're put together. The second one is how machine learning can provide decision-making support. You spoke there quite a bit about making sure that the inputs were right, understanding the business properly, and having some sort of fundamental basis upon which we would design the machine learning or the algorithm. The third objective we talked about was where we've actually identified, through our audit work, where AI or machine learning in particular could provide some efficiencies or help the business in terms of overall performance, but those opportunities are not being taken up. Some of the broader considerations were model accuracy and the old favorite of data quality. And then lastly, once these models are deployed, what is the change that needs to happen? All up, there are six key matters that every auditor should be looking at when they're thinking about auditing AI: bias and ethics, decision-making support, opportunities for the use of AI, making sure that there's accuracy in the models, good data going in, and then how is your internal controls environment impacted and how has the business changed as a result of deployment?
Yusuf:We're recording this in late May 2020. We expect that a lot of this will hold true for some time. There'll obviously be some tweaks. What we spoke about were basics. There's a lot more, as auditors use AI more themselves, or as auditors audit AI more, over time. So for both performance auditors and internal auditors, there's a lot more detail that we'll get into. And in five years' time, or in ten years' time, some of this might have changed, but for now this is what we have, and this is what we need to use. Our first guest episode was with Josam Watson, the Chief Risk Officer of Tyme Global, a digital banking fintech based in Singapore. So obviously a lot of conformance work has been the backbone of internal audit plans and internal audit strategies, and performance is something that we need to focus on a lot more, we should be focusing on a lot more, and we need to focus on a lot more going forward. What do you see as the role of the use of data in helping advance that?
Josam:If you look at the skill set of internal auditors, the majority would have a background similar to mine. So you get this homogeneous group of people, and you expect them to do things differently. You expect them, the business expects them, to understand the business. You expect them to be very good communicators and influencers, and you expect them to do everything; they're troubleshooting. That's a challenge. And we may be asking too much of our people: you select a certain group of people with a certain kind of mindset and you want them to do everything. I would advocate for more diversity in an internal audit or risk function, so that you start to get skills that are reflective of what the organization does. One of those things would be to do things smarter, in a way that reduces operational costs.
Yusuf:Then we fast-forward to episode 34. In this episode, we had a guest, Taka Ariga, who was the Government Accountability Office's Chief Data Scientist and Innovation Lab Director, and I believe still is. Picking up on something you mentioned earlier, but switching to the conversations that have been going on around the ethical use of AI and the ethical use of machine learning (and I'm not going to say algorithm bias, because I personally think that there's more bias in data than in algorithms), what is it that you are doing as the GAO to contribute to that discussion? And secondly, is there potential for reduction in some of the overlap that we're seeing between agencies? There are all sorts of agencies that are creating frameworks around the ethical use of AI, and it seems as though, as opposed to standing on each other's shoulders, we're just recreating things across the globe. How can you, and are you, participating in that conversation to reduce the effort and enable a more efficient and higher quality outcome?
Taka:Your observation is absolutely on the mark. Earlier on in the beginning of the segment, I talked about how GAO lives in an interesting duality relative to AI. We want to use machine learning capability as much as any other organization, but we also know we're going to be called upon to audit these implementations that are coming fast and furious across all different corners of the federal government. We have started an engagement really to look at the question of how we conduct oversight of AI solutions out there, and through our research process and discovery process, you're absolutely right, there are all sorts of governance principles, but they are all at a very high level. You can almost boil it down to a "thou shalt do no harm", which sounds great. But what does that mean for the day-to-day responsibility of the data scientist or the program manager? If you're implementing an autonomous vehicle, those requirements are much different than, let's say, computer vision or mortgage underwriting modeling. What we've done is we actually convened a set of cross-sectoral experts back in September to discuss issues that are relevant to AI oversight. Number one, what are the criteria that we should use to evaluate these solutions? Number two, what are the evidentiary standards that we should consider collecting? Do they include data? Do they include code? Do they include any other technical artefacts? And number three, how do we actually evaluate them? So for example, if we take the code outside of the agency and try to replicate that within GAO, can we reasonably expect that we can produce similar results, or, because of the operational tuning, might something that happens in one agency not necessarily be reflected operationally in another environment such as GAO's? We're tackling all of these conversations, and it was actually a fascinating discussion over two days. We had about 27 or so different experts that really got into the nitty gritty of it. Part of our requirement was that we didn't need the experts to come here to admire the problems. We know what the problems are. We want them to come to the table to really discuss plausible, practical solutions that we could consider. At the same time, we did our own due diligence and looked into various analogues that we could draw upon. So the government of Canada, the OECD, the EU, the UK and Singapore, etc., are all in various stages of experimentation relative to AI governance. We looked into those, we looked into what the literature is telling us, and we convened this forum of experts. And so now we're in that distillation process to say, when we encounter an AI system, what are the practices that auditors will adopt in terms of the kind of evidence that we would collect and the kind of audit procedures that we will apply? Certainly this is just the first step of many that we will have to take. Right now our focus is on all of the common denominators of an AI system. But like I was saying before, there are nuances between different implementations. A subsequent evolution of this AI oversight framework would then take a very specific branching towards whether it's computer vision, whether it is a risk algorithm, whether it's HR benefit processing, etc. Part of it is our recognition that we didn't want to wait for a certain technological maturity in AI before we talk about verification. At the speed at which AI is evolving, if we did that, we would always be playing catch up.
So our concept here is really to co-evolve with the technology, recognizing that we're probably not going to get it a hundred percent right. But the reality is that there are no other voices that we can find where we can say, "Yep, that oversight framework works perfectly. Let's just adopt that." So there's a gap in the conversation, and that gap specifically is that there's a lot of conversation around trusting AI, but not a whole lot about verification of AI. And GAO is in the business of verification. So how do we take that evidence-based approach to do our assessment credibly? We're very much looking forward to the draft report coming out in early 2021. And that's something that we're quite proud of, having undertaken this particular challenge so early on in the existence of the Innovation Lab. We'll certainly have other ideas in the planning stage as well, around blockchain, for example: fundamentally, when it involves that level of cryptography, how will the audit methodology have to adapt to meet those kinds of operational requirements?
Yusuf:In the very next episode, we spoke to Stephen McAteer, who would be essentially Taka's counterpart at the Victorian Auditor General's Office. Stephen is a performance audit data scientist.
Conor:So you mentioned there, the use of a sophisticated technique straight away and as you said, you were thrown into the deep end on that. I'm wondering though, from the auditor's side, are there any common challenges they come to your team with? Are you seeing anything where there's a repeat need to provide assistance, whether it be through certain techniques or other things?
Stephen:Yeah, definitely. And I think that these are the things that are the perfect candidates for being put into the skills that you demand of the performance auditor. The one that I have people reaching out to me the most for is power analysis for doing statistical tests. So it's typically a yes/no question. You're looking at some sort of binomial distribution and you're trying to figure out how big your experiment needs to be in order to give a sufficient level of certainty. And the question usually comes over the fence in the form of "what does n need to be?" And then the question comes back from me: well, what is it you're trying to say? What sentence is it that you want to put in the report? The sentence is going to be of this form. Typically you're not saying "at the 95% confidence interval" and all that kind of jazz, because you want it to be readable, but at some point you're going to be able to make some statement: we are confident that at least 10% of X is bad, or we're confident that at least 90% of X is good, or whatever it is. Then you start working back: how much effort does it take you to get one of these responses? Is it sending an email to someone, or are we going to be able to send the survey out and get thousands back without much effort, or are you going to have to retrieve documents from an archive and do a manual analysis? So the question obviously then becomes, how much work are you willing to do in order to get that extra percentage point of certainty? That's typically the one that comes back. That represents a shift in the way performance auditing is done, and the fact that it comes across the fence so often is really the argument that this should be one of the core skills of a performance auditor. And it wouldn't be everything; it wouldn't be using the real corner-case techniques, like how do you assess that confidence interval when you're getting up to around 99% as your response rate, and things like that. Things start to get really hairy when you get into those really odd corner cases, and that's where we would jump in. But where it's really run of the mill, understanding how many things you need to ask, how many samples you need to take in order to get a given level of confidence, that is a question that every performance auditor is going to be expected to be able to answer in the coming years.
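A minimal sketch of the "what does n need to be?" calculation Stephen describes, using a normal approximation for a binomial proportion. The confidence level, expected rate and margin of error below are assumptions for illustration, not values from the episode.

```python
# Sketch: sample size needed to estimate a yes/no (binomial) rate within a
# chosen margin of error at a chosen confidence level (normal approximation).
import math
from scipy.stats import norm

def sample_size_for_proportion(expected_rate: float,
                               margin_of_error: float,
                               confidence: float = 0.95) -> int:
    z = norm.ppf(1 - (1 - confidence) / 2)   # two-sided critical value, e.g. ~1.96 at 95%
    n = (z ** 2) * expected_rate * (1 - expected_rate) / (margin_of_error ** 2)
    return math.ceil(n)

# To support a statement like "we are confident that at least 10% of X is bad",
# estimated to within +/- 3 percentage points at 95% confidence:
print(sample_size_for_proportion(expected_rate=0.10, margin_of_error=0.03))  # ~385 items
```

The trade-off Stephen mentions then follows directly: tightening the margin of error or raising the confidence level increases n, and therefore the effort needed to collect each response.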
Yusuf:After that, we spoke to Gemma Diamond and Morag Campsie from Audit Scotland.
Gemma:It's going to be a bit of both, and a bit of trial and error, and seeing what works in the best way. At the moment, we've recognized that the central team is working well and is a good way for us to bring those skills together across the organization. And certainly we are looking to invest in that team more, to help us go a little bit further, faster. But what we recognized as well is that we do need to train auditors. What we want is that there are tools that any auditor can pick up and use with minimal training. There might well be some training needed for everybody, but hopefully some auditors who have an interest and want to grow their skills in this area maybe have some baseline skills. So for example, if we talk about the COVID tracker that we've just been doing: that was a really great partnership between our central team and somebody in the audit team who's looking at that, who's monitoring all the announcements, who had looked at building some of those skills but was, they would say themselves, quite rusty. Working with that central team, they've been able to develop those skills and can now essentially be a little bit self-sufficient in maintaining that tool within the audit team. And I think for us, that's a really great model to aspire to: where we see these dashboards and audits which are data heavy, we try to get the teams to be as self-sufficient as they can be, recognizing that the central team's really small, we're talking about essentially three or four people within that team, so there's no way we can support everything that's going on across the organization. So we want to get the audit teams to some level of self-sufficiency with a lot of support, and then essentially to have, as a broad principle, the fact that we want the tools to be able to be picked up by any auditor and used in the course of business, without making anybody fearful of the tools, but just using them as they would any of the tools within the audit process.
Morag:They come into a pool of data champions, or whatever we might want to call them. In fact, one of my colleagues actually said we should call them data PyRates because it uses R; that one's stuck with me. But I think it's quite a good model just to have that core team, but it spans out to a pool of others that can be called upon, and they themselves can help support other auditors, to expand that knowledge going out the way. Yeah, that's what we're hoping, and ultimately having a whole organization that's not scared of doing data analytics, and making them realize that actually data analytics is what we do anyway. They might not realize it or badge it as that, but essentially we're analyzing data every day. That's just what we do.
Yusuf:We spoke to Alli Torban, a data visualization designer.
Alli:There's a few main mistakes that I see people making a lot. And the first one is about audience. When you first start, you have to first define what your measure of success is. You have to make sure both you and your client are pointing towards the same target. And in order to know what your target is, you have to know who your audience is. And in order to do that, you have to ask, who is it for? And then someone might say my manager or HR or something like that. And that's fine. But then I really like to ask what are three words that you would use to describe your audience? Because that starts revealing a little bit more about who they are. So they might say something like they're very busy or they're stressed or they're non-technical and all these words give you hints about what your visualization is going to have to do. And also how long they're going to spend on the graphic. That is a super important question that you have to ask, because if they're going to spend a few seconds because they're very busy or stressed, or if it's something I'm going to be looking at this every single day for hours, if it's an analyst, then that's something very important you need to know about your audience as well before you even start designing anything. Another very important question is what's one thing you really want your audience to take away. And a lot of times it's hard for people to just take it down to one thing. But I like to push it so that they pick one thing so we can aim at that. And then if more things come up, then we can assess later. So really who's this for, a few words to describe them, how long they're going to spend with the graphic. And then what's one thing you want to take away. And then when you have that information, I like to summarize it. And that serves as your success criteria. If you ask those questions, you might get something like, we need a technical analyst to be able to drill down and get XYZ out of this graphic every single day. Or you might have, we need our HR department to see X pattern at a glance in a non jargon-y way. And you can see how having those criteria set beforehand can really change the direction of your visualization.
Conor:Coming to that selection of criteria, is that an iterative process that you have to have multiple conversations before you can arrive at that statement of success and what that looks like?
Alli:I think so. That has been my experience. It's very rare that I ask those questions and people know the answers right off the bat. And if I'm talking to two people, like I'm talking to you two, you guys might not agree on who you're talking to, or you might have to go back and think about it yourselves and then come back to me. I'm in a situation right now where we decided on all this information and then I created the prototype, and they went back and showed it to more people and they realized, actually, we are talking to someone different, now that we've seen it. So it's very iterative. I don't think people should beat themselves up for having to go back to the drawing board or not having the answer immediately. It is hard. It's a hard thing, but it'll be even harder if you skip it.
Yusuf:We spoke to Scott Frank, who's the head of performance audit and IT audit at the State Auditor's Office of Washington.
Scott:Yeah. Voting by mail became a very prominent and hot topic in the US during the 2020 presidential election. But it's something that Washington has been doing for a very long time; all of the elections are done entirely by mail-in ballots. It's something that Washingtonians are very proud of, and they feel really good about what it does for voter participation. We had some legislators who had some concerns, though; they had seen some reporting that Washington had the highest rejection rate for ballots in the country. And that's true, but there's a big asterisk to that, which is that rejecting a ballot, a ballot coming in and being told it's not going to be counted, is sort of a unique artifact of voting by mail. When people vote in person, it doesn't happen. That doesn't mean they might not get turned away for other reasons, but their ballot is not rejected, so it's not counted that way. So only states that vote entirely by mail end up at the top of the list. Washington was at the top, but it was very much in line with other states that vote entirely by mail. That being said, the legislature still wanted us to look into it. They wanted to know why it was so high. They also wanted to know why there were differences across counties. And they also wanted to know, are there any differences across demographic groups? And so, to me, one of the most interesting things with this was to tackle these questions of demographics; in particular, there was a lot of interest in, are there differences by race and ethnicity? This is data that we do not collect on voters. And so we had to come up with a method to analyze ballots and ballot rejections when we don't know the race and ethnicity of the voter, but maybe come up with a way to guess it. And so what we ended up doing was using a technique that comes out of some of the consumer protection agencies that are also looking for patterns of discrimination in areas where the data is not being collected. So we use a predictive algorithm. It takes a person's first and last name, and there's a probability from the census of that first and last name being affiliated with a certain race or ethnicity. You combine that with the fact that we do know, for the voters, where they live, and we know the demographic characteristics of their census block, so that also gives us a probability. And then you merge the two together. You get a pretty accurate, especially across large groups of people, pretty good prediction of whether the person is white or black or Hispanic. And then that we could use as a data point when we were analyzing across all the ballots that were cast: what's the likelihood that there's going to be a difference in rejection rates based on race and ethnicity? Unfortunately, we did see that. We saw that pretty much all of the marginalized communities that we think of with race and ethnicity were, in some cases, much more likely to have their ballots rejected than strictly white voters. That was troubling to a lot of people to have that finding come out. But it also sparked some action: the legislature gave some more funding to have another group dig deeper into the why, because we just weren't able to get at why, but also some money to do just generally better outreach, so that even if we can't figure out why, how can we overcome it and start to lower the rejection rate?
So I think it sparked a lot of action and was kind of very interesting to me as an auditor to come up with such a highly quantitative approach to looking at a question.
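A highly simplified sketch of the probability-combining Scott describes: one race/ethnicity distribution implied by the surname, another implied by the census block, merged into a single estimate. The real BISG-style method applies a more careful Bayesian update; the groups and numbers below are made up for illustration.

```python
# Simplified illustration: multiply the surname-based and geography-based
# probabilities for each group, then renormalise so they sum to 1.
def combine_probabilities(p_from_surname, p_from_census_block):
    combined = {group: p_from_surname[group] * p_from_census_block[group]
                for group in p_from_surname}
    total = sum(combined.values())
    return {group: value / total for group, value in combined.items()}

surname_probs = {"white": 0.60, "black": 0.25, "hispanic": 0.15}  # hypothetical
block_probs = {"white": 0.30, "black": 0.50, "hispanic": 0.20}    # hypothetical
print(combine_probabilities(surname_probs, block_probs))
# roughly {'white': 0.54, 'black': 0.37, 'hispanic': 0.09}
```

Across a large population, those per-voter estimates can then be aggregated to compare rejection rates between groups, which is the comparison the audit was after.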
Yusuf:We spoke to Michael Pickup, the Auditor General of British Columbia.
Conor:I'm sorry, but I'm going to throw back one of your quotes to you and then follow up with a question. You said, in a recent hearing of your legislative committee on finance and government services, "I strongly believe in investing in building not only for today, but for the future as well". So our on-the-spot question to you is: what will a performance auditor of the future look like?
Michael:Oh, good question. So I think that one of the number one qualities I would look for in a performance auditor (and it doesn't necessarily mean that that trait has changed, but I would suggest perhaps that that trait may be more challenging to find) is intellectual curiosity. People who make good performance auditors are those people who may just seem like the biggest pain if you're teaching a class, or if you're a parent, or if you have friends that are like that: people who ask all of those types of questions. So I always look for that. And I'm sure, outside of here, and I include myself in this, those of us who are performance auditors and who are good at this drive people in our personal lives crazy. Because I know I've recently heard, "Can we go in and out of that car dealership without you turning it into a performance audit? Do you really need to be auditing them?" And I'm like, yeah, you know, you can't necessarily shut this off. So I think the performance auditor needs to have that. They need to be with it in terms of technology. Not just able to use Instagram and TikTok, but able to really use technology, particularly if they're coming from disciplines outside of accounting, where you may learn things related to technology and how to use some of these tools. I think no matter what discipline you come into performance audit from, you're going to have to have that ability to use technology. You're going to have to have the ability to collaborate with people, to work together. And I think it's one of the areas to watch, and risks to watch, as we move to more isolated, in some cases hybrid, types of working: we don't want to lose people's ability to collaborate. I think the performance auditor of the future has to really keep an eye, too, on the mandate and the purpose and the objective of what we're doing, particularly as we look to performance auditors who are not accountants, who are very well educated, who are passionate about their area of discipline, no matter what that discipline is. That's part of why they have a master's degree or a PhD or whatever other kind of education they have. But they need to realize that when you come into an audit office, you're not here to develop policy, and not to be frustrated by that. So you have to have that ability to be adaptable, I think, and to be able to say, okay, as a performance auditor, I have to understand my role. I look for agility: those people who can say, okay, we're in a pandemic; look how many pandemic-related audits we did. We did an information report on September 20th, 2020. We had the first pandemic-related report out in Canada. That was by being agile. If I look at some of the other pandemic-related audits we put out, again, it's that agility, which is different from being adaptable. So those, I think, are some of the traits I would think of for the performance auditor of the future. Maybe of the present as well.
Yusuf:Then Conor and I discussed the use of the word analysis. Okay, Conor. So this is the first episode for 2022. I need to keep reminding myself; I keep thinking it's 2021. And today we want to talk about the word analysis in the phrase data analysis. Why this came up is that, just in my own head, and I know in various conversations we've been having, there's often a difference of opinion, or a difference in understanding, of what that phase of the data work that we do as part of audits actually is. So I thought it would be good to have a conversation about it and define it, either loosely or not, and understand where it comes from and what it means. And when we're having conversations with individuals that we report to, or individuals that report to us, and we're talking about analysis, what does it actually mean? So if somebody says to you, go away and do some analysis, or if you're telling somebody, go and do some analysis, what does it actually mean?
Conor:Sounds like one of these topics that you've been thinking about over the holiday break Yusuf.
Yusuf:Ah, maybe a little bit.
Conor:At the back of your head.
Yusuf:Yeah, just a little bit. Through various engagements over the years, it's one of those things that never really got properly defined. And like I said, everybody's got the differences. And so yes, I did think about it a little bit over the break.
Conor:So as with most things, we probably should start with definitions or the history of the word or its etymology and where that comes from. So, where do we start with the word analysis?
Yusuf:Etymologically, it comes from the combination of two Greek root words. The first one is "ana", which is "up", and the second one is "luein". I hope I'm pronouncing it correctly, so to any Greek listeners, apologies in advance for not pronouncing that correctly. But "ana" for up, and "luein", which means "loosen". So when you combine those, in reverse order, it's "loosen up"; that's where it started. Today's episode of the podcast is a little bit different to our normal podcast episodes, in that we are playing a prerecorded interview that we had with Benji Block from the Author Hour. We discuss the book that we released last week, The Data-Confident Internal Auditor. And we go through several things: what the genesis for the book was, what the book is about, and who it's for, that being every auditor using data themselves to deliver on their audits, as opposed to waiting for data specialists or data scientists. So Conor and I spoke to Benji Block on the Author Hour about the book that we had just then published, The Data-Confident Internal Auditor.
Conor:And if you enjoy the content and think the book would be useful to you, you can get it through Amazon.
Yusuf:Yep. And we'll put links to that in the show notes. So here's our interview with Benji Block on the Author Hour.
Benji Block:For internal auditors, developing trends in data analysis and data science can feel less like a wealth of information and more like an avalanche. Still, better use of data provides an opportunity to advance your career by adopting new, invaluable skills. The missing link is the jargon-free guidance that cuts through the noise. The Data-Confident Internal Auditor demystifies the use of data in internal audits through practical, step-by-step guidance, with concepts and tools that are easy to understand and apply. This comprehensive guide shows you how to approach data yourself, without having to wait on a data scientist or a technical expert. Developed over the course of hundreds of actual audits, these real-world approaches and practices are distilled into a simple sequence of steps that will leave you feeling confident and even eager to apply them for yourself. You're listening to the Author Hour podcast. I'm your host, Benji Block. And today we're joined by Yusuf Moolla and Conor McGarrity. They have just come out with a new book, titled The Data-Confident Internal Auditor: A Practical Step-by-Step Guide. Guys, we're glad to have you here on Author Hour today.
Yusuf:Conor and I also discussed simplicity, specifically simplicity within audit.
Conor:Why care about things being made simple?
Yusuf:Auditors want things to be simple because a simpler approach and output of an audit means that you can get to a higher level of quality and be confident in the quality that you're producing. It's always easier to see problems or issues in something that's simple than in something that's complex. We've all picked up complex audit files before, and they're very difficult to navigate, whereas a simple, well-structured audit file and audit report is far easier to review. Why should stakeholders care about something being simple? They don't care about our process, but in a lot of cases they will respond positively where audit reports are made simple, or where audit plans that are shown to audit stakeholders are simpler. This may not happen all the time, but often those stakeholders recognize that simplicity has been created. Over the years, people have become used to seeing things that they have to read, then interpret, then understand. And when you read something that is simple, it is very clearly easier to read, and people recognize that. We've seen, so many times, that shorter reports are recognized, simpler reports are recognized. And so management and audit committees, even if they don't necessarily say it out loud, will recognize that this is not something that is easy to do. People that sit within audit committees, management executives, they've all been there. Quite a few of them would have been within either internal audit or performance audit teams in some way, and so they know what it's like in a lot of cases. And even if they haven't been in those shoes, they've had to create documents at some point, and they know how difficult it is to get to simplicity. It doesn't come naturally for everybody. So even if they don't say it out loud (and hopefully they do), that recognition is there that you've made their life easier and you've made things easier for them to digest.
Conor:So if I'm an audit committee member or a member of the board, I'll appreciate the fact that you, as an auditor have taken a process that might have 50 inputs and lots of mechanisms in the middle and boiled that down to a few key steps that I can see and digest really quickly.
Yusuf:In episode 56, we had Cathy O'Neil, who is an algorithm auditor and the author of the book Weapons of Math Destruction. A lot of your book focuses on auditing algorithms. What exactly is algorithm auditing?
Cathy O'Neil:Well, there are essentially three different types of algorithmic auditing from my perspective. They all center on this question of, does this algorithm work as expected? Algorithms are generally seen as opaque, complicated, sophisticated objects. But if you just think of them as things that take in inputs and put out outputs (usually risk scores, almost entirely risk scores in my experience), then you can ask questions in plain English and see what the answers are. That's the kind of work I do: translating plain English questions into statistical tests. You could say, for example, for a student loan or credit card algorithm that's trying to determine whether somebody deserves a student loan or not: does this treat white people fairly versus black people? Or does it treat black people fairly versus white people? And you have to define what exactly that means. You could say, what is the definition of fair? In this case, maybe we care about false positives or false negatives. You have to make that very precise. You have to say something like, is the false positive rate higher for black people than for white people? And then you have to define, well, what does it mean for two people to be similarly qualified, but one of them is denied when the other one isn't? The work there is defining qualification. Then you do a test, and it's not that different from the tests that we have heard historically that sociologists do. So the sociologists would send equally qualified applications for law students that didn't really exist (theoretical law students) to summer law internship programs. And they would change small amounts of information, like names or some other kind of thing that shouldn't matter theoretically. And then they would see if people with white-sounding or male-sounding names would get interviews more often than people with black-sounding or female-sounding names. And basically the answer always is yes, in the sense that there always is that sexism or racism or ageism or whatever it is that they're looking for. And the question is, how much of it is unacceptable? What is the threshold of acceptability? These are all the questions that you might ask in that situation. I just do it statistically with a computer. And it's actually much faster and easier, because you don't actually have to make up a bunch of applications. You just literally send a bunch of queries to the same predictive algorithm, and the predictive algorithm just spews out the risk scores of everybody involved, and that defines the thresholds. So lots of questions there, obviously, and they're human questions. They're human value questions. Number one, of course, what is race? I mean, race is a social construction, but of course racism is very real, so we have to grapple with that. And we have to grapple with gender. Gender is also, in a lot of ways, a construction, but sexism is also real. And then, what does it mean to be qualified, and why is it false positives or false negatives that you're worried about rather than something else? So there are lots of choices in that. And so really my job, if you think about what I just described, the easy part is the test. My job is mostly to be a translator of values into choices at a statistical level, and then making sure that everyone in the room understands those choices. And by the way, the final thing I'd say is, I never do only one test, because there's ambiguity in those choices.
I would do a battery of tests with all such choices.
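A minimal, illustrative version of the kind of test Cathy describes: compute a risk model's false positive rate separately for two groups and compare. The threshold, groups and data below are hypothetical; as she notes, a real audit would run a battery of such tests under different definitions of fairness.

```python
import numpy as np

def false_positive_rate(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Share of true negatives that the model incorrectly flags as positive."""
    negatives = y_true == 0
    return float((y_pred[negatives] == 1).mean()) if negatives.any() else float("nan")

def fpr_by_group(y_true, scores, groups, threshold=0.5):
    """False positive rate per group, flagging scores at or above the threshold."""
    y_true, scores, groups = map(np.asarray, (y_true, scores, groups))
    y_pred = (scores >= threshold).astype(int)
    return {str(g): false_positive_rate(y_true[groups == g], y_pred[groups == g])
            for g in np.unique(groups)}

# Hypothetical ground truth, model risk scores, and group membership.
rates = fpr_by_group(y_true=[0, 0, 1, 0, 0, 1, 0, 0],
                     scores=[0.7, 0.2, 0.9, 0.6, 0.3, 0.8, 0.55, 0.1],
                     groups=["a", "a", "a", "a", "b", "b", "b", "b"])
print(rates)  # roughly {'a': 0.67, 'b': 0.33}: group "a" is wrongly flagged twice as often
```

The same scaffolding extends to other definitions (false negative rates, selection rates, and so on) by swapping the metric.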
Yusuf:And then the final guest episode, immediately prior to this one, was an interview with Michael Drechnowicz, head of internal audit at TK Elevator. Michael, again, I hope I got your name right.
Michael:My parents are from Poland, so I speak Polish, German, English, and that's also where this difficult surname comes from.
Yusuf:Talking about multiple countries, you said you operate in over a hundred countries. Of course, everybody at some point would have stepped into a TK elevator, or maybe uses one two or three times a week when they go into the office, jumping into an elevator that was provided by yourselves. And, you know, 50,000 employees across the globe. But what was really interesting is you mentioned that your team, so you said you have a team of 16 auditors, spans 10 nationalities. So a lot of diversity just in that itself. Do you want to talk a little bit about your thoughts on DEI and what that has looked like within your team?
Michael:Yeah, I think diversity is extremely important, especially when you do audits in different countries. You also have to somehow ensure that there is some sort of cultural fit as well. I would say currently we are female dominant in the team. And we can also see it in the market, to be honest, because we are currently recruiting, and I would say 80 percent of the people we are actually inviting to interviews, that the headhunters give to us, are female. So I can sense some sort of change in this, because I remember every time a new colleague came, I would say it was 60 to 70 percent male probably. But I can definitely see some sort of switch, and I think this is great, because maybe internal audit in the past was seen as a male job, because of the traveling, because of the stigma. I would say that this is definitely not the case. We also have many, many female colleagues with kids, and we just try to be very, very flexible for them when it comes to traveling. So instead of saying you have to travel in these particular two weeks, we ask them very early what the best week is. Of course, there are sometimes ad hoc investigations or audits where you need the flexibility, but we always try to be flexible. And due to the fact that we have strong markets in China, for instance, we also have an office in China, with colleagues from China as well. But also in our European teams we have really different kinds of nationalities, and we almost do not speak German, so all of our reports are in English as well, even though we are a German company coming from ThyssenKrupp. So it's extremely important, and it's also, I think, value adding, and everybody grows personally, because if you did not have all of these different nationalities, you would miss all of these conversations and aha and wow moments when people talk about how they see things and how their culture is. And yeah, it also creates a great spirit, I would say. It's just great to see it.
Yusuf:And finally, a small sign-off from one of the episodes between Conor and me.
Conor:Thanks Yusuf.
Yusuf:Thanks Conor.
Narrator:If you enjoyed this podcast, please share with a friend and rate us in your podcast app. For immediate notification of new episodes, you can subscribe at assuranceshow.com. The link is in the show notes.