Auditing with data: for Performance Auditors and Internal Auditors that use (or want to use) data

34. Taka Ariga, GAO's Chief Data Scientist & Innovation Lab Director

Risk Insights - Conor McGarrity and Yusuf Moolla Season 1 Episode 34

Taka is the inaugural Chief Data Scientist at the US Government Accountability Office.  
He also leads the GAO's Innovation Lab.  We discuss:

  • What the GAO does
  • How data science is helping GAO
  • Innovation and experimentation at GAO
  • Machine learning assurance

You can reach out to Taka via the GAO website.


About this podcast
The podcast for performance auditors and internal auditors that use (or want to use) data.
Hosted by Conor McGarrity and Yusuf Moolla.
Produced by Risk Insights (riskinsights.com.au).

Narrator:

You're listening to The Assurance Show. The podcast for performance auditors and internal auditors that focuses on data and risk. Your hosts are Conor McGarrity and Yusuf Moolla.

Conor:

Today, we've got a special guest on the show, Taka Ariga from the U.S. Government Accountability Office. Taka's the chief data scientist and director of the GAO's Innovation Lab. Welcome to the show, Taka.

Taka:

Thank you, Conor. It's such a pleasure to be with your audience today.

Conor:

Chief Data Scientist, at the GAO. Firstly, for those listeners who may never have heard of the GAO, can you give us a little bit of background about the work of the GAO, and then a little about your role there?

Taka:

Sure, happy to. The Government Accountability Office is an independent, non-partisan oversight agency. We're often referred to as the congressional watchdog. We are unique in the sense that we have a pan-governmental purview across wide-ranging programs, operations, and policies. Something that we're really quite proud of is our return on investment. For every dollar invested in GAO in FY2019, we were able to return 338 times that investment. So for that fiscal year alone, we're talking about roughly $215 billion of financial benefits that we were able to identify. And over the past 20 years or so, that has added up to more than a trillion dollars in potential savings for taxpayers, in the form of recommendations that we make to federal agencies. So it's a significant agency that does very serious work across a variety of topics, whether they're infrastructure related, funding related, or science and technology related. We're quite proud of our track record and the kind of quality products that we produce. And especially in the COVID era, where we really have had to be flexible and adaptive, we have a hundred or so COVID-related oversight products ongoing right now. And that's in addition to all of the regular work that we're working through. So it's definitely been a challenging time, but I'm so proud of the GAO, of my fellow colleagues who have really stepped up to the plate to do the work that the American public expects us to do. Now, relative to my role at GAO, I'm the first Chief Data Scientist appointed by the Comptroller General of the United States. And I'm very honored to also lead our newest venture, the Innovation Lab. The Innovation Lab is really a recognition by GAO that, for us to thrive in what we call the fourth industrial revolution, we need more tools in our toolbox. A lot of the traditional audit methods are not necessarily able to cope in the era of algorithms, blockchain, 5G, and the many other implementations that we're called to assess. It is a very interesting duality that we have to tackle: on the one hand, like many organizations, we aspire to adopt AI, blockchain, 5G, AR, VR, etc. to help our mission teams do their work at greater scale and efficiency, and to dig deeper into the research questions. But we also need to figure out how we might conduct audits of these technologies as they're implemented by the federal government. Those methodologies have a unique flavor to them. So a lot of what we do is experimentally figure out how these capabilities might apply to GAO, and also think about the kinds of oversight questions that need to be incorporated into our audit methodology. Never a dull day, and I'm very proud of the great work that we've already been able to accomplish in less than a year of existence.

Conor:

How has the Innovation Lab been set up within GAO? Is it an internal service provider or how does that work?

Taka:

That's a great question. It is internal, almost like a service provider. We very purposefully established the Innovation Lab almost like a research and development entity, separate from our production system and not directly integrated within our audit teams. Part of that is to protect ongoing engagements, to make sure that we don't impact either the quality or the timeline of those engagements. And we have the latitude to take a parallel path and really think about the art of the possible. Now, when you focus on the art of the possible, there's always a non-zero chance that you might fail. By putting ourselves in that research and development type of environment, we have a much higher tolerance for risk, and that's one of the reasons we're able to move fast and able to pivot. Once we're able to demonstrate a solution, we come back to the individual mission team to ask: how might this new capability fit under the existing audit methodology? One of the reasons we decided to stand up an internal organization is precisely that pan-governmental purview of GAO. It's very difficult to find one service provider that has experience in all of the areas we cover, especially the kinds of programs that are not necessarily appropriate for public disclosure while the work is ongoing. So it's really out of necessity that we in-source a lot of this experimentation work, while trying to drive forward the audit capability that GAO is able to project.

Conor:

So would any of your team have direct interactions on specific audits with the mission teams, as I think you referred to them?

Taka:

They do. The way that we tackle a problem is really to start with the kinds of challenges that individual mission teams are facing. We don't walk around the hallways taking a hammer-looking-for-a-nail approach to innovation. We really sit down and do a series of whiteboarding sessions on a given idea, to try to scope it, to try to expand it, to try to play devil's advocate on whether those are in fact the right questions they're asking. And we really start putting some thought into how we might approach answering this particular problem, without being too concerned about audit methodology or some of the constraints that we operate under. Part of our success is our ability to integrate our own flavor of the agile methodology, so that we decompose these problems into a series of two-week sprints. As we develop a solution, the mission team themselves can quickly see the progress. But it also allows us to pivot when circumstances change, when requirements change. COVID is a great example of that, where we're having to be very adaptive and very flexible in terms of the changing data environment, the changing oversight requirements, the changing requests coming from Congress. The combination of this problem-centric approach and our adoption of the agile methodology has really driven a new operating paradigm where teams can see progress and quickly pivot. So when we say fail fast, that's exactly how we implement our approach going forward.

Conor:

And all of these problems that you're working on or collaborating on with the mission teams, are they data-focused or data-related problems?

Taka:

Yeah, there's certainly a fair amount of data-related work, in terms of how we might ingest more information in order to define certain researchable questions. There are always questions around how we can analyze the data that we have differently. One example I'll provide: it's probably not surprising for the audit community to hear that statistical sampling is still a prevalent technique. But how might we use data analytics techniques to look at 100% of the available data? That really becomes the underpinning for more prospective capabilities, such as machine learning and natural language processing, where simply taking a sample is not always a sufficient way to proceed. If we can leverage commodity computational horsepower coupled with advanced analytics, that might start yielding different avenues of questions that we can ask. And this directly meets the expectation of Congress: we don't simply do literature research, we don't simply interview key stakeholders, we actually dig as deeply as possible given the proliferating volumes of data all around us. But certainly there are process-related challenges presented to us all the time. For example, how might you automate certain financial functions, certain analytical functions? Data engineering is a good example of that. If there is repetitive engineering work that needs to be done, can we automate it as much as we can? Again, I'll go back to COVID as an example. Given the various streams of epidemiological data, funding data, etcetera, at least in the United States, the way those data are presented has changed dramatically over the past six months or so. So how can we adapt our automation so that our effort is not spent on piecing together these data, but on analyzing these data and being able to opine on what they mean relative to operations, programs, funding, fraud, and so on and so forth?
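
To make the sampling-versus-full-population point concrete, here is a minimal sketch in Python with pandas. The file and column names are hypothetical, and the two tests shown (duplicate payments and per-vendor outliers) are generic illustrations of full-population analytics, not GAO's actual procedures:

```python
# A minimal sketch of full-population testing in pandas. The file name
# and column names (vendor_id, invoice_no, amount) are hypothetical.
import pandas as pd

payments = pd.read_csv("payments.csv")  # the full population, not a sample

# Flag potential duplicate payments: same vendor, invoice, and amount.
dupes = payments[payments.duplicated(
    subset=["vendor_id", "invoice_no", "amount"], keep=False)]

# Crude outlier screen: amounts far above each vendor's own average.
grp = payments.groupby("vendor_id")["amount"]
z = (payments["amount"] - grp.transform("mean")) / grp.transform("std")
outliers = payments[z > 3]

print(f"{len(dupes)} potential duplicates and {len(outliers)} outliers "
      f"across all {len(payments)} records")
```

Because every record is scored, the output is a complete exception population to investigate, rather than an extrapolation from a sample.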

Conor:

So that seems like a very extensive program of work that you have there. How many people have you got in your team in the Innovation Lab?

Taka:

Yeah, we're quite proud of our track record of being a force multiplier. Right now, we're at eight full-time equivalents, with two interns who are helping us. Even within the first year of existence, we're now embarking on 20 or so different projects across different teams. And certainly we're continuing to grow. We continue to hire, so I imagine we will start to be able to tackle an even greater volume of challenges. We're also coordinating with our inspector general community and our state and local auditors, and engaging at the international level with organizations such as INTOSAI, the OECD, and the World Bank as well. The idea here is that whether there are success stories or lessons learned coming out of the Innovation Lab, we intend to share them so that everyone can benefit from the work that we've done.

Conor:

You mentioned previously that your team sometimes provides information or analysis to Congress, so that's an important external stakeholder. How is your work split between internal demands from the mission teams and the external-facing analysis you do?

Taka:

For the most part, our demand right now is internally generated, and that is mostly by design. Since it was GAO resources that were used to establish the Innovation Lab, we want to make sure that we first and foremost support our mission teams, making sure that our teams themselves are culturally becoming more familiar with the concepts of agility and flexibility, and even concepts like cloud computing and machine learning and what they can do. We certainly intend to expand our scope. We're starting to interface directly with Congress in the form of technical assistance. This is not necessarily in the form of a formal GAO report; it's more of an interactive, contemporaneous conversation on certain topics of interest to Congress. To give an example, there was a congressional inquiry about why certain models produced certain results depending on the techniques applied. One was much more statistical in nature, projecting forward; the other was much more empirical observation; so they were projecting different results. We provided technical assistance in explaining what each of those models can and cannot tell us. And that was something we had to do quickly, rather than waiting several months before publishing something.

Yusuf:

Picking up on something you mentioned earlier, and switching to the conversations that have been going on around the ethical use of AI and machine learning (I'm not going to say algorithm bias, because I personally think there's more bias in data than in algorithms): what is the GAO doing to contribute to that discussion? And secondly, is there potential to reduce some of the overlap that we're seeing between agencies? All sorts of agencies are creating frameworks around the ethical use of AI, and it seems as though, instead of standing on each other's shoulders, we're just recreating things across the globe. How can you, and are you, participating in that conversation to reduce the duplicated effort and enable a more efficient and higher quality outcome?

Taka:

Your observation is absolutely on the mark. Earlier in the segment, I talked about how GAO lives in an interesting duality relative to AI. We want to use machine learning capabilities as much as any other organization, but we also know we're going to be called upon to audit these implementations, which are coming fast and furious across all corners of the federal government. We have started an engagement to look at the question of how we conduct oversight of the AI solutions out there, and through our research and discovery process, you're absolutely right: there are all sorts of governance principles, but they are all at a very high level. You can almost boil them down to "thou shalt do no harm", which sounds great. But what does that mean for the day-to-day responsibilities of the data scientist or the program manager? If you're implementing an autonomous vehicle, the requirements are much different from, say, computer vision or mortgage underwriting models. So what we did is convene a set of cross-sectoral experts back in September to discuss issues that are relevant to AI oversight. Number one, what are the criteria that we should use to evaluate these solutions? Number two, what are the evidentiary standards that we should consider? Do they include data? Do they include code? Do they include other technical artefacts? And number three, how do we actually evaluate them? For example, if we take the code outside of the agency and try to replicate it within GAO, can we reasonably expect to produce similar results? Or, because of operational tuning, might something that happens in one agency not be reflected operationally in another environment, such as GAO's? We're tackling all of these conversations, and it was actually a fascinating discussion over two days. We had about 27 different experts who really got into the nitty-gritty of it. Part of our requirement was that we didn't want the experts to come here to admire the problems. We know what the problems are. We wanted them to come to the table to discuss plausible, practical solutions that we could consider. At the same time, we did our own due diligence and looked into various analogues that we could draw upon. The government of Canada, the OECD, the EU, the UK, Singapore, etc. are all in various stages of experimentation relative to AI governance. We looked into those, we looked into what the literature is telling us, and we convened this forum of experts. And now we're in the distillation process: when we encounter an AI system, what are the practices that auditors will adopt, in terms of the kinds of evidence we collect and the kinds of audit procedures we apply? Certainly this is just the first of many steps we will have to take. Right now our focus is on the common denominators of an AI system. But like I was saying before, there are nuances between different implementations. A subsequent evolution of this AI oversight framework would take a very specific branch, towards computer vision, or a risk algorithm, or HR benefit processing, etc. Part of it is our recognition that we didn't want to wait for AI to reach a certain technological maturity before talking about verification. At the speed at which AI is evolving, if we did that, we would always be playing catch-up.
So our concept here is really to co-evolve with the technology, recognizing that we're probably not going to get it a hundred percent right. But the reality is that there are no other voices we can find where we can say, "Yep, that oversight framework works perfectly. Let's just adopt that." There's a gap in the conversation, and that gap specifically is this: there's a lot of conversation around trusting AI, but not a whole lot about verification of AI. And GAO is in the business of verification. So how do we take that evidence-based approach to do our assessment credibly? I'm very much looking forward to the draft report coming out in early 2021, and it's something we're quite proud of, having undertaken this particular challenge so early in the existence of the Innovation Lab. We certainly have other ideas in the planning stage as well, around blockchain, for example: fundamentally, when a system involves that level of cryptography, how will the audit methodology have to adapt to meet those kinds of operational requirements?
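
The replication question Taka describes can be sketched concretely. A minimal example, assuming two hypothetical files of model scores, one produced in the agency's environment and one reproduced at GAO from the same code and data:

```python
# Sketch of a replication check for an audited model: compare the scores
# the agency's environment produced with the scores GAO reproduced from
# the same code and data. File names, columns, and the 0.5 decision
# threshold are all hypothetical.
import pandas as pd

agency = pd.read_csv("agency_scores.csv")   # columns: case_id, score
gao = pd.read_csv("gao_rerun_scores.csv")   # same schema, GAO re-run

merged = agency.merge(gao, on="case_id", suffixes=("_agency", "_gao"))

# Per-case score drift, and the share of cases whose decision
# (above/below the threshold) flips between the two runs.
drift = (merged["score_agency"] - merged["score_gao"]).abs()
flips = ((merged["score_agency"] >= 0.5)
         != (merged["score_gao"] >= 0.5)).mean()

print(f"max drift {drift.max():.4f}, mean drift {drift.mean():.4f}, "
      f"decision flips {flips:.2%}")
# Large drift or frequent flips would prompt questions about operational
# tuning, environment differences, or undocumented preprocessing.
```

The point of such a check is not a pass/fail verdict; it is evidence about whether the system's behavior is reproducible outside its home environment, which is exactly the evidentiary question the expert forum was weighing.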

Narrator:

The Assurance Show is produced by Risk Insights. We work with performance auditors and internal auditors - delivering audits, helping audit teams use data and coaching auditors to improve their data skills. You can find out more about our work at datainaudit.com. Now, back to the conversation.

Yusuf:

So we spoke about a range of different agencies that use data and that you interact with, and there'd be some sharing with those agencies, depending on memoranda of understanding, etc. Internally within GAO, how do you see the way in which data is governed? In particular, the balance you need to achieve between security, making sure that very sensitive data sets are secure, and making sure that data can be shared across the organization for use by several auditors or audit teams.

Taka:

Yeah, thank you for that question. It's a topic that's near and dear to my heart. We're actually starting a data governance effort. It's a transformational effort that tries to accomplish a couple of things. One is making sure that we have a single source of truth, and only a single copy of that truth, but also that we can widely enforce data access policies. So, for example, a member of my team who belongs to, let's say, the DOD portfolio may be able to access certain data. But for me, sitting in another part of the agency, while I may see that the data exists, I'm not able to provision it, because I don't have a need to know. At least I can see the existence of the data. That's on the mission data side. But we also want to make sure that we ingest publicly available information, so that we can really start promoting this whole notion of data mashing. I think the more data sources you combine, the more opportunities you have to start asking interesting questions. I'll give you one example. Back when Hurricane Katrina was causing havoc, a lot of oceanographic and seismic data could be combined with sales data from the commercial sector, as well as Instagram photo postings. You can be much more granular about where the flooding actually occurred, and ascertain, where it didn't occur, whether some of those loss claims are legitimate or might be fraudulent. By looking at a singular set of data you would never be able to answer that question, but by combining three, four, five different sources, some of them open source, some of them government data, you can now start asking really interesting questions. So we're trying to do something along those lines as well. GAO has traditionally really protected agencies' data, because we have to. Our pan-governmental purview is not a given. While it is statutorily mandated, if GAO doesn't actually safeguard agency data, agencies will not give it to us even if the law says to do so. So we've always safeguarded individual agencies' data. There are some agencies that have a very specific memorandum of understanding on how we access their information. Taxpayer information and healthcare information are examples that carry heightened sensitivity. Beyond that, there is classified data that GAO has purview over as well. So this is just taking that journey further, to strengthen our data governance in general, so that we can promote this cross-boundary analysis without sacrificing the protection that we have to place on individual teams' data.
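
The Katrina example translates naturally into a small data-mashing sketch. Assuming two hypothetical datasets, loss claims and an observed flood-extent table keyed by area, flagging claims from areas with no recorded flooding might look like this in pandas:

```python
# Data mashing sketch: combine loss claims (one source) with observed
# flood extent (another source) to flag claims from areas with no
# recorded flooding. All datasets and column names are hypothetical.
import pandas as pd

claims = pd.read_csv("loss_claims.csv")   # claim_id, zip_code, amount
floods = pd.read_csv("flood_extent.csv")  # zip_code, flooded (True/False)

merged = claims.merge(floods, on="zip_code", how="left")

# Treat zip codes with no flood record at all (NaN after the left join)
# as not flooded. Claims outside observed flood areas are not proof of
# fraud, only a prioritized population for human review.
flooded = merged["flooded"].fillna(False).astype(bool)
suspect = merged[~flooded]
print(f"{len(suspect)} of {len(claims)} claims fall outside observed "
      f"flood areas and warrant closer review")
```

The value comes from the join itself: neither dataset alone can answer the question, but keyed together they turn "was this claim plausible?" into a query.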

Conor:

You mentioned the importance of open data there, and how useful and valuable open data can be for auditors is not well understood; we've got a long way to go to capture that value. Does GAO have a direct focus on trying to source and harness open data as part of its work?

Taka:

We absolutely do. It's very heartening to see that within the executive agencies, under the Federal Data Strategy, more and more agencies are making their data available through USAspending and other websites, so that not just GAO but any member of the public can essentially download the data and draw their own conclusions. So I think that is very encouraging. For GAO, we certainly rely on a number of open source datasets. One example is FPDS, the collection of all contract award information that the General Services Administration maintains. Certainly that is a useful source of analysis for us to understand the kind of spend and where it's going. There are also single audit databases that we tap into to better understand the patterns and trends of different kinds of audit findings; those can help inform the kinds of researchable questions that we need to include in GAO's own assessments and evaluations of similar programs. Those are just two examples. We also look at the Federal Register, which is the publication of all regulatory activities, to understand what is mandated, what is not mandated, what is in a proposed state, and what GAO should be aware of coming down the pipeline. And another example is regulation.gov, where the public has an opportunity to comment on proposed regulations. You can imagine that being a high noise-to-signal-ratio problem. Certainly a lot of passionate folks will submit relevant insights, but there is a significant volume of submissions that are, I'll just say, less relevant, right? Without asking our individual analysts to read through every single one of these, how can we apply natural language processing to prioritize and triage those public comments that may be of higher value and higher relevance, review those first, and then get to the rest of the population? That's another example where we want to make sure that we can integrate as many open data sources as we can into our analysis, but not if that's going to tie everybody up in knots over how to deal with the data quality or the data volume. So it's really applying data science in a way that not only opens the aperture to ingest more data, but also analyzes those data in a meaningful and rigorous way that can actually generate interesting findings. Otherwise this is just sending people down a different rabbit hole that may or may not yield good results.
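
As a sketch of that comment-triage idea, one simple approach is TF-IDF cosine similarity between each comment and a short description of the topic of interest, then reviewing comments in descending order of relevance. The topic text and comments below are invented placeholders, not real regulation.gov data:

```python
# Triage public comments by relevance to a topic, using TF-IDF cosine
# similarity. Topic text and comments are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

topic = "proposed rule on small business loan reporting requirements"
comments = [
    "The reporting burden on small lenders will be significant.",
    "First!",
    "Loan-level reporting thresholds should be indexed to inflation.",
]

vec = TfidfVectorizer(stop_words="english")
matrix = vec.fit_transform([topic] + comments)  # row 0 is the topic

# Similarity of each comment to the topic description.
scores = cosine_similarity(matrix[0], matrix[1:]).ravel()

# Review the most relevant comments first; defer, don't discard, the rest.
for score, text in sorted(zip(scores, comments), reverse=True):
    print(f"{score:.2f}  {text}")
```

A production version would be more sophisticated, but even this ordering step turns an unreadable volume of submissions into a prioritized queue for analysts.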

Yusuf:

So you mentioned the use of NLP there, and that's really interesting.

Taka:

We certainly have been busy applying NLP in a number of ways. One specific example: we're now looking inward at years of GAO publications to see how we might measure the sentiment of our own writing. We're also looking at whether we can turn NLP into more of a robust search engine. So next time Congress asks us to tackle a topic X, we can go back and say: related to topic X, GAO has already done work in these areas, so let's not repeat the scope; let's see if we can identify new and better researchable questions. Those capabilities are super valuable to our mission teams.
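
A minimal sketch of the sentiment idea, using NLTK's off-the-shelf VADER analyzer. VADER's lexicon was built for social media text, so a real version would likely need tuning for formal audit prose; the sample sentences are invented:

```python
# Score the sentiment of report sentences with NLTK's VADER analyzer.
# VADER is a general-purpose lexicon; audit prose would need tuning.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
sia = SentimentIntensityAnalyzer()

sentences = [  # invented examples of report language
    "The agency failed to implement adequate controls over payments.",
    "The program met its milestones and improved service delivery.",
]

for s in sentences:
    # 'compound' ranges from -1 (most negative) to +1 (most positive)
    print(f"{sia.polarity_scores(s)['compound']:+.2f}  {s}")
```

Run over years of publications, paragraph by paragraph, scores like these could reveal drift in tone across topics or time.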

Conor:

You've mentioned the art of the possible several times, and making staff aware of things like NLP that may be useful in their work. How do you get that message out to the people within your organization, about what is possible and what they can do, or what your team can do, to support them in doing their work better?

Taka:

Counterintuitively, I don't have to, and I'll explain why. When I first joined GAO, I was very excited about the mission of the Innovation Lab. But there was a question in the back of my mind: when I hang out that open sign, am I going to hear nothing but crickets? In which case, you're absolutely right, I'd need to do a bit of "business development" to make sure people are aware of who we are and what we can do. Categorically, that has not been an issue. We have more requests coming in than we know what to do with. And because right now we only have eight full-time staff within the Innovation Lab, we actually partly measure our success by how we say no. So we take a very rigorous approach to assessing the kinds of questions that teams come to us with. For example, there may be a request like: I want to make this particular chart more visually friendly. We don't need to occupy a data scientist with that kind of work. We'll try to empower those individual teams: here are the capabilities, and we're more than happy to advise you on how to use them, but we need you to do the heavy lifting. That then allows our data scientists to be force multipliers across multiple engagements. And then there are instances where we flat-out say no to perhaps good ideas that just aren't feasible. But yeah, there's such a pent-up demand for innovation, I think, not just at GAO but certainly at other oversight bodies as well. This is where we want to make sure that we capitalize on that demand. We're now in a renaissance of commodity computational power and advances in data science, where a lot of these things are now possible. We're quite clear about how we approach the issues with a mission team: there's no guarantee of success. One of the reasons for the Innovation Lab is that we can own a higher level of risk. So if we find something good, we'll come back to you, and maybe that will change your audit methodology going forward. If we don't, no loss, right? There's a lesson learned in that; maybe next time we encounter a similar problem, we'll take a different approach. But whether it's a success story or a lesson learned, we consider that part of the innovation lifecycle.

Conor:

You have, as you've described it, a higher risk tolerance, obviously because you're doing innovation, so you need that. And a lot of your work, as you've described, involves experimentation. Have you had any challenges working with assurance professionals who are perhaps used to operating within a very strict and rigid set of standards and guidelines? Any challenges or pushback in bringing them into the experimentation mindset?

Taka:

All the time, all the time. As a matter of fact, this is a very common theme across the INTOSAI community as well. I think auditors are, by their nature, risk averse. So the technology part is largely easier than the cultural transformation and the change management issues that we have to tackle. There are a couple of key strategies that we've adopted. One is the notion I talked about before: how do we take on more risk and, by extension, lower the risk for those mission teams to really think about the art of the possible? If we do our work in parallel to their existing engagement, they can continue down the path as though we never existed, but we can come back later on to say: look, we were able to identify certain methodologies, certain techniques, certain datasets that may be useful next time you have a similar audit problem. And that's worked quite well. Another strategy we adopted is to essentially make it free to mission teams. Nobody wants to be encumbered with budget complexity, deployment, hiring their own data scientists, and trying to figure out how that fits within their own mission areas. So this is where we take all of that off the table: GAO itself has its own data scientists and our own cloud infrastructure. We can help you tackle this really challenging problem, and by the way, we won't even charge your engagement code, so there's no impact to your budget. We go out of our way to make sure that the barrier to entry is as low as possible. It doesn't always work. There are certain types of audits, certain types of engagements, that just don't lend themselves readily to innovation. But we keep trying; that's the goal here. I think auditors are pretty good at thinking in the abstract about all of the possible risks, to the nth degree, to the point that they convince themselves not to take a novel path. We always tell them: understood, but if we de-risk the problem statement, if we make it free for you, if we promise that we will not impact your operation in any way, might you be open to some experimentation? If it doesn't work, you can walk away; I will take all of the blame and all of the lessons learned. And that has worked quite well. If you take innovation, meet people where they are, and address their problems, they're usually receptive.

Conor:

Is it a sign of success for your team when auditors work with you on a particular problem and then walk away from that process with their own personal capability uplift?

Taka:

That is part of the success metrics. We talked about developing prototypes, but eventually we do want to make sure those prototypes lead to some sort of production solution. So there are scaling issues that we need to deal with, and data transfer and technology transfer issues that we have to deal with. But counterintuitively, and we don't know the exact percentage for this, we want a significant portion of our projects to actually fail, because that means we're taking a sufficient level of risk. Whether that percentage is 20% or 30%, I don't know. But if all of our engagements are wildly successful, you could ask the question: are we actually taking enough risk? Ask me again in a couple of years what the right metric for GAO is, whether it's 20% or 30% of the engagements we undertake that should result in some form of failure. Counterintuitively, we actually take that as a good metric of our success.

Yusuf:

When you want to experiment, you want to have, as you said, that level of risk taking. But there's also success in changing the mindsets of the teams that you're working with, and that may go along with the apparent failure that you see.

Taka:

Yeah, failure, it's a tricky word, right? It doesn't mean our data scientists have failed. It doesn't mean the Innovation Lab has failed. It could just mean that we were not able to address the very specific problem posed to us. At that point, we have a decision to make: do we try an alternative method, or do we try to rephrase the original question? Failure doesn't mean we just stop right there. For most of the engagements that we tackle, we intend to write a white paper and make some of these artefacts available. And there may be a version 2.0 of that problem statement that we end up tackling. Even if we're not able to address the specific questions that were brought to us, I consider that a lesson learned, so that we can apply the same thinking differently next time we're asked something similar.

Conor:

Given your role and the interactions you have both with external parties and your internal people, what's your pitch to assurance professionals about why they need to go and engage with the data science people?

Taka:

Yeah, it's all about how you can stay relevant as more and more data become available, and more tools become available to analyze those data. The sampling approach is something we generally accepted back in the '80s, the '90s, even the early 2000s, as the only way to do extrapolation. But computational costs have come down to the point where, really, a hundred percent sample is very much a reality. A lot of open source tools are out there that oftentimes take the cost equation away: R, Python, and all the related packages, for example, are open source. Is it always cost neutral? Of course it isn't. But the goal that we strive towards is relevance. As the fourth industrial revolution continues to churn along, more and more of these legacy, traditional techniques are showing their age. They're not able to keep up. For example, we cannot spend a 12-month cycle generating an AI oversight report; by the time we issue that report, the industry and the rest of the world have already moved on to something else. So how do we stay relevant and continue to use audit as a forcing function for accountability, as opposed to just a compliance exercise? That's one of the reasons I truly believe in the mission of GAO, and one of the reasons I joined. We have a very aggressive innovation agenda going forward, and that's by design. We want to operate with an extreme sense of urgency. And by taking risks, by really asking the tough questions, we think we can usher in a new era of oversight capacity, not just for GAO but for all of our partner organizations.

Yusuf:

You obviously collaborate with a range of other similar agencies, but there would also be performance audit teams, internal audit teams, and a range of other integrity professionals who, I'm sure, would want to get in touch with you and learn more about the work that you do.

Taka:

On the GAO website, gao.gov, there is a series of "Ask the Expert" pages. So if there are data science or innovation-related questions, I'm certainly happy to answer them. If there are accounting standards or methodology-related questions, there are others who can tackle those. We do publish various standards: the Yellow Book is our standard for government auditing, and the Green Book our standard for internal control. I'm quite proud of the fact that we tackle various oversight issues very deeply. And sometimes I think we're too humble about the kind of work that we do. As a matter of fact, there was a study that recently compiled the previous 20 years of GAO recommendations, and one of its findings was that GAO was being too modest about the financial benefits that we identify. That's something we have to work on internally, but that's part of the goal of the Innovation Lab as well: to continue to be out there, working with our oversight community partners to elevate each other's capabilities and continue the exchange of ideas. So we're very much looking forward to the road ahead.

Conor:

Taka, it's been a pleasure speaking with you. Thank you so much for your time, and for all the really interesting work that you and the GAO are doing at the minute.

Taka:

Thank you for having me. It was fun.

Narrator:

If you enjoyed this podcast, please share with a friend and rate us in your podcast app. For immediate notification of new episodes, you can subscribe at assuranceshow.com. The link is in the show notes.
