The 311 Podcast

01 - Governing A.I. with Ashley Casovan

July 20, 2020 | Guest: Ashley Casovan | Season 1, Episode 1

Ashley Casovan, AI Global's Executive Director and former Director of Data and Digital for the Government of Canada, joins host Paul Bellows for a frank discussion on the ethics and responsibilities of governments using Artificial Intelligence technology, and the many uses to which A.I. can be put.

Government of Canada's Algorithmic Impact Assessment
https://open.canada.ca/aia-eia-js/?lang=en

Ashley Casovan on Twitter:
@ashleycasovan

AI Global
https://ai-global.org/

INTRODUCTION

Paul Bellows
Ashley?

Ashley Casovan
Yes

Paul Bellows
Could you introduce yourself?

Ashley Casovan
Yeah. I'm Ashley Casovan, I'm the Director of Data and Digital for the Government of Canada. I'm situated within the Treasury Board Secretariat, and so we're responsible for doing strategy and policy as it relates to supporting departments. Within my portfolio I get to work on enterprise data, so really the enterprise architecture components related to data: how we're collecting, using, managing, storing, and moving data.

The second component is that we're now focused on different types of innovative technologies; the hot one right now is A.I. So, all of the strategy and policies associated with that. I think we'll talk a little bit more about that.

And then, finally, open source. So everything related to how we're thinking about and using open source. That doesn't incorporate the use of open data and the release of that; that's kind of in the broader data context.

It seems like a lot, but they actually are all really related to one another. At first, when I took on this job, I was like, "What? How? How do all of these pieces work, and how can I do all of these things?" But there are significant dependencies on building each of them in a parallel fashion so that we can actually achieve the outcomes that we want to achieve.

So to me I really see working on data and open as the foundational building blocks for outcomes that we want, like open data, but also like artificial intelligence.

Paul Bellows
Yeah.

Ashley Casovan
Yeah.


BACKGROUND

Ashley Casovan
I come from a background of political science and economics. That background and education made me think about community and politics and the role that we play. Actually, I started by thinking, "Okay, well, I'm gonna do stuff that's gonna save the world," as most political science students that I know want to do. But then I took Ian Urquhart's course on Alberta politics, and I became really attuned to the fact that we actually have a lot more local and domestic issues that people who are curious can kind of help resolve and fix, and so that really started me down a path.

I then got a lot more involved in community organizing and worked on political campaigns. When you're working on political campaigns, there's a big difference between when you have data and when you don't, so I was really fortunate to work on the first Obama campaign in 2008, where this concept of evidence-based decisions was at the heart of the campaign. At that point in time, as a lowly political organizer, I did not know what was happening, but I knew that it was different from how we had organized provincial or municipal politics in Alberta at that time.

So, having that comparison, when I got into actual government and bureaucratic roles I was constantly questioning why we didn't have access to data, and where that data was. Which then led me down an increasingly technical path.

So as a result, I worked for the City of Edmonton, built the open data portal there, and also got to see for the first time how government works on the inside. So that was really interesting, as opposed to—

Paul Bellows
(Or doesn’t sometimes.)

Ashley Casovan
Yeah, there's that, for sure. And then I got the opportunity to come work with the federal government and build out open.canada.ca, Canada's open government portal. But when I was there, I realized, again back to these outcomes of wanting to release more data: you can only release data if you first of all know that it exists, you know where it is, and you are able to release it. And what I realized is that, in many circumstances... it's really boring to talk about policies, but we didn't have the policies or the governance in place to allow that outcome to be achieved.

So for me, I get sick of asking why and just want to do something about it. So then I moved over to the enterprise architecture team, where I took on more of a data-focused role, and then again recognized that artificial intelligence was the exact same sort of problem as open government's goals and objectives, the releasing of open data particularly: we want to achieve all of the things that we can with artificial intelligence tools, and because we wanted to really focus on doing that in a transparent and accountable fashion within the government, we brought in that open source component as well.



ROBOT BUTLER

Paul Bellows
So, and I love  that, you know, like the policy exists to protect people, you know, and to protect data, and to protect the systems. You sorta step forward to something that's even more emergent like artificial intelligence, and I think the first question to ask is when do I get a robot butler?

Ashley Casovan
After I do.

Paul Bellows
Okay fair enough, you first then I am second in line. I just want to make sure that we're clear on that.



AI ISN'T EMERGENT ANYMORE

Paul Bellows
So what does AI mean in a government context today, like, just define that for us a little bit.

Ashley Casovan
Yeah, so it's interesting. I was going to pick up on the word "emergent" in your question. One of the things is that, when we're talking about AI, there isn't really a good, globally recognized definition of what we mean when we say "AI". And so for me, I think of artificial intelligence in the broadest terms. That includes data analytics and predictive modeling, but then also things like cognitive automation and machine interactions. That said, I would say that some of those things aren't emergent; we've been doing them for 20 and 30 years. If we just think about cognitive automation, for example, as another type of technology, we still have ways that we've ensured the appropriate use of those, and so I think that we need to really approach them in the same sort of fashion.

And so, from a Government of Canada context, where we're seeing a lot of different work is around predictive monitoring. Transport Canada has put train derailment detectors in the wheels. We have flood mapping for flood and fire prevention. We see it with the potential of predictive mapping around health outbreaks, trying to kind of stop things; the Government of Canada has been really involved in slowing the spread of Zika.

So, different things like this. And it's interesting to think about the federal government versus municipal or provincial government, which I've also been involved with, which is just more tactical and tangible: thinking about transportation, thinking about 3-1-1 calls coming in and how those services get dispatched better.

We kind of have those things with 1-800-O-Canada, but it's often these broader systemic issues that we're dealing with. The ability to use historical data over a long period of time, and to get good quality data from StatsCan to do that type of modeling, gives us the opportunity to make these big systemic shifts that we need. I think Employment and Social Development Canada has recently released an AI strategy, or they will soon, and the stuff that they're doing is really thinking about improving how they provide services for benefits, so that people are getting better access. These are all really important things. So those are some of the things we're doing.



AI NEEDS POLICY

Paul Bellows
What can go wrong with data from a citizen perspective, from a privacy perspective, when we don't bring good policy from government?

Ashley Casovan
Everything!

Paul Bellows
Like what are some of the dangers that we're trying to protect against?

Ashley Casovan
Yeah. Like it's, to me, and I say that in a way, you can't see my face but I'm like "obvious", like “duh”.

Paul Bellows
I can see your face and you have the look of "like obvious, duh".

Ashley Casovan
Yeah, and that's the thing: to me it's so foundational and critical that, whether we're using data to make decisions or we're using it to consume services, you have to rely on good quality data. Sure, it's not going to necessarily be perfect, but it is reliant on having something that's… like, trustworthy data, so that you are getting an accurate service or, again, your privacy's not being breached. That said, in order to ensure that that happens, that's reliant on good policy and governance and management, stewardship, however you want to label it, of that data.

And so, sure, we need policies of all different types to think about how we move [that data], and interoperability and standards associated with that, and we often talk about that when we're talking about “quality of data”. But just even having policies around the collection and storage of that data to ensure that there aren't certain types of breaches from a privacy perspective, but that it's collected also in a way that doesn't have bias, and it has fairness and thoughtfulness into how that's being collected… We're really reliant on good thoughtful policy around that.

How we're approaching this kind of policy concept is really making sure that we're setting the appropriate guardrails around what responsible and ethical implementation looks like, making sure about the data that's being used in order to provide those outcomes. What we're seeing up to this point, when I'm talking about predictive modeling, is that those are mainly for the purpose of making decisions. So what we've done is we've created a directive on automated decision-making, and that applies to this broadest bucket of artificial intelligence by saying that when a decision is being rendered in part or in whole by a machine, then you need to follow this directive. It outlines not only that any data or methodologies associated with that decision need to be released in an open and transparent fashion; it also indicates that you need to do an Algorithmic Impact Assessment.

So the Algorithmic Impact Assessment is that framework that I'm talking about that allows departments, when they're designing these projects, to actually answer questions related to what the impact would be. Like, is it going to have an impact on somebody's health and wellness? Is it going to have an impact on whether or not somebody goes to jail? Is it going to have an impact on whether somebody gets a benefit? All these different types of things provide different risk levels.

And then there are also questions related to mitigation. So, have you done things to mitigate the impact that it would have on the public? How it works is, the first subset of questions gives you a raw risk, and then the second set are kind of like bonus points: you're doing a good job to mitigate that kind of impact. And from that you end up in Category 1 through 4, 1 being the lowest, 4 being the highest. And, treating this in the same sort of fashion that we're treating other types of technology and tools, we're saying, "Have you done training, testing, monitoring of the system?", but for the purposes of artificial intelligence (because in some cases there's that ability to learn and train these models), we're adding in peer review.
And then also: when are you keeping a human in the loop, to ensure that there's human oversight when there's an interest in making these end-to-end automated decisions?

All that to say, we want to really balance innovation and protection of the public. The low risk level versus the high risk level is only to say that we're not going to treat your subsequent actions (the training, testing, and monitoring that I just mentioned) in the exact same way. We're not going to have as much scrutiny over a low-risk type of application versus something that's higher risk that deserves it. But again, we're building off of existing technology and business practices that we already have in place, and not treating it like it's an emergent technology.
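[Editor's note: the scoring flow Ashley describes — impact questions producing a raw risk score, mitigation answers earning "bonus points" that offset it, and the net score mapping to an impact level from 1 (lowest) to 4 (highest) — can be sketched roughly as follows. The question names, weights, and thresholds below are invented for illustration; the actual AIA questionnaire is at the open.canada.ca link in the show notes.]

```python
def raw_risk_score(impact_answers: dict) -> int:
    """Sum the points from the impact questions (e.g. effects on
    health, liberty, or access to benefits)."""
    return sum(impact_answers.values())

def mitigation_score(mitigation_answers: dict) -> int:
    """Sum the 'bonus points' earned by mitigation measures."""
    return sum(mitigation_answers.values())

def impact_level(raw: int, mitigation: int) -> int:
    """Map the net score to an impact level 1-4 (thresholds invented)."""
    net = max(raw - mitigation, 0)
    if net < 10:
        return 1
    elif net < 20:
        return 2
    elif net < 30:
        return 3
    return 4

# Hypothetical project: moderate raw risk, some mitigation in place.
impact = {"affects_health": 8, "affects_benefits": 6, "affects_liberty": 0}
mitigation = {"peer_review_done": 3, "human_in_loop": 2}
level = impact_level(raw_risk_score(impact), mitigation_score(mitigation))
print(level)  # → 1 (net score of 9 falls in the lowest band)
```

The point of the two-stage design, as Ashley notes, is that mitigation reduces the scrutiny applied to a project rather than changing whether the directive applies at all.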

Paul Bellows
Yeah.  So, and just to kind of play back through that, because there's just so much in what you just said, it, it's amazing, you know, and I love that Canada's taking a leadership role here, but let's look at something old school like the Weather Service where we’ve got years of data, then we can look at today's conditions and then we can write some math statements to tell us what tomorrow might be like based on the data we have.

Ashley Casovan
Yeah.

Paul Bellows
…and that's fairly banal, and that data is publicly available. But then maybe you take that same approach, where someone's going to write some math that's going to make a prediction of what's going to happen tomorrow, but now we're looking into children's health data, and you might be able to identify who those children are, who might have had a serious disease. And then, you know, bad people could say, "I want to find the families of those children and I want to sell them a product or some insurance" or something. So that idea of privacy and protection… And so the AIA tool you're talking about, then, is the framework by which we identify how freaked out should we be, and how much oversight should be over this particular case?

Ashley Casovan
Yeah

Paul Bellows
That's really interesting.

Ashley Casovan
And it also, just to add to that, we approach it from the perspective of defensibility. And so we're really trying to just say "Okay, we need to make sure that we can explain how we arrived at that outcome”.

So one of the things that the AIA is not doing, and I just want to be clear on, is we're not saying what’s good or bad, we're just saying “This is the degree of impact,” and then people way above my pay grade are going to make the decision whether or not that is something that is good and should be used and to what level or degree we're releasing the code or the data associated with it. And I should clarify that even when I say we want to release that code in an open fashion, and the data associated with it as well as the report, we're still following the existing Privacy Act, and so we're not going to release private or secure information. It's open by default…

Paul Bellows
Right.

Ashley Casovan
…as opposed to just being closed by default.

Paul Bellows
How's Canada playing a global role with the AIA?

Ashley Casovan
One of the things I should have mentioned is that we developed the AIA in a collaborative, open fashion. So we contravened existing government rules (oops!) and put everything up in Google Docs, but it was a good way to be able to truly do open policy, and we thought for this it was important because nobody's an expert on AI. If you don't know what AI is, how can you be an expert in it?

So, we really wanted to make sure that we were getting industry, academia, civil society, and other orders of government involved in this conversation. So that's what we did, and that's what we're going to continue to do by working with other governments internationally. Some have already adopted the Algorithmic Impact Assessment that we built, and more will continue to adopt it, but they'll also be able to add more nuanced or region-specific questions to it. So we're really excited for that.



WRITING AI POLICY IS HARD

Ashley Casovan
When I first started in this role, I was like, "I'm super fraudulent, I should not be the director of anything related to artificial intelligence." And then on my first day on the job, I was actually on a panel about it, and I was telling my team, "I should not do this," and they're like, "No, no, no, you'll be fine. You'll be fine. You'll see." And, anyway, the Chief Statistician, Anil Arora, was the moderator, and all of his questions were really about data and then how that relates to A.I., and I was like, "I got this." So, not that I would say I've mastered A.I. or am an expert in it; what I would say is that I was fortunate to be in a position where my expertise in data allowed me to play an influential role in this work, because they're common, similar problems. And that's how I've continued to adopt that.

Paul Bellows
So, the AIA… I’m not aware of parallels in other jurisdictions, other people doing something quite like this. Is Canada the first out of the gate to build this type of a tool?

Ashley Casovan
Yeah. At a federal government level, yes; the City of New York had done a similar version. It's been something that academia has been talking about as a need-to-have. I think that's also a best practice that we're seeing with privacy protection; with the EU's GDPR, there are assessments around that. So it's not a new concept, but taking that concept and putting it into implementation and use in a government? Canada's the first to have done that.

Paul Bellows
So there is so much interesting work on technology and digital transformation happening in Canada. Why Canada? Why now? What's happening here that is making us step out in front of so many things right now?

Ashley Casovan
Yeah, that's a really interesting question. I think there's a combination of things. One is that we do have a lot of subject matter experts at universities. We have the institutions in Edmonton, Montreal, and Toronto that are really amplifying… they've been doing AI work for a long time, but I think they're amplifying this conversation right now, as many internationally are looking to Canada for that expertise. That always has an influence on the ecosystem that exists, which naturally has prompted the Government of Canada to recognize that AI is coming in terms of how we're providing services to Canadians. And so what we wanted to do, when we identified that there was a policy gap, was make sure we were able to address it.

The other factor, I would say, is that a lot of people raise flags about gaps that exist, and there's not always a prioritization around that. We've just been really fortunate to have the right leadership in place, with our CIO Alex Benay and others; actually, the President of the Treasury Board was really supportive of all of this work as well. So we've been really fortunate to have the right stack of leadership at the right time, recognizing that AI in Canada, but also just internationally, is really important. It gave us the leeway to fill that policy gap that we saw. So, I guess, just right place, right time.

Paul Bellows
Yeah.

Ashley Casovan
Yeah.

Paul Bellows
So all that it takes is decades of investment in AI and visionary leadership.

Ashley Casovan
Yeah, just that's it, not a big deal at all.

Paul Bellows
“Just that's it.” Excellent.

Ashley Casovan
And the people willing to do it.

Paul Bellows
And you. So yeah, those two things and you.

Ashley Casovan
Yeah, well, not just me. I would say that I was really fortunate to just join the team. The team, when I took over, was just Michael Karlin, who was the one that really kick-started identifying the issues and drafted a "Responsible AI in Government" white paper. That's where we determined that automated decision-making systems were how we're thinking about and using AI at this point in time. And he came really from a policy and ethics background, having worked at the Public Health Agency previously and at other government departments.

And then Noël Corriveau comes from a legal background (he's a lawyer), and he thinks about how law needs to change in association with this. And then, with my data background, it was just kind of right place, right time, with the right people to be able to think through this. So I think there's a lot of luck in that as well.




OUTCOMES

Paul Bellows
So, just to close things off: people are maybe not so afraid of predicting the weather, but there is that sense of, you know, artificial intelligence — if we truly have something that's intelligent, it's a little bit scary. But let's put the scary side aside for a little while, because now we have a really good impact assessment that will help us guard against that. What are the things you're excited about, the things that could be possible if we're able to unpack some of the benefits of automated decision making? What are some of the things that might be possible tomorrow that aren't possible today because of this technology?

Ashley Casovan
As a good public servant, the reason why it's compelling to be in public service, is because you're allowing people to get access to the services and the tools that they need in order to live happier and healthier lives.

And so, if we can do that in a more efficient fashion… I think both of us, having worked in or with government, know that that's not always the case. And having been in government for… well, that's the only thing I've ever done. There's a willingness and an interest of people to provide good services. I see it every single day.

We're not always equipped with the tools that we need to do it, and there's also way more demand than supply, so you're always balancing these things. And on the whole scary side of stuff we're talking about, there are lots of conversations going on about people losing jobs as a result. What I really see is an opportunity to transition how we're doing our work, and to retrain a lot of people in different types of areas, in order to just do government better.

And that's what I'm really, really excited about. And that's why we want to make sure that, as we're still ramping up to using these tools, we're starting to learn how we could possibly do this in a better way when we're dealing with those scarier issues.

Paul Bellows
In a nutshell, you know, there is always that worry of, "Does automation mean lost jobs?" But the point is, there's more work for government to do than government can afford to do, all the time. So where are the candidates for jobs that can be delegated [or] automated?

Ashley Casovan
Completely, yeah. When I'm still seeing people dying of opioids, or still seeing people on the streets, or still seeing people who are not getting access to healthcare, we haven't finished the job. So if we can use these tools to enhance that, and we're doing it in a way that's trying to be transparent and open and most beneficial… and I don't mean to make this seem easier than it is. We're going to have to come down and make these hard decisions on what we mean by public good, and how we really balance the needs and rights of the individual against the needs of the community. We make these decisions as humans all the time, but when we actually have to write down the criteria for how a machine's going to make those decisions, that is going to be so, so difficult. So I don't want to understate that, but I just think that the potential outweighs any of the harm. But while we're getting there, let's learn from how these systems are working and make sure we keep building better guardrails.


ROBOT BUTLER RETURNS

Paul Bellows
That's very cool. So robot butlers… 18 months?

Ashley Casovan
Yeah, totally working on it right now.

Paul Bellows
Okay. Fantastic. I can't wait. Ashley, thanks so much for your time.

Ashley Casovan
Thanks so much.