Center Stage: The Voice of The Project Economy

Ethics, Technology, and Innovation

November 24, 2020 · PMI · Season 2, Episode 2

Artificial intelligence is revolutionizing the human-technology interface, the value proposition of products and services, and the future of work. Like other technologies, it can advance positive or negative social outcomes, yet the associated ethical considerations are often addressed last, if at all. This episode of Center Stage explores a range of ethical implications as the use of AI explodes in business applications from credit scoring to autonomous vehicles. The podcast also proposes some practical steps organizational leaders can take toward developing a framework for AI ethics. Incorporating findings from the most recent State of AI Ethics report, Abhishek Gupta, founder of the Montreal AI Ethics Institute, and Joe Cahill walk us through examples of design and use cases that reflect the dichotomy of positive and negative social outcomes.

JOE CAHILL: Hi everyone, I’m Joe Cahill, the Chief Operating Officer of the Project Management Institute. Welcome to our next episode of the Center Stage Podcast. Today’s episode will again be on the topic of artificial intelligence, commonly referred to as AI. In order to do this with the utmost style and credibility, we’ll be talking to an AI expert, Abhishek Gupta.

Abhishek is the Founder and Principal Researcher at the Montreal AI Ethics Institute and a Machine Learning Engineer at Microsoft Corporation. He is representing Canada in the International Visitor Leadership Program administered by the U.S. State Department as an expert on the future of work. He has built the largest community-driven public consultation group on AI ethics in the world. The institute recently released the State of AI Ethics report, and today we will explore some of the themes, recommendations and questions raised in that report.

So Abhishek, welcome to Center Stage.

ABHISHEK GUPTA: Hey, thank you for having me, Joe. It’s a real pleasure to be on here and thank you for the very kind introduction. I’m really looking forward to our discussion today. I think with everything that’s going on in the world it couldn’t be more timely to talk about everything that artificial intelligence has been used for, but more importantly the impacts that it’s going to have on society.

CAHILL: You’re going to help us explore AI, looking specifically at individual and societal considerations. So let’s jump into it. One of the key themes of the State of AI Ethics report, which just came out in June 2020, is that technology can be designed and used to advance positive or negative social outcomes.

Given the use of AI in business applications from credit scoring to autonomous vehicles, walk us through the examples of design and use cases that reflect this dichotomy between positive and negative.

GUPTA: If we were to, let’s say, pick one example where there are both positive and negative outcomes: scholars and keen observers of this space will note that AI is a general-purpose technology, so it can always be used for both good and harm. A simple case that we can think about is autonomous driving, right? Imagine a fully autonomous vehicle-driven grid where we don’t actually have any human actors, where the entire grid is one with only self-driving vehicles. The possibility of accidents taking place goes down tremendously, and that has the potential to save the millions of lives that are lost every year to accidents.

On the other side, there are many different negative outcomes that can happen, starting with the transition as we go from the human-centered driving experience and grid we have at the moment, to something that has a mixture of both automated and human actors, and then to one that has only automated actors.

That transition period is where we are going to face a bit of a hill we need to climb. Depending on how the transition takes place, the potential for accidents might actually go up, just because of differences in expectations about how those vehicles are supposed to behave and how humans are supposed to interact with them, and because we are still figuring out some of the robustness challenges with all the sensors and AI systems onboard a self-driving vehicle.

On the other hand, you can think about some of the labor impacts this is going to have. Once autonomous vehicles are used very widely, the people who drive for a living are going to be out of a job, though there are places where you might have some of these vehicles being driven remotely. And it’s not a far-fetched idea. It’s already happening in some places, where freight trucks are being driven remotely by former truckers who used to drive these trucks in person.

And that is again interesting in the sense that the labor impacts will be varied: some people will be able to make the transition into perhaps piloting some of these vehicles remotely while that is still needed, but others will simply be out of a job. And that’s where the work that we, and a lot of other organizations around the world, are doing in terms of thinking more carefully about some of these impacts is very important.

Because we are in a place now where we have the potential to shape how this technology is going to impact all of us. And rather than relegating control and being in a position where we succumb to technological determinism we have the ability here to be more proactive, to shape how this technology is going to impact us and sort of get ahead of the problems and try to solve them and be deliberate about it, rather than hoping that the market is going to create positive outcomes on its own.

CAHILL: That’s really smart, to anticipate the possibilities in the future instead of being governed by them or surprised by them, right? That’s the whole point of what you and your institute strive to do. I can tell you that when I was looking at the report, there was a particularly striking and somewhat chilling statement about information and knowledge.

I’m going to read it to you and let you interpret it for our audience, and it goes like this. “We are paradoxically disempowered by more information and paralyzed into inaction and confusion without a reputable source to curate and guide us. More so with highly specialized knowledge becoming the norm, we don’t have the right tools and skills to even be able to analyze the evidence and come to meaningful conclusions.” 

Wow, that’s quite a statement. So, help us understand what this means for our stakeholders who develop these technologies or who use these technologies to get work done in their project and agile teams. 

GUPTA: Yeah, and a part of that statement was about the fragmented nature of our ecosystem. And I should probably situate that in the larger context of how knowledge itself has evolved over the past 100, 150, 200 years, where we are specializing very, very deeply along different axes in all of these areas of knowledge.

I mean, take for example the field of AI, right? There is a lot of work happening in the domain of computer vision, in information retrieval and natural language processing; you can think about robotics; and then if we think about AI ethics, we’re talking about the fields of disinformation, privacy, machine learning security, interpretability. There are so many specialized subdomains within each of these areas that it’s almost impossible to imagine the sorts of people we used to have in a bygone era, who had deep expertise across a broad set of areas, because the threshold for what constitutes deep expertise today is much higher, just because of the vastness and depth of the knowledge in each of those subfields.

And so part of that statement that you read out was that, because of this degree of specialization and the amount of new knowledge that keeps coming out in these spaces as a consequence, two things happen. One, we tend to spend a larger chunk of our time trying to keep up with the developments in our own space, which necessarily means we don’t have enough time to explore some of the developments happening in other spaces. But what this also means is that oftentimes you have parallel efforts trying to solve similar challenges while being unaware of what’s happening in the other space, and hence thinking that some of these challenges are really hard - and some of them are really hard to solve.

But as a consequence, we are failing to learn the lessons that some of the other fields have already worked through. We enter a redundancy phase where we’re trying to reinvent the wheel when somebody else has probably already figured it out. And that’s the sense in which we are disempowered and sort of paralyzed: there is so much information that it’s hard to navigate all of it effectively, in a manner that lets you assimilate it and utilize it for your work.

CAHILL: So because of the speed of change and the enormity of information on this topic, you can’t see the forest for the trees; that’s one of the biggest challenges, right? Just because it’s running past you so fast. So a key discipline here is to keep your head up, looking at the horizon and across these different fields and disciplines. That has always been important, but I think it’s even more of a challenge here, is what you’re telling us.

GUPTA: The cycle for translating research into practical outcomes where they become deployed in products and services has also shortened quite a bit. So, you know, before, it would take perhaps five, ten, fifteen years before you were able to take some technological development that was happening in an academic lab and actually have that be deployed in practice, in the products and services that consumers at a mass level are using. That cycle has shortened to perhaps something as little as a year.

So you can think about two trends happening here. One, we’ve come up with models where research labs are deeply integrated with corporations, which are able to quickly take some of these ideas and put them into practice. And on the other hand, with the emergence of preprint servers, work doesn’t necessarily always wait to go through an entire publication cycle or review process. Some of these developments get short-circuited, in the sense of not having to go through that entire academic process; people evaluate the results using the code, the data and the information provided in an open-access manner, see if it’s something that is useful to them, and simply put it into practice without having to wait. That exacerbates the problem, but it is also helping us bring some of these developments to the world faster.

CAHILL: I’m glad you brought up the topic of product and the changes, the speed of product design and the integration with the research and really compressing that cycle time. The report points out that product design and development seem to be moving to capture human wellbeing as a metric to optimize for. And it shares examples from healthcare and the gig economy. So what is the risk you see in AI optimizing for human wellbeing?

GUPTA: Trying to quantify that means, in a sense, that we are sacrificing a degree of fidelity. The example that comes to mind immediately for me is the recent wildfires in California, where the sky had turned orange and people of course rushed to capture that on their phones, and realized that some of the pictures they were taking weren’t actually reflective of the true colors they were able to perceive. And that’s sort of what’s happening here, in the sense that... of course it’s a very loose analogy…

But when we try to distill the richness of the world down into metrics, what constitutes happiness? And the first question should be, should we measure that by the degree of engagement someone has on a platform like Facebook, let’s say? Their satisfaction or their happiness in terms of their interaction with friends, can we quantify that by the number of messages they send, the frequency of the messages, the length of the messages, the tone of the messages, the number of likes they get? Or is that something we just can’t capture at all, and that we shouldn’t even be making attempts at?
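
To make the point concrete, here is a minimal sketch in Python of the kind of metric being described. The signal names and weights are hypothetical illustrations, not any platform’s real formula; the takeaway is how much a single number leaves out.

```python
from dataclasses import dataclass

@dataclass
class EngagementStats:
    """Hypothetical per-user platform signals; no real API is implied."""
    messages_sent: int
    avg_message_length: float  # characters
    likes_received: int
    minutes_on_platform: float

def wellbeing_score(s: EngagementStats) -> float:
    """A naive 'wellbeing' proxy: a weighted sum of engagement signals.

    The weights are arbitrary. Note everything the number cannot see:
    tone, context, whether the time spent was joyful or compulsive.
    That lost fidelity is exactly the concern raised above.
    """
    return (0.3 * s.messages_sent
            + 0.1 * s.avg_message_length
            + 0.4 * s.likes_received
            + 0.2 * s.minutes_on_platform)

print(wellbeing_score(EngagementStats(42, 80.0, 17, 95.0)))  # one number stands in for a life
```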

And this is not just a theoretical exercise, because when we are talking about optimizing for wellbeing, there are many other facets to it. If we’re thinking about the field of healthcare as an example, there was recent work where doctors were being nudged to have end-of-life conversations with patients who showed early signs of perhaps heading down a terminal path. There is a degree of uncertainty to it, but nudging the doctor to have that conversation already means that you are altering the state of wellbeing of that patient. No one likes to be told that they are dying.

Distilling down or quantifying this notion of wellbeing into something we can feed into our machine learning systems necessarily means that we are stripping away a lot of the richness of what makes us human, stripping away a lot of dimensions which numbers just can’t express.

CAHILL: And does it not in some ways drive towards some kind of commoditization of human behavior, or could it?

GUPTA: It already does. You’re exactly right. It already does. And there is a lot of profit to be made from it. If we can keep you stuck on watching one YouTube video after another and feed you ads while you’re at it, of course there is money to be made. And if I can use some “dark design patterns” to keep you hooked and engaged in the service of profit, then that is an exemplar of the commoditization of our behavior.
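
A toy sketch of the optimization loop being described, with invented names and numbers; a real recommender is far more elaborate, but the objective function is the telling part.

```python
# Invented predictions; in practice these would come from a trained model.
predicted_watch_minutes = {"clip_a": 3.2, "clip_b": 7.9, "clip_c": 5.1}

def next_video(candidates: list) -> str:
    """Greedily serve whatever is predicted to hold attention longest.

    Nothing in this objective asks whether more watch time is good for
    the viewer; watch time itself is the commodity being optimized.
    """
    return max(candidates, key=lambda v: predicted_watch_minutes[v])

print(next_video(["clip_a", "clip_b", "clip_c"]))  # clip_b
```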

CAHILL: It’s interesting because I’ve always been a proponent of the phrase what gets measured gets done. So if that’s what you seek to do, you’re going to get a lot of it. So that’s a very interesting topic. 

Let’s shift a little bit to solutions. It’s safe to say our PMI community by definition is a solutions-oriented audience. So when you start to look at applying solutions, many of the core ethical considerations and issues are beyond any one individual’s effort to resolve, right? It’s very complicated. So what are you hearing and learning from executives in business and government about how they are leading or perhaps not leading the exploration and resolution of these issues?

GUPTA: There are so many efforts around talking about AI ethics that one would imagine there would be more deliberate efforts to put it into practice. But it seems the larger conversation is still stuck on the theorizing aspects of these problems, on posturing, and on figuring out the taxonomy or the semantics of all of these ideas, which is why the emphasis on being solution-oriented is very important.

Because at the end of the day this is not an academic exercise and it shouldn’t be, either. It ultimately does have impact on people in a very visceral and real manner. And so without having solutions that are practical, it doesn’t really do anybody any good. 

And so on that note, I think the challenge that people have been surfacing again and again is how to integrate some of these considerations from the rank and file all the way to the executives within different organizations. The concern at the rank-and-file level is how to take some of these guidelines - let’s say we’re talking about privacy - and put them into practice in a way that makes it very... not to use the word easy, but that makes it as frictionless as possible within existing processes and workflows.

When we are talking about middle or senior management, the concerns they express are around measuring the efficacy of the measures being put in place. At the end of the day, the implementation or the deployment of these measures also needs to have some sort of a return. And I am not just talking about return in terms of increasing profits or getting a higher return on investment, but really being able to measure the impact against the stated goals of putting these solutions into practice in the first place.

CAHILL: So the report does address some AI headline level issues, like job losses and displacement, as we discussed earlier. It also talks about helping students to focus on skills that will take longer to automate or that cannot be effectively automated. What are some of those skills that you can share with the audience? 

GUPTA: What is interesting is that some of the fields we will be operating in going into the future might either change dramatically or cease to exist every decade or so, which means we need to become comfortable with the idea of lifelong learning. And not lifelong learning as the cliché that people like to throw out - oh, I enjoy learning and I do that all the time - but being very deliberate and serious about it, akin to embarking on a university degree. There is of course a lot to be said about whether the traditional university degree even makes sense anymore, but having that continual learning mindset is important.

And you know, if we’re talking about other skills, I think some of the things your audience will really appreciate are the ability to really interact with people - some of those soft skills - and being able to effectively coordinate and manage a large group of people, align their efforts, and guide them in being productive while working with machines in the workplace. That could take the form of workers on the factory floor working side by side with industrial robots. Or journalists, let’s say, using tools to craft the articles they are working on while outsourcing some of the more fact-based work to an AI system that can crawl and source material for them, so they can focus more on the critical and analytical pieces.

So it’s that piece around being able to manage people and to think more critically. Those are the skills that are going to be timeless and eternal.

CAHILL: That’s interesting, because at the Project Management Institute we certainly talk quite a bit not only about the technical skills but about the power skills, the soft skills that are necessary. And in the broader context of lifelong learning, you can’t stop, right? There’s no end point to learning and improving yourself as a project manager or an agilist; it’s a journey. So we definitely echo that in what we do.

So I think I heard you touching on the topic of augmentation, how humans and machines work as one. The examples of this are growing exponentially. I look at my wrist right now and I have an Apple Watch on it, and I think it’s a pretty good example: it tells me how far I ran or walked, it monitors my health, and who knows what it’s going to do in next year’s version.

From your research and community engagement, what notable industries or examples best illustrate this human-machine augmentation and how is it affecting how work actually gets done?

GUPTA: Being able to transcribe doctors’ notes - that is something a lot of doctors previously had to type in manually. It’s no surprise that doctors don’t like doing data-entry work, because that’s not why they got into the field. The same goes for nursing staff and the other folks in the healthcare and medical ecosystem, where a lot of the burnout that happens to physicians and nursing staff comes from these technical measures we have injected into the healthcare system, which require more and more effort from the actors within that ecosystem to put in data and collate records.

When we think about some of the capabilities in machine learning today, it just makes more sense to strip out some of those data-entry pieces of work, because we are getting to a place where, for example, we are able to read handwriting from pieces of paper and, for the most part, effectively capture and transcribe voice input. Taking all of this and surfacing things in a contextual manner actually boosts the ability of a doctor to provide healthcare services to their patients, supercharging their abilities by stripping away some of these menial tasks and letting them focus more on patients.
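
As a rough sketch of the kind of data-entry relief being described, the open-source Tesseract engine can already pull text from a scanned page. This assumes the pytesseract and Pillow packages plus the Tesseract binary are installed; handwriting accuracy varies widely in practice, and a real clinical system would use purpose-built models with human review.

```python
from PIL import Image   # pip install pillow
import pytesseract      # pip install pytesseract (requires the Tesseract binary)

def transcribe_note(image_path: str) -> str:
    """Extract text from a scanned note so staff don't have to retype it."""
    return pytesseract.image_to_string(Image.open(image_path))

# Hypothetical usage: attach the transcription to a record for human review.
# record["notes"] = transcribe_note("scanned_note.png")
```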

To give you another example of the ability to augment the kind of work people can do, there’s an emerging area of robotics where we are building smart exoskeletons that take on the loads workers on a factory floor often have to bear when they are moving heavy parts or operating heavy machinery, which often leads to workplace injuries.

We now have the ability to reduce those injuries to a certain extent by utilizing technology that boosts the ability of the worker while retaining the dexterity we have as humans, that flexibility and adaptability in an uncertain environment, while also helping prevent some of the more harmful aspects of the work, such as repetitive stress injuries, which can arise from having to lift weights from the floor, put them on a shelf, restock, do packaging work, et cetera.

CAHILL: That just makes me think of back injuries. That may be the number one problem that a lot of these types of industries face from an employee perspective, right?

GUPTA: Yeah.

CAHILL: Caring for their employees, but also from a workers’ comp perspective and the reduction in quality of life. So the positive aspects of this are very encouraging.

The report signals some troubling signs that we may need a counter-movement to drive some balance. For example, we discussed that there’s a hyper-focus in the media on robots as they relate to AI, or on how automation will replace labor. It’s very singular in its approach. What is the institute doing to raise these concerns with business leaders and academics, to introduce a more balanced conversation and more balanced considerations around AI and ethics?

GUPTA: One of the places where we have had the opportunity to make the biggest impact is adding a fair degree of nuance to these conversations. For the most part, that means trying to debunk some of the popular depictions of, let’s say, the labor impacts of automation and AI. Part of it is really helping business leaders gain a more in-depth understanding of the actual value-creating work their employees are doing, and also understanding which pieces of that work are automatable with near-term advances in technology.

And I think that layer of nuance is very important, because yes, we can entertain some of these fantastical claims about what AI systems can do, but those might come 35 or 40 years from now. I’m not one to say we shouldn’t be thinking about long-term concerns, but when we make policies, those tend to be things we want to put into practice today - and hopefully today’s policies will be aligned with the long-term concerns as well - so it’s a mistake to ignore the short-term realities and think about these labor impacts only on a very long-term horizon, where some of these technological improvements may or may not pan out.

So really, the value we are able to add to that conversation is helping business leaders see the nuance in what those impacts are going to look like, and tempering the enthusiasm for AI as a panacea or silver bullet that will address all of the troubles their organization might be facing.

CAHILL: So, Abhishek, PMI just recently announced a new initiative focused on ensuring quality outcomes for citizen developers. These are people who use low-code and no-code technologies to develop apps for their stakeholders. Some of the points raised in the report around privacy, security and transparency in how the code operates seem relevant here. So what thoughts and considerations would you suggest for these citizen developers?

GUPTA: The consideration that a citizen developer should have first and foremost in mind is to look at the existing open-source tools and techniques that help to operationalize things like privacy, as an example, or bias mitigation.

When we’re talking about privacy, a lot of people firmly believe that anonymization is necessary and sufficient for maintaining data privacy and security. We have known for decades that that isn’t the case. There are many examples, for instance the case when Netflix released their movie ratings database, which was anonymized, with all customer information removed. Ingenious researchers were able to correlate some of the ratings in the Netflix data set with the public ratings of movies on IMDB. What they did was pick movies that were highly esoteric and had few ratings, cross-reference them with the ratings in the Netflix data set, and in that way de-anonymize the identities of those people.
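
A toy reconstruction of that linkage attack, with invented names and titles; the mechanics match the account above, in that rare titles act as a quasi-identifier while popular ones match too many people to single anyone out.

```python
# "Anonymized" ratings: user identities replaced with opaque keys.
anonymized = {
    "u1": {"Obscure Film A", "Obscure Film B"},
    "u2": {"Blockbuster X", "Blockbuster Y"},
}
# Public, named ratings from an IMDB-like source. All data invented.
public_profiles = {
    "Joe C.": {"Obscure Film A", "Obscure Film B", "Blockbuster X", "Blockbuster Y"},
    "Jane D.": {"Blockbuster X", "Blockbuster Y"},
}

def deanonymize(anon, public, min_overlap=2):
    """Re-identify any opaque key whose rating overlap with exactly one
    public profile meets the threshold."""
    matches = {}
    for key, titles in anon.items():
        hits = [name for name, seen in public.items()
                if len(titles & seen) >= min_overlap]
        if len(hits) == 1:  # a unique match singles someone out
            matches[key] = hits[0]
    return matches

# u1's esoteric titles match only Joe; u2's blockbusters match everyone,
# so u2 stays ambiguous.
print(deanonymize(anonymized, public_profiles))  # {'u1': 'Joe C.'}
```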

And there are more recent examples, like the fitness data and trends released by Strava, the fitness app, which led to the unintended disclosure of some U.S. military base locations because of how regimented the soldiers’ exercise routines and practices are. So we know from decades of examples that anonymization doesn’t work.

And there are more effective techniques, like differential privacy, that do a better job. To give you an example there, differential privacy ends up being more effective for two reasons.

One, it helps provide a certain degree of guarantee against future disclosures of other related data. Let’s say I have two or three pieces of information about Joe, but tomorrow an unrelated data set gets released by some other provider that Joe was interacting with. I might then be able to put those two data sets together and create a really rich profile that knows a lot more about Joe than either data set could have provided on its own.

Differential privacy helps protect against these sorts of “attacks” on privacy. And two, it gives you a tunable parameter that helps you control the degree of privacy you get. Think about it this way: if you were to anonymize all the data and remove all the interesting attributes, of course you would get a very high degree of privacy protection, but you would also strip away the usefulness of that data for making predictions in the first place. So there needs to be a degree of trade-off between usefulness and privacy protections.
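
A minimal sketch of that tunable trade-off, using the standard Laplace mechanism for a simple counting query. This is an illustration, not a hardened implementation; real deployments should use vetted libraries and careful privacy-budget accounting.

```python
import numpy as np

def dp_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a count with Laplace noise scaled to sensitivity / epsilon.

    Smaller epsilon -> more noise -> stronger privacy, less useful answer.
    Larger epsilon  -> less noise -> weaker privacy, more useful answer.
    """
    return true_count + np.random.laplace(0.0, sensitivity / epsilon)

# The same query at two settings of the privacy "knob":
print(dp_count(1000, epsilon=0.1))  # very noisy, strongly private
print(dp_count(1000, epsilon=5.0))  # close to 1000, weaker guarantee
```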

So again, to answer your question then in terms of what citizen developers should be aware of, it’s really spending some time and critically engaging with these notions of ethical considerations beyond what might seem to be the most popular approach and really thinking about it more critically in terms of which of these actually helped them achieve the goal that they are striving to achieve.

CAHILL: In closing, what other AI related considerations should PMI be addressing on behalf of our stakeholders?

GUPTA: For the folks associated with PMI, I think one of the biggest things they can do to integrate ethical considerations around AI within their organizations is to operationalize them in small, digestible chunks, finding places of intervention within the product and service development lifecycle in a manner that doesn’t disrupt workflows and that also empowers the various stakeholders, be they rank-and-file employees, middle or senior management, or executives.

Because one of the other things we have found to be particularly problematic is that when organizations adopt some of these ethical considerations, they often treat them as absolute solutions and end up disempowering the people who have the necessary context and nuance to reason about them. At the end of the day, the employees are the ones closest to the work, and they have the highest degree of context about what is effective and what isn’t.

And if you put in place measures that strip them of the ability to control the deployment of those measures, that disempowers them or removes them from the loop, and that, I think, is problematic. So business leaders and people in management need to be acutely aware of that: they should maintain, and indeed strive to improve, the degree of empowerment their employees have when incorporating ethical considerations around AI.

CAHILL: Thank you. Fascinating topics here today. And I would say again what I said at the top: this changes day to day. AI is a fast-evolving, hot topic, and it’s not going away. So I would encourage our listeners who are highly interested and want to double-click some more to go and read the State of AI Ethics report that was published in June by the Montreal AI Ethics Institute.

So I want to thank you, Abhishek Gupta, thank you so much for your time here today. We greatly appreciate you spending time with our audience and relaying to them the evolution of this really super-hot topic that affects all of us. Personally, I’m looking forward to meeting you face to face in the coming months and years as we continue on our PMI Knowledge Initiative journey. So, thank you again.