IdeaScale Nation

Zoe Szajnfarber: How to Make a Challenge Prizable

November 20, 2019 IdeaScale Season 1 Episode 8

GW Professor Zoe Szajnfarber's research group seeks to understand the fundamental dynamics of innovation in the government space and defense activities, as a basis for decision making. Current projects include mapping the innovation ecosystem at NASA, ESA and the DoD, modeling the interactions between organizational and technical systems architecture over time, and valuing alternative technology investment strategies and their impact on individual preference structures. In this interview she talks about how crowdsourcing challenges can bring value and how to identify that value based on the type of problem you want to solve.


Speaker 1:

So then we're talking not just about experts in a particular field, but experts who are adjacent to a field sometimes.

Speaker 2:

What you're looking for is someone who comes at it from a different perspective. And that's really, to me, the biggest power of what this new model of open innovation is. The distributional thing is valuable too. But changing how we think about who has the kind of knowledge that you need to break these hard problems. Knowledge comes in a lot of different forms, and people's experiences inform how they look at the world. And we need to be a little bit more open about who has the great solution that's gonna break the problem barrier.

Speaker 3:

[inaudible]

Speaker 1:

Hello everyone. Welcome to our IdeaScale Nation podcast, where we're talking to change makers, innovation leaders, futurists, entrepreneurs. And this month we're talking to an academic researcher named Zoe Szajnfarber. Dr. Szajnfarber's research is really interesting to us and our customers and our network because it focuses on some of the fundamental dynamics around innovation, specifically in the government space, around the systems engineering aspects of problem definition and decision making, which we know is something that a lot of innovation leaders are concerned with. She's had a great opportunity: she's worked with organizations like NASA, the U.S. Air Force, the National Science Foundation and others to help them dive deep into how they identify and position some of their biggest problems. And what's really interesting is she's learned a lot of the details about what makes innovation systems work. She studied at MIT, and she's an associate professor at George Washington University. And what's so exciting to me is that she's found a way to organize and label how you approach problem solving when you're reaching out to the crowd, whether they're a crowd of experts or a crowd of laymen. So without further ado, I guess we'll get started. Zoe, you've had a very interesting academic career, first at MIT and then working with really interesting government research subjects like NASA and the Air Force. How did you start out studying this field? Is this where you intended to end up?

Speaker 2:

I definitely haven't had a linear path to an area that I find fascinating but would never have known was a job you could have back when you're supposed to start planning your career. I've just been really lucky to find a lot of interesting opportunities and follow them, and not to worry too much about whether things were gonna work out. When I was in high school I was actually in a fine arts program, and I was spending most of my time painting and playing hockey and rugby. But then, when it came time to think about where I wanted to go to college, I realized that I wanted to have a job when I was done, and at the time what people did who were generally academically inclined was go into engineering. And I did, and the major that I picked was aerospace, mostly because making robots that go to space sounded really cool and sounded like a good application for my mix of arts and just curiosity about how the world worked. And it oriented me really well. I got an internship. It was normal at the University of Toronto, where I was studying, to take off between your third and fourth year to spend a year in industry to set up the job that you would do after you graduated. And I got my then dream job working at MD Robotics, which is the company that made the Canadarm, the robot that's on the space shuttle. That was what I was going to do with my life. But between the time that I accepted the job and when I was actually gonna start it, the Columbia accident happened; the space shuttle broke apart on re-entry. And so the job that I had signed on for was totally different than the job that I ended up doing. NASA, before they were allowing any of the shuttles to return to flight, asked all of its contractors to go and assess whether the system was doing what it was supposed to be doing.
And I got involved in doing what was called the recertification verification, which was probably the most powerful academic experience that I had, even though it was in industry, because it really changed what I thought being an engineer was. When you're in school, you're solving problems; there's a right answer. You're trying to do the best analysis that you can to come up with the single best solution. But when we were trying to figure out whether the system was operating the way that it was supposed to, and making the story for why we needed to change things or not, there was this really compelling notion that people's lives are at risk. We had to think a lot harder about why we were building the system, what it was going to do, and all of the organizational and political contexts that affected which technical solution gets chosen and what's the right answer. So instead of going back, finishing my fourth year, and then taking a job at this organization that I really wanted to work for, I decided I needed more school. And I learned that MIT let you do a technology and policy master's at the same time as an aerospace master's, so I wouldn't have to give up maybe still being an aerospace engineer doing the space robotics thing that I thought I wanted to do. But at the same time I could learn about all this other really important stuff: policy, organizations, how all of them interacted to pick what actually really happened. And the research assistantship I ended up working on was about innovation in space. And so I started looking at how you could even think about measuring and quantifying innovation in space, which was a good intellectual problem for my master's but wasn't really where my passion was.
But what I realized when I was doing that is that people didn't really know how to think about these large, multi-decade space programs, where you make decisions well in advance of when you're going to know what's going to be used, and there isn't the normal market that we trust to make good decisions about what the right technology is. So even more than in the really competitive areas where you think about innovation happening, governments need to worry about all the choices, how they interact with what work gets done, making sure that incentives align. And that's where I kind of dug in with my PhD. And since then I've really just followed wherever the most interesting research question is, and spent a lot of effort trying to understand the context and making sure that the questions that I'm asking are connected to the problems that people are really facing in the industries that care about them.

Speaker 1:

It's really interesting, because NASA government projects, these long-term projects, certainly are microcosms for "how can we anticipate the decision that we're going to make nine moves from now?" But the enterprise does need that too, so your research is definitely going to be applicable beyond the government space. And your path is so interesting. One of our other podcast guests once said, you know, nobody spends their time as a kid being like, "I'm going to work in innovation." So it's interesting how robotics and the space program and your natural curiosity led you there. But we ask everybody who comes on our podcast what the word innovation means to them. Is it meaningful to you, especially now after researching it for such a long time? How do you feel about it?

Speaker 2:

I have a pretty mixed relationship with the word innovation. When people need me to summarize what I do in two or three words, I say I study innovation. But I don't actually use that in any of the, I would say, quote unquote real work that I do. I find that the problem with innovation as a word is that so many people use it for so many things. And it's hard to imagine any self-respecting organization not saying that they're innovative, which in the end means that the word has almost lost all of its meaning. When I think about how to study innovation, and when I think about what it means for me, I try really hard to separate all the different ways that you can achieve progress. If you focus too much on novelty and new technology, then it's the new part that you're focusing on, and you're always trying to replace things with something different. But different isn't always better. You can make as much progress by finding a more efficient way to do something, finding a faster, cheaper way to do something. And so in my own work I try to use a more specific term when I'm actually trying to study something. But still, as a guiding star of what we're going for, I think it's great to motivate people around.

Speaker 1:

Right. Well, so our listeners are primarily innovation managers, and they're running programs that try to solve their problems collaboratively. Some of them are looking to target very specific talent; some of them are looking for just a broader set of inputs into their problem solving. And obviously this is exactly your field of study. You've talked about four types of open innovation approaches: there's garden variety crowdsourcing; what you call distant expert sourcing, which I love as a term; expert targeting; and force multiplying. Can you give me some examples of those and what each of them means?

Speaker 2:

Yeah. So I think it's really important, when you're deciding to go outside of your traditional employees inside your organization, to think both about what kind of a problem you're trying to solve and where the expertise exists outside of your organization. Because even though you may use the same mechanism of broadcasting or opening up a prize competition, what you get is totally different depending on the nature of the problem and the distribution of the capabilities. The idea with distant experts is that you're not looking for random draws from outside your organization anymore. It's realizing that most knowledge areas and most problem areas draw on expertise in a lot of different forms. And so maybe your organization is best at doing the one thing in context, but there might be someone else who has relevant expertise that just doesn't usually apply to your domain. The example I like to use here is one of NASA's challenges that they ran on coming up with a better astronaut glove, trying to make the astronaut's hand get less tired when they're doing lots of work on their spacewalks. One of the best solutions came from a guy whose background was doing set design. He actually designed the mechanisms for the Victoria's Secret runway model wings, which sounds like a totally different kind of area to be coming from, but it turns out that the underlying mechanisms are really quite similar, and he had an insight because he saw the similarity of those. Now, this isn't someone just getting lucky and sinking a hole in one, where they got lucky and you got to pick them. This is someone who has real, deep expertise, just not in the area that you think about. And so when we think about posing problems, if the problem shares knowledge areas and we can transform it into something that they can access, then you can take advantage of all those different perspectives, which can actually be better than your own in your context.

Speaker 1:

So then we're talking not just about experts in a particular field, but experts who are adjacent to a field sometimes.

Speaker 2:

Yeah, that's right. I have completely changed how I feel about what expertise is through this research, in my own life as well, but also in terms of what I study. What you're looking for is someone who comes at it from a different perspective. And that's really, to me, the biggest power of what this new model of open innovation is. The distributional thing is valuable too. But changing how we think about who has the kind of knowledge that you need to break these hard problems. Knowledge comes in a lot of different forms, and people's experiences inform how they look at the world. And we need to be a little bit more open about who has the great solution that's gonna break the problem barrier.

Speaker 1:

That's very interesting. And so obviously you can misunderstand some of these concepts and, you know, maybe not position a problem to the right crowd, or be too limited in who you're reaching out to. Can you tell us a story about someone who failed to position a challenge properly?

Speaker 2:

So it's always hard to talk about failures in this context, because there are at least two totally different kinds of failures, and there's always a way to position an open innovation activity as a success. I think the first kind of failure, which is maybe the one that you think about initially, is when you don't get enough solutions. Maybe you didn't position the problem right; maybe you didn't pose it in a way such that there are actually people who can solve it. There are examples like this. I mean, even the Lunar XPRIZE, which has now kind of wrapped up and which we had some really big press about recently, didn't award a winner. So I guess in some sense it's a failure, but it created a whole new market and motivated a lot of people to do a lot of interesting work. So I don't know that talking about it as a failure is necessarily the right thing, even though it was probably too hard for the context. The other kind of failure is when you get a great solution and it isn't used. And that actually happens maybe a lot more than people think. One of the famous examples, which is also kind of a complicated story, is the Netflix Prize.

Speaker 1:

Right?

Speaker 2:

I think a lot of your listeners will remember that Netflix wanted to improve its recommendation algorithm by 10%, which a lot of people thought was pretty crazy, because they were pretty good already. And they did, and they never used it. And the answer for why they never used it, like I said, is a little bit complicated. Their business model was shifting. They were moving from the DVD model to the streaming model, and that changed what the recommendations should be looking for. But I think an important part of the story is also thinking about what kind of a solution you're going to get depending on the way that you approach the problem, and how much effort is required by the organization to use it. Most of the real problems that we have today don't exist in isolation. In Netflix's context, they have all of the rest of their software, all the rest of their processes, and whatever solution they got would need to be integrated. And that's a really common problem: not fully thinking through what the clean interfaces are, if I think about it from a systems engineering perspective, so that I can plug and play a great solution in. Because a lot of the times your great solutions are going to look very different than the way that you might've thought to solve it. And so you have to really think through that part. So not only was the business model shifting, but they also underestimated how much effort would be required to adapt and incorporate the solution that they got into their existing system.

Speaker 1:

Right. Yeah, I heard that about the Netflix Prize, and, you know, they still felt, as you were saying, that it wasn't a failure, because they still learned things, and they created a lot of excitement in the public about the Netflix Prize. But yet an idea that doesn't get implemented has far less value than one that does. I mean, that's sort of the difference between an idea and an innovation: implementation. It's so interesting, too, that you pointed out that failure is not always the word people want to use, because it usually is an opportunity to learn. A lot of our customers will position it that way: what did we learn from this, rather than, you know, why did this fail?

Speaker 2:

Yeah. And actually, in a lot of the more recent work that I've been doing now, I've been surprised, even as a researcher, to see just how much the learning about the solution space is an output in itself. So a lot of times when we think about open innovation, we think about the winner. It's the event: we have this competition, and then someone wins or they don't, and we see how good the performance is. But if you set it up to see all the different perspectives that I've been talking about, what you can get is some of the best exploration of what the options are that you can get with any tools that we use. In my core field of systems engineering, we spend a lot of effort talking about how we come up with alternatives to analyze so that we can pick the best one. But practically speaking, it's usually one team coming up with the alternatives they can think of. Here, when you pose the problem to whoever wants to come, you're going to get all these different, truly independent perspectives. And that information has maybe even more value than whatever one solution you get.

Speaker 1:

Right? So even if you don't get the painting, you get the colors in the palette.

Speaker 2:

And learning that maybe it shouldn't be a painting in the first place is also valuable.

Speaker 1:

So let's talk some more about your research. How has it changed over the years? What were you learning about, and what new questions are you asking today?

Speaker 2:

I almost don't even know how to answer that question, because my research has changed so much. I wrote my dissertation on the path taken by a selection of astrophysics technologies that ended up flying on some billion-dollar missions, and I was studying the relationship of how the technology evolved and the organization that it lived in. I still study scientists; I still care a lot about space. But almost all of my work has moved away from any one particular context, and it's just been a bit of a path of every project teaching me about the next interesting question. And that's one of the most amazing things about being an academic, because I get to just pick problems and continuously redefine myself around the interesting questions. So the one thread that has been common over all of my research is that I'm very interested in how the way you break up a problem affects who is interested in and able to solve it. But what that has meant has evolved drastically. I started becoming, as I've said, very interested in different kinds of expertise, what engineering expertise actually means, and how we leverage it; this notion of, you used the word, adjacency, how different people at different distances bring new ideas to the problem; but also just fundamentally how we break up problems. There are a lot of things that we take for granted, like that you can just break up problems, because we do that in a lot of fields. But it turns out, when you actually go to try to do that systematically, there's so much to be learned and so much guidance to be gained about the right ways to break up problems, particularly when you're sending them off to new kinds of solvers. If you ask pretty much any aerospace engineer how to break up a satellite, they would give you almost exactly the same set of subsystems. It's just the way that we do it. But it turns out that that's not necessarily how knowledge is distributed in the world.
So if you want to move from a traditional model with a few normal contractors to getting input from people in other countries or people from other disciplines, then you have to really rethink what the nature of a problem decomposition is. And that's not something I ever thought I would be asking questions about when I started my career.

Speaker 1:

Can you give an example of how you've taken a problem and then defined the principal parts of it?

Speaker 2:

Sure. So we've actually just wrapped up one of the largest field experiments of its kind, which we ran through NASA, and which was designed to explicitly answer that question. We took a moderately complex problem: a robotic arm that's going to be used on the International Space Station. So, a reasonably simple problem. The robot flies around the space station, grabs onto a handrail, and then positions itself to help the astronaut look with a camera, or something like that. So it's not a terribly hard robotics problem, but it's harder than you'd expect some random person off the street to be able to solve. And we ran the competition as a sequence of 17 challenges, where we broke up the problem in a bunch of different ways so we could really explore how, if we made it easier or harder, or more one discipline or another, we might get different kinds of solutions from the crowd. And we were able to instrument the solving process so that we got information about the background of the people who showed interest, the people who actually decided to solve, and how good all their solutions were. So we can finally ask these questions about, you know, how the way we break it up matters. And that has given me insight from the data that we collected, but also so much insight about what the process of actually decomposing these problems is. Silly little things: we think of mechanical engineering as a discipline that everyone knows the meaning of, but it turns out that someone who's a mechanical engineer who spent their career in aerospace thinks about the world quite differently than a mechanical engineer who, say, has been working on elevator controls. And those questions are really fundamental to using open innovation as a tool really well, because you need to know how to pose the problem so it sounds like something that each of those two different kinds of people knows how to solve.

Speaker 1:

That's interesting. And thinking about it in that way, are there some thoughts, some common misconceptions about crowdsourcing and, you know, prizable challenges that your research is sort of turning on its head?

Speaker 2:

Yeah. So if I can leave kind of one theme, it's about how important the problem formulation is. I think for a long time the conversation has been about: is this problem prizable or not? Let's look for the problems that are prizable. And I think that's really the wrong way to think about it. Almost every problem is prizable to some extent, and the real key is figuring out what aspect of it is. And so my research has been developing strategies for how to think through breaking up a problem in a way that recognizes the inherent structure of that problem and also the capability of the people who you might reach. Because you don't want to just be broadcasting in general; you want to have some idea of the kinds of solutions you might be able to get. While it's possible to get a game-changing solution from anywhere, practically speaking you want to make sure that there's some amount of efficiency. You do have to be able to review all the solutions, after all.

Speaker 1:

And plan for implementation, as you said. So, having a good set of criteria ready for when those solutions come in, too.

Speaker 2:

Yeah, and for me that's an important part of the problem formulation. You want to really think through what you're going to use, what aspect of it you're going to send out, what the interfaces are, and where you don't want an interface. Because it turns out that if you define that too carefully, you're closing off a lot of the potential solution space, and you're going to get more solutions like you would expect, which to some extent defeats the purpose of going outside in the first place.

Speaker 1:

Tell me, does your research offer any insight, then, into how challenge sponsors should structure or coordinate one of these prizable challenges? Some best practices they can follow?

Speaker 2:

Yeah. So as an academic I do have a tendency to stay a little bit more abstract from the day-to-day implementation, but I think there are a lot of important takeaways that are relevant to managers. So maybe I'll speak a little bit generally and then try to be a little bit more specific. As I said, to me the most important thing is to really understand the nature of the kind of problem that you're looking for. Almost all the time, when people state a problem, they have a solution in mind. And that's really the wrong starting point if you're going to get the most out of open innovation. People have said, you know, ask why, why, why; how, how, how. Just decontextualize as much as possible. Make sure that you're posing the problem that you actually need to solve, and not the solution that you think will solve it. A colleague has an example in a paper: you can pose the problem as slowing the rate of growth of bacteria, or you could pose the problem as not letting any bacteria in in the first place. Either kind of solution would be really useful to the actual objective, but they lead you to totally different kinds of solutions and would lead you to reach out to totally different kinds of people. So really thinking hard about what the inherent problem is, is key. And then the next question is how to balance how much breadth in the problem you need to give to be able to get rich solutions, the variation that you want so you get new perspectives, but also to make sure that you're going to get something that you know how to use. The Netflix example is a good one here: if you don't know what kind of algorithm you're going to get, it may not be convenient for the way that you're planning on implementing it. Or, a little bit more esoteric, but one that I'm more familiar with:
One of the famous NASA challenges was an algorithm to reorient the solar panels to make sure that they got enough power to run the space station, but not to wear out the joints, because it turns out reorienting them all the time is really hard on the joints. And so they wanted to reach out to the crowd to find a way to optimally maneuver them to maximize power and minimize wear. And they got a great solution, but the winning solution could never have worked on the computational resources that they had, because it was kind of a brute-force algorithmic approach, which makes sense when you take away all the context. And so that was great as an award, but it wasn't the solution that would make the most sense for them to implement. So they needed to think really hard: are there constraints of the problem where you really do want to constrain the solutions and make people focus their efforts in that space? Or do you want to relax them and realize that you're just going to have to do some filtering after the fact? Those are the decisions that really need to be made in advance. And then the last thing that comes out, not directly from my research but from just being around these problems a lot, is that I think people often underestimate how much work before and after is needed to make open innovation successful. I think this is known by the people who have experience planning these problems. But if you think about the general public, or organizations that are just getting into this, they often focus on the event. It's like you think of your wedding day, but there's a lot of planning that goes into it to make it actually go off the way that you want it to.

Speaker 4:

Right. Minimally curious afterwards.

Speaker 2:

Yeah, I didn't want to get into that part, but it's exactly right. It often ends up being years of work in advance, or at least many person-hours before and afterwards. And if you don't plan that in, then it's really easy to call it a failure without putting in enough effort to actually take advantage of what came out of it. Really thinking through that in advance, planning for it, and knowing how you're going to talk about the success internal to the organization and outside, even while you're still figuring out how and what to implement, is really critical.

Speaker 1:

Well, it's interesting, because I feel like this is a really good primer for those who want to get started with open innovation. So that's really good advice: you know, first start by understanding the problem that you want to solve, and then think about the work that goes before and after it. What other advice would you give to open innovation leaders who are just getting started? Is that all, or is there any encouragement that you can offer as well?

Speaker 2:

I think it's worth trying. The people that I've seen get started with this, most of them come to it either as a skeptic or an evangelist, and I don't see a lot of people in between, and most of them end up moving a little bit.

Speaker 1:

That's our experience too, I'd say.

Speaker 2:

Yeah, and it's hard to find people who are in the middle, but most people end up moving in a little bit. And I think part of the issue is it depends what you focus on. So I think having realistic expectations, and realizing that this is not a magic bullet, it's one innovation tool in a toolkit, is a really important starting place. And also, although it's been around for a while, the management practices around open innovation are still relatively new. Even though it's been applied to lots of different problems so far, my sense is that people have been going after low-hanging fruit, which is a bad basis for experimentation, and I know that's not their purpose. But to really understand where it works and when it works, I think we're only really scratching the surface, because we've had such a concentration in a few particular areas. And that's what I'm really trying to look for: where those in-betweens are, where the real value might be, and how we actually unlock that.

Speaker 1:

Well, I think that's a good challenge to throw out to those people who are moving into this next stage of innovation. I think you're right: some of the low-hanging fruit is gone, so let's find out what's next, right?

Speaker 2:

[inaudible]

Speaker 1:

All right, well, those are all the questions that I had for you today. Thank you so much for coming onto our podcast and for sharing the findings of your research. I hope it helps to define the next generation of innovation management.

Speaker 2:

It's been fun. Thanks for having me.