Lab to Market Leadership with Chris Reichhelm

From NASA to Startups: How TRLs Became the Universal Language of Deep Tech | John C. Mankins

Deep Tech Leaders Season 2 Episode 1



Technology Readiness Levels (TRLs) are the universal language of Deep Tech – used by NASA, the DoD, VCs, corporations and startups to measure innovation progress. But where did they come from? 

Professor John Mankins co-invented TRLs and wrote the 1995 white paper that gave them to the world. In this Season 2 premiere, he reveals the origin story: from Apollo-era NASA research centres, through Stan Sadin's combination of 'technology readiness' and 'levels' in the mid-1970s, to John adding TRLs 8–9 in the late 1980s, to publishing the framework on the early internet in 1995 ('nobody even knew what the internet was'), to a 1997 briefing at the General Accounting Office (now the Government Accountability Office, GAO) that led the Pentagon to adopt TRLs across the DoD. 

John explains why TRLs are a 'contact sport', not a calculation tool – you need constant negotiation between technology developers and end users to assess readiness meaningfully. The hardest transition isn't the famous TRL 4–6 'valley of death' – it's TRL 1–2, the spark of innovation itself. He introduces two complementary frameworks: R&D³ (degree of difficulty – how hard it is to reach the next level) and Technology Need Value (how strategically important the innovation is). Together with TRL, these formed the foundation for managing NASA's $800M+ exploration portfolio. 

His advice for founders at TRL 5: validate your market, test for scalability, and double-check your foundations before scaling up. Essential listening for anyone navigating lab to market. 
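For readers who want the scale at a glance, here is a minimal sketch of the nine levels as a lookup table. The wording is abridged and paraphrased from Mankins' 1995 white paper; the `describe` helper is purely illustrative, not an official artifact:

```python
# Abridged, paraphrased one-line summaries of the nine TRLs
# (after Mankins, "Technology Readiness Levels", 1995).
TRLS = {
    1: "Basic principles observed and reported",
    2: "Technology concept and/or application formulated",
    3: "Analytical and experimental proof of concept",
    4: "Component and/or breadboard validation in a laboratory environment",
    5: "Component and/or breadboard validation in a relevant environment",
    6: "System/subsystem prototype demonstration in a relevant environment",
    7: "System prototype demonstration in an operational environment",
    8: "Actual system completed and qualified through test and demonstration",
    9: "Actual system proven through successful mission operations",
}

def describe(trl: int) -> str:
    """Return a one-line description for a whole-number TRL (1-9)."""
    if trl not in TRLS:
        raise ValueError("TRL must be an integer from 1 to 9")
    return f"TRL {trl}: {TRLS[trl]}"
```

Note that the scale deliberately stops at whole numbers: as John argues later in the episode, treating a TRL as a continuous quantity to be computed misses the point of the framework.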

Learn more about Deep Tech Leaders at www.deeptechleaders.com


Let us know what you think...

Learn more about Lab to Market Leadership: https://www.deeptechleaders.com

Follow us on LinkedIn: https://www.linkedin.com/company/deeptechleaders

Podcast Production: Beauxhaus


John Mankins:

[00:00:00] The one that is, I think, the most important, which is a little different, it can also be really hard, but not necessarily, is between one and two. So one: I observe that heat flows from A to B, if A is hotter and B is cooler. That's one of the fundamental laws of thermodynamics. Two: I have a bright idea for how to use that physical phenomenon to do something useful.

I've come up with the idea of a heat engine. I can heat steam, and I can use the steam to push something that moves, and then I can use that moving object, perhaps a piston as it turns out, and do work. That moment of creativity [00:01:00] is absolutely critical, and is one of the hardest if you're not steeped as a practitioner in the field, so that you can just draw on your intuition and draw on your knowledge.

I've been looking at these things in the shop for years now. I know what I can do with it. This is, you know, a eureka moment. That one's really critical. Sometimes it's easy. Sometimes it's hard. 

Chris Reichhelm:

Welcome to the Lab to Market Leadership podcast. Too many advanced science and engineering companies fail to deliver their innovations from the lab to the market.

We are on a mission to change that. My name is Chris Reichhelm, and I'm the founder and CEO of Deep Tech Leaders. Each week we speak with some of the world's leading entrepreneurs, investors, corporates, and policymakers about what it takes to succeed on the lab to market journey. Join us. [00:02:00] 

On the last episode of Lab to Market Leadership, I had the pleasure of speaking with Hailey Eustace from the PR and communications agency Commplicated, and we spoke about the criticality of clear communication when you're running a deep tech company: it allows you to set expectations with your various stakeholders, and it allows you to build trust. The framework and language that these different stakeholders use, regardless of who they are, is that of technology readiness levels: TRLs, specifically the TRL framework. The TRL framework has been around for 50 years. It's been published for the last 30 years, and it's used by everyone from NASA, where it was defined, through the US Department of Defense, to the European Space Agency, to ISO standards bodies, to large technology corporates, to VCs, to startups and scale-ups.

Everyone uses TRLs [00:03:00] to some extent, and they use TRLs because TRLs represent an objective way of communicating about and measuring the progress of innovation. Well, today we're gonna do a deep dive into TRLs, and I think we should, because TRLs are simply so foundational to deep tech management that we have to.

Joining me on today's episode, there is no better human being alive for this than the co-inventor of the TRL framework itself and the author of the original TRL paper in 1995, John Mankins. John was a director at NASA; he led various space programs. He also worked at the Jet Propulsion Laboratory in California.

Today we're gonna talk about the origins, the evolution, the limitations, the future of measuring science, technology, and engineering progress. This is gonna be [00:04:00] a wonderful episode if you care about bringing innovation from lab to market, as I know so many of you do. I hope you enjoy. Let's get into it.

Professor John Mankins, thank you so much for joining me today. 

John Mankins: It's a pleasure to be here with you, Chris. I'm looking forward to our conversation. 

Chris: We're gonna talk a lot about the TRL framework, but before we do, I'm curious as to the origins. So to start us off, I wanna go back in time, to the late 1960s and the Apollo program, I guess the height of the Apollo program.

Can you describe for me, based on your understanding, the underlying setting for technology development within NASA at that time, and how [00:05:00] communication about this development actually looked from the inside? 

John: So, most people don't know that at the same time that NASA was implementing the Apollo program, the first Moon landings, the missions with robots, and all those things, there was a very substantial and robust research and technology program at NASA that was focused on: what is NASA, what is the US, gonna do next in space? It was like a billion dollars a year in today's dollars in research and technology development. And that was all carried out predominantly through three research centers that used to be part of the NACA, the National Advisory Committee for Aeronautics, before there [00:06:00] was a NASA: NASA Ames, NASA Lewis, NASA Langley. And they wanted to communicate with the people who were doing the flight projects, and with the leadership of NASA, in language that could be understood.

At that time, everybody was doing launch, launch, launch, and every time there was a launch, there was a flight readiness review. And so they started talking about technology getting ready to be used in future missions in terms of technology readiness reviews. This is in the late sixties. This was the earliest instance of somebody talking about technology readiness in the way that we talk about it with the technology readiness levels. 

Chris: Right. Okay. And who was the instigator of this? [00:07:00] Was this something that evolved naturally, or was there an instigator or a catalyst for the emergence of this common language?

John: The original source was probably in the flight projects community, because they were using flight readiness reviews, and the technology community, these NASA research centers, started using technology readiness reviews in the same way in response. And then it kind of evolved in the 1970s. It took a brief detour and then came back around to readiness. I don't know if you want me to tell you that story at this moment, or you want me to wait. 

Chris: Oh, I think you should. Does this involve Stan Sadin? 

John: It does. So, what happened in the mid seventies? [00:08:00] Of course, the Apollo program had been canceled. NASA's budget was slashed by 80, 90 percent. That was the flight mission side, and inside the space technology community, things were just collapsing. And so the aerospace industry, the US government, and the NASA research centers especially instituted a series of workshops and meetings, trying to figure out how they could get space technology back on the agenda.

They developed an outlook for what space could be between the mid seventies and the end of the century, the year 2000, now a quarter century ago. And they used a language in those documents called the 'state of the art [00:09:00] level'. There were 10 levels, so there was actually this state-of-the-art level, circa 1975 or so.

And one of the people who was involved in that was a young man (I was still in college), Stanley Sadin, Stan Sadin, who was involved from the space technology organization at NASA headquarters, the Office of Aeronautics and Space Technology. Stan led a number of these workshops. He was in the space systems organization inside the space technology office. And he brought in the language that had been used earlier, technology readiness, from the technology readiness review, plus the word 'level' from the state-of-the-art levels. And he started [00:10:00] calling them technology readiness levels.

Ten was too many; it was too confusing, too complicated. So he said six, and then seven. And he used those in a series of documents, the Space Systems Technology Models, massive, massive documents. They tried to lay out, for the space systems community and for decision makers, all of the potential of space technology and all these advances that could be made, if only you give us a little more money.

And so there was always this question: how do we recover from the budget destruction of '72, '73 and try to get back to a robust budget and robust investments in the future of space? By around 1979, 1980, 1981, [00:11:00] developing this language, transporting it from these two different sources and creating this new one, was initially a way of arguing for a greater budget.

Chris: But clearly, and this is just my assumption, based on the nature of this language, it was communicating with trust: we can give you an accurate understanding of where these different programs are. Thereby we're building credibility, and thereby hopefully we'll increase our chances of getting more budget.

John: Yeah, absolutely. And the way it was originally, every technology, under the state-of-the-art levels and the technology readiness reviews, expressed maturity in a different way: we're gonna get to this kind of test, we're gonna get to this level of detail. The great thing that Stan Sadin did was he made it more [00:12:00] general. He expressed it in a really short way, probably a little too short, but in a way which was sufficiently generic that you could try to tell people about rockets and electronics and life support systems, and their maturity and their readiness, you know, 'just give us a little more money', in a language that was common regardless of your technical specialty.

Chris: Okay. Well, we'll talk about its applicability and how far that may or may not extend. But as I understand it, Stan developed, as you've said, six and then seven. You subsequently added two more. What did you feel was missing that wasn't included in the original seven?

John: So, I ended up going to, I was at NASA JPL when I got out of college, for a number of years.

I worked on [00:13:00] all sorts of things, long story. But one of the things that I was involved in was system studies, as well as flight projects, as well as technology development. And I ended up taking an assignment at NASA headquarters in this same space technology office. This is circa '86, '87, so about 10 years later.

And Stan was still there, no longer the young whippersnapper that he had been in the mid seventies, but still there, still pushing on this noodle. And there was this enormous effort called the Space Exploration Initiative, under George H. W. Bush and during the last couple of years of President Reagan's administration, to try to get a new Moon-to-Mars exploration program.

Well, there was a [00:14:00] need to communicate, a need to use the TRLs. I was in charge of the exploration technology investment portfolio, and so I adopted the TRLs to talk about things with the flight project planning people at NASA Marshall in Alabama and NASA Johnson Space Center in Houston.

And they started talking about having their own readiness levels, flight readiness levels, that would sit up on the high end, and we would just be down on the low end. It was basically an issue of us losing control of the narrative. And in order to combat that, my response, which I codified, was: oh, no, no. We are gonna go to nine. You just march your way up from one through the original six [00:15:00] or seven, and then you keep going to eight and nine, in order to get all the way to the operational system. And therefore there's no need for two separate languages in the two different organizations.

There's just one language for everybody. And that was the motivation. I will say it was a defensive measure, but it was one that was appropriate. It really made sense that you needed to know when something had been tested in space, tested in the actual operational environment.

I mean, the flight projects guys were right, but having two scales for two different organizations at NASA? Who knows where it could have gone; every organization could have had its own scale, and that just is a mess. And so I fixed it.

Chris: Was there resistance? [00:16:00] I imagine there was a great deal of resistance.

John: Absolutely. But then it all got settled because of other circumstances. Circa 1989, I got appointed to be the technology lead for the flight projects planning activity. So I was wearing both hats, the space technology hat and the flight projects planning hat.

And so I just said, there's just one. 

Chris: So it was yours. It was yours! Funny how that worked out. And I'm guessing five or six years later, you write what becomes a seminal white paper on this framework, published in 1995. Did you have any sense at the time of how influential this would be, not just within aerospace, [00:17:00] but across any ambitious science, engineering, or technology platform development?

John: Well, I'll jump back for just a second. At the beginning of the nineties, there was another election, and Clinton and Gore were elected; Bush did not win reelection in 1992. And so there was a new plan for space, and there was an action that had been put into motion at the end of the Bush-Quayle term to develop an Integrated Technology Plan for the Civil Space Program.

This was after the Space Exploration Initiative had been largely put on hold, but there was still a lot of interest and there were a lot of activities. Unfortunately, a lot of it got to be not bipartisan but very partisan. But [00:18:00] this Integrated Technology Plan for the Civil Space Program was an action from the White House that came down to NASA's administrator and through the system to me.

And you may have seen, there's a very famous slide, a chart showing technology readiness on a thermometer scale. That diagram, which I made up using an early version of slide software, was done for the Integrated Technology Plan for the Civil Space Program. And in that, I looked not just at exploration, which I'd been working on before, but science, Earth science, launchers, everything.

So that was 1990, '91, '92. I really started working on how I could apply these [00:19:00] technology assessment tools to the full spectrum of NASA's R&D activities. And then, of course, the Integrated Technology Plan was finished. There were two or three reorganizations; the organization that I'd been in was gone. I was moved. Then there was another reorganization, and I was moved again. And the Integrated Technology Plan wasn't focused on technology management; it was focused on mission requirements and technology needs and all those things. It seemed to me that there was a need to codify the technology readiness levels

across all of the civilian space community, but I really wasn't thinking about DoD. And there was a lack of a common language, a common [00:20:00] understanding. Budgets were still suffering very badly. And so in 1995 I wrote the white paper, 'Technology Readiness Levels', and there really was no vehicle for publishing it. But the internet existed.

Netscape was out there. And so I made my own document and I published it myself, while I was at NASA; it was my job, but nobody said, go publish it. I just said, okay, I'm gonna go publish it.

Chris: You just did it?

John: I just did it.

Chris: Was that okay? Was that allowed? Was NASA okay with you just saying, I'm gonna publish that, it's gonna go out into the wider community?

John: Nobody even knew what the internet was. Nobody could tell; they weren't thinking about it at that time. So it just went from my desktop to the internet. 

Chris: Amazing. That's interesting, a great story. 

John: So circa '96, [00:21:00] 30 years ago, there was a big issue with budgets, with budget overruns, and with proceeding with immature technologies in the DoD, and the General Accounting Office (these days it's the Government Accountability Office, GAO) was doing an audit at the direction of the Congress on the DoD. And some auditor at GAO, this is circa '97, heard about the TRLs, and they invited me to come to the GAO offices and give a slide deck, give a talk on TRLs in this process. So I put together some history, the things that we've been talking about right now, how it was used, the methodology things we're about to talk about, and I briefed it. I went over thinking, [00:22:00] you know, maybe there'll be half a dozen people. Who knows? It was 60 or 70 auditors, every single auditor who was working on the DoD, all sitting in the room. And they said, okay, this answers our question.

This solves the problem of how to make sure we're not going forward with technologies that aren't ready, and we're not gonna get budget overruns and schedule delays. They adopted it; they recommended it to the Pentagon. The Pentagon said, okay, this checks the box, this gives us the tool we needed. We don't have to invent something; we can just use this. And it was off to the races. 

Chris: Didn't they invent MRLs? 

John: Oh, after the DoD adopted TRLs, yeah, 30 years ago. There are all sorts of readiness levels now. As a meme, as a concept for how to think about things, technology readiness levels have been [00:23:00] incredibly prolific. There are integration readiness levels, software readiness levels, AI readiness levels, manufacturing readiness levels. Every topic that you can think of has got its own little version of readiness levels out there somewhere. 

Chris: Let's dive into the framework itself now, for those listeners who have maybe heard about TRLs. Obviously we've talked about their use, their utility as a framework and as a language. It's a scale, a framework that starts with TRL 1, which is basic principles observed, and runs all the way to TRL 9, which is a system proven in an operational environment. As I understand it, when I speak to innovators and young companies trying to do big things in various domains, it seems, John, that a lot of the drama [00:24:00] happens between these stages. And so I wonder, from your perspective, if you have a view on which transitions along that journey, along the TRL framework, are the hardest? 

John: The one that is, I think, the most important, which is a little different, it can also be really hard, but not necessarily, is between one and two. So one: I observe that heat flows from A to B, if A is hotter and B is cooler. That's one of the fundamental laws of thermodynamics. Two: I have a bright idea for how to use that physical phenomenon to do something useful. I've come up with the [00:25:00] idea of a heat engine. I can heat steam, and I can use the steam to push something that moves, and then I can use that moving object, perhaps a piston as it turns out, and do work. That moment of creativity is absolutely critical, and is one of the hardest if you're not steeped as a practitioner in the field, so that you can just draw on your intuition and draw on your knowledge: I've been looking at these things in the shop for years now; I know what I can do with it. This is, you know, a eureka moment. That one's really critical. Sometimes it's easy, sometimes it's hard. Then there's doing some analysis, doing a prototype, TRL three, four, testing it in the lab or testing it analytically. [00:26:00] 

Those things are very important, and depending on the actual feasibility of the invention, of the idea, they can either be hard or easy. When you start getting to higher levels of maturity, when you wanna go from some glassware and a couple of tubes in a chemistry lab in a high school to the point of actually trying to translate this idea and this experiment into functional hardware, into operational technology, that is when you start getting into bigger budgets, a harder lift than the earlier stages. And sometimes it gets easier later, and sometimes it gets much harder, depending on the technology.

[00:27:00] Chris: So, from one to two. I did not anticipate that, but I totally get why you're saying that.

And actually, now that you say it, I think we underestimate the difficulty of that fundamental innovation. But that is also why we put so much status on that founding spark from, typically, a scientist or an entrepreneur, or, you know, they're one and the same. I think we underestimate how important that is. So I thought you were gonna say something closer to the transition from TRL four to six, which is one of the valleys of death; there are a couple of valleys of death, and that's one of 'em. That's one we hear about a lot, because we're moving from a little prototype, and now we're gonna be thinking about what a pilot looks like, and we have all of the complications that go along with that. We've gotta start engaging partners, typically commercial, [00:28:00] and we've got capital changes and requirements. But you've also got the brutality, the brutal reality, of taking your working prototype into something that exists, that is more real, and that is more of an operating environment.

And that's where we see so much struggle. 

John: Yeah, absolutely. And another aspect of that whole transition, from like four, which is in the lab, to like six, is not just the environment, which is critically important, but also the fidelity of the design in which the technology is being manifested to the eventual application. If you go from a very generic high school chemistry lab experiment to, now I'm going to try to make nylon, or I'm gonna try to make rubber for [00:29:00] tires, and I'm gonna do it to make a tire, or a ceramic heat shield for a space shuttle, that gets hard, because I've now gotta make it fit into the rest of that design as I go from 4, 5, 6, 7. And so it certainly is the environment, but the work is also focused and constrained by the application into which it is going.

Chris: Yes, yes. As you were speaking, what occurred to me is that the TRL framework distinguishes between, let's say, a technology readiness level and a system readiness.

With TRL nine being a system operating in an operational environment, that's distinct from a technology [00:30:00] which may be a subsystem of that. I think it's an important distinction, one I overlook, and one I think a lot of other entrepreneurs and other participants in this world can forget about. But I think it is an important distinction that a component can be at, let's say, a higher TRL, TRL seven, and a system can only be at a lower TRL, say four.
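That component-versus-system gap can be illustrated with a toy heuristic, my own sketch rather than anything in the TRL framework itself (and John goes on to stress that readiness assessment is a negotiation, not a calculation): an integrated system's TRL is bounded above by its least-mature element. The subsystem names here are hypothetical:

```python
# Toy illustration only: a system is never more "ready" than its
# least-mature element. Real assessment, as John stresses, is a
# negotiation between developers and end users, not a formula.
def system_trl_ceiling(subsystem_trls: dict) -> int:
    """Upper bound on an integrated system's TRL: the minimum of its
    subsystems' TRLs. The integrated system may sit lower still, since
    the integration itself must also be demonstrated."""
    return min(subsystem_trls.values())

# Hypothetical example: one immature part caps the whole satellite.
gps_satellite = {"receiver": 9, "atomic_clock": 7, "new_ion_thruster": 4}
assert system_trl_ceiling(gps_satellite) == 4
```

The min() here is deliberately a ceiling, not an estimate: even with every subsystem flight-proven, the assembled system still has to be demonstrated as a whole.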

Chris: How do you think about the relationship between subsystem and system readiness?

John: So it really is, because this is a language and not hard and fast, it's not all the same all the time, you have to be flexible in how you consider your objective. Is it, for example, a GPS receiver, [00:31:00] or is it maybe an atomic clock that's gonna be in a GPS satellite, or is it the entire GPS satellite including all of its technologies? Or is it the Global Positioning System, the system of systems? And so you march along, and you've got your goals and objectives for your technology R&D effort, but they're always part of something larger. It's maybe turtles all the way down, the Earth on the back of the tortoise, or the elephant, take your pick.

But you've got to recognize that your TRL seven or eight or nine system, as you said, may simply be a part of somebody's larger system; [00:32:00] in fact, it almost inevitably is.

Chris: I think that's something that gets overlooked a lot. And it comes back to this: do you see the TRL framework as a communication tool, as a language? Do you see it as a way of measuring progress? Do you see it as both?

John: Certainly, yes. But the way I often talk about technology readiness assessments, or the other methodologies that I think we're gonna talk about in a few minutes, is that it's not a calculation tool. It's not something where you can make a spreadsheet, plug in some numbers, and have it give you an outcome, and then you base your [00:33:00] decisions on "it's got a TRL of 4.77". I mean, that's just stupid. It's deeply unwise to try to take it to that level.

But technology readiness assessments, using the methodologies, is a contact sport. You have to keep in mind what you're trying to achieve. You may always be using a ball, and you may always be on a field, but there may be different goals at the end of the field. That is: this one, I'm trying to do an atomic clock for a satellite; this one, I'm trying to do an atomic clock for a submarine; and so on. Where I'm going, the application, [00:34:00] guides and constrains what it means to say the technology is ready, that you've reached readiness level six or seven or eight. And so you have to constantly have a negotiation, just like with the ball on the field, where you're passing the ball from one player to the next, i.e., one part of a technology innovation portfolio to the next. The valley of death that you described is: I kicked the ball, and there's nobody there to take it. I'm gonna stay with that metaphor for just another moment or two. You're hoping that the people who are doing the flight systems are not wearing a different jersey, and that as you're getting near to the goal, they don't kick the ball back down the field. So you gotta hope everybody's on the same team here. But it's a contact sport,

not an abstract, ivory tower analytical tool. And that's why I characterize it as a language that you use to talk about things, so that you can advance the state of play. [00:35:00] 

Chris: Yeah. How confident should we be in TRL assessments, then, that are self-reported by teams?

John: If you do it the right way: when I'm in the lab and I've got an innovation, I've invented something based on a physical phenomenon or what have you, I am probably the only one who's qualified to say I've really made this innovation. But there are methodologies, like published papers, or patents.

With those, you have improved confidence, not perfect confidence, because I've seen some really silly patents, things that could never actually be implemented, but they've got patents because the examiner said, okay, it satisfies all the checkboxes. That doesn't mean it could actually ever be done. [00:36:00] And there are known tripwires, like no perpetual motion machines, even though they try to get through all the time.

As you march along in technology maturity and you keep your eye on the application, on the goal that you're shooting for, you have to bring in the people that are responsible for those decisions. So if I'm self-reporting that I've reached TRL nine for this application, you should be pretty skeptical. If I'm self-reporting that I've reached TRL three or four, and I've got photographs and I've got technical journal papers that have been peer reviewed, you can be pretty confident that the technical community agrees with me. But as I march up the TRL scale, and I'm gonna say my algorithm is ready to go into this bank and run their financial software, you'd better have had [00:37:00] your algorithm vetted by people who are writing that software, operating that software, and involved in getting it installed into the system, in order for investors, or the people who are making decisions about the future of those systems, to have confidence. So there's a transition not just in the doing of the technology development; there has to be a transition in who gets to assess the technology maturity and riskiness and so on.

Chris: If I'm an investor, then, just on that point, the real assessment or validation could come from an industrial partner, a potential customer who has run the platform as part of the due diligence.

So that's what I should be thinking of. And to be fair, I think most investors do that, but I still know many who don't. And I guess it [00:38:00] depends; it's a situational thing to an extent that will be reliant on the company's view of where they're at along that journey. And by the way, that can be entirely right, because remember, they wanted to burn Galileo, because, you know, everybody knew what he was talking about was nuts.

But he was right. And the same thing is true in technology all the time, where you've got a bright idea and everybody else does it a different way. And so the due diligence process, the vetting, has to be done by somebody who's able to understand what the innovator's talking about.

Yes. Yes. You originally designed the framework for aerospace. It's subsequently gone on, and you mentioned this earlier in referencing the DoD, to almost [00:39:00] become a meme across every discipline and domain known to man. Do you think the framework can extend across all of these different areas: fusion and quantum and AI and, you know, high performance computing software, for example?

Does it have legitimacy regardless of domain, or are there limitations? So, generally speaking, it's absolutely applicable, because when it was first conceived, it was just, let's do a technology readiness review, and then it was, let's do an assessment of the technology readiness level a decade later.

Looking at all of these things, looking at flight systems, rovers, spacesuits, life support, software systems, flight computers, all of these things are functional [00:40:00] capabilities. And the detail of whether it's an AI chip or it was an original eight-bit processor 50 years earlier, it's still a computer, it's still a chip, it's still in a machine.

So it's very highly applicable. One key thing to keep in mind is how am I going to test the technology readiness of a particular component in a particular application. And so coming up with illustrations or examples, with evidence, is critical. If it's gonna be a piece of flight hardware, you're gonna wanna see photos, you're gonna wanna see spec sheets, you're gonna wanna see test results.

If it's an algorithm, you're gonna wanna see trial runs, you're gonna wanna compare speed and time for doing a [00:41:00] known job. So the evidence that you're looking for can change depending on the particulars of the application and the technology, but it's still a TRL five. You mentioned a point there that actually acted as a nice segue onto additional frames of reference for evaluating progress.

And so I want to get into these additional frameworks that you've developed. You created one called the R&D degree of difficulty, or R&D Cubed, which is designed to serve as a complement to Technology Readiness Levels. Where TRL tells you kind of where your technology is along that journey,

R&D Cubed, the degree of difficulty, tells you roughly, if I'm getting this right, how hard it's going to be to get to the next [00:42:00] level. Is that about right? Absolutely. And for the next level for the application that you're targeting. Okay. Yeah. You see, you're always coming back to this: for the application. Okay, so we have to bear that in mind.

So you've gotta be running these frameworks or systems, you've gotta be thinking about the application all the time. It's not just one general platform, right? It's always with regard to that application. I get it. Yep. Absolutely. So a good example may be: for a Starlink, I'm gonna be operating in low Earth orbit, I'm gonna be inside the Van Allen Belts.

The satellite's only gonna live five or six years. I don't have to think about a solar array that's gonna last for decades, and I'm not gonna think about radiation hardening, 'cause I don't have to. So that's my application, and I do all of that with that application in mind. If I'm talking about a satellite that's [00:43:00] outside the Van Allen Belts, in geostationary Earth orbit, or a Mars orbiter, and I want it to live for 20 years, and it's gonna be out there with galactic cosmic rays and God knows what,

I've got a whole different set of specifications for the same functional areas, PV arrays, flight computers, and so on, depending on the application. And that's true for everything. Yeah, that's how hard something is going to be. Is that something that can be calculated? Is that something that is judged, again, given your earlier point that this isn't something you can just plug into a computer?

Right. You know, how do you evaluate it? So when I devised the R&D Cubed, the research and development degree of difficulty, and its sister metric, the Technology Need Value, what I had in mind, 'cause at that point [00:44:00] I'd been working with technology investment portfolios for years, was what I had observed.

Depending on how hard the problem is to solve, you always need to have belts, suspenders, buckles, and buttons. You need more than one functionally different way to solve a problem. Now, if you look at some of the great innovations in history, what you see is a large portfolio of approaches. But because of the limitations of the laboratory at Menlo Park, Edison's crew did bulb after bulb after bulb sequentially.

So they had their portfolio, but they did them in series. If you look at the development of the ICBM [00:45:00] and all of its systems by General, oh, his name has gone outta my mind, I'll think of it in a minute, great man, what you saw was the parallel Atlas and Redstone and solid rocket motors, all in parallel, because it was time critical to not lose the Cold War to the former Soviet Union.

And so you see the same approach. Ah, General Schriever, the name came back. A great man; I met him on a couple of occasions before he passed away. A great man, a tremendous innovator, and a tremendous manager of R&D portfolios, but at the system-of-systems level. So in the case of R&D Cubed, the way you determine it: the R&D Cubed scale goes from either it's a slam dunk or it's next to impossible.

So an R&D Cubed of one: [00:46:00] I only need one path and I'm almost certain to be able to solve the problem. I can take a known piece of steel and fold it up and make a paperclip; that's an R&D Cubed of one. For an R&D Cubed of five, I'd better be allowing lots of time and lots of money, 'cause this is gonna require effort after effort after effort to solve it.

So really, the R&D Cubed is the inverse of the probability of failure in my research and development effort. The probability of success equals one minus the probability of failure raised to the power of the number of paths I have to try. And I can either do them sequentially or I can do them in parallel, depending on how critical it is.
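The relationship John states can be sketched in a few lines of Python; the failure probabilities below are made-up numbers for illustration, but the formula is the one he gives, success equals one minus the failure probability raised to the number of independent paths.

```python
# Sketch of the parallel-paths arithmetic John describes.
# If each functionally different R&D path fails independently with
# probability p_fail, then with n paths:
#   P(success) = 1 - p_fail ** n

def p_success(p_fail: float, n_paths: int) -> float:
    """Probability that at least one of n independent paths succeeds."""
    return 1.0 - p_fail ** n_paths

# A hard problem (high R&D Cubed): each single approach fails 70% of the time.
for n in (1, 2, 3, 5):
    print(f"{n} parallel paths -> P(success) = {p_success(0.7, n):.2f}")
```

One path gives only a 30% chance; five parallel paths push it above 80%, which is exactly why a hard problem justifies a portfolio of approaches, whether run in series or in parallel.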

And that's a lot of what we see today. You see some big, [00:47:00] market-driven opportunity, you see stupendous amounts of money being thrown at it, I'm sure you can imagine what I have in mind, maybe AI, and because they're all competing to get there first, you see this portfolio. It was a really hard problem, but so much is riding on success that you see all of these parallel paths being pursued by competitors, and they're all trying to get to the goal line first. A very different way to play football.

So the R&D Cubed, in fact, can be used more generally, just as a way of thinking about it: one is easy, five is next to impossible. But the way it was conceived, it was intended to be an actual, literal way to judge a technology research and development [00:48:00] portfolio.

In terms of how many parallel paths, how many attempts, is it gonna take to solve this functional problem? And that can give you insight into the length of time it may take, the resources you need, the different partners you're gonna have to bring on board, whether you have, you know, make-or-buy decisions, whatever it is.

But that actually adds quite a lot of context. Knowing that can bring quite a lot of context to your journey. Yeah, absolutely. It's one of the reasons why prizes are so interesting as an innovation tool. Because with a prize, you're putting out a lump sum of money and you're inducing a whole group of innovators, competing innovators, to all try different approaches, because the R&D Cubed is high.

Yeah. And if it was low, you wouldn't use a prize, you'd just buy it. [00:49:00] The TNV metric is there to measure strategic value: how important is this particular innovation, this R&D effort, to the application that I'm trying to pursue for my own strategic purposes. So TNV is strictly measured in terms of the application.

How badly do I need the goals and objectives of this investment item in my portfolio? Hmm. Do you know of anyone who uses all three, the TRLs, the R&D Cubed, the TNV? And what's the kind of relationship between them, the interplay? It sort of comes and goes. It really requires a decision maker who's

really got a responsibility for a broad [00:50:00] suite of types of technologies coming into a smaller number of applications, somebody who's working on a portfolio. So, I've talked about these at different times; there are two ways to aggregate them, and talking about them separately is not that useful.

The two ways that I've got to aggregate them are both in what's called the Technology Readiness and Risk Assessment, which is a successor of the Technology Readiness Assessment. For a TRRA, a Technology Readiness and Risk Assessment, you're looking at all three simultaneously. And then there are two different ways to summarize what you've learned from doing the assessment.

One is to map the technology readiness remaining to be achieved versus the consequences of failing to achieve it, that is, the [00:51:00] risk of failing to achieve it. And this is the traditional risk matrix, but for technology R&D. So you've got, you know, the low end is green, the high end is red, and in between is yellow and orange.
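As a rough sketch, the color-banded matrix John mentions might be coded like this; the 5-by-5 scale and the band thresholds are my assumptions, since the transcript only specifies the green-yellow-orange-red ordering.

```python
# Illustrative risk matrix for technology R&D.
# Assumed 1-5 scales and assumed thresholds; only the color ordering
# (green / yellow / orange / red) comes from the discussion.

def risk_color(likelihood: int, consequence: int) -> str:
    """Map likelihood-of-failure x consequence-of-failure to a color band."""
    score = likelihood * consequence  # 1 (best) .. 25 (worst)
    if score <= 5:
        return "green"
    elif score <= 12:
        return "yellow"
    elif score <= 16:
        return "orange"
    return "red"

# The "tall pole" is whichever subsystem scores worst on the grid.
subsystems = {"PV array": (2, 3), "RF transmitter": (4, 5), "structure": (1, 2)}
for name, (lik, con) in subsystems.items():
    print(name, "->", risk_color(lik, con))
```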

And you can actually do that plot, and look at either competing systems or the subsystems in an application on the matrix, and try to figure out where's the tall pole: where is the probability of failure highest and the consequences of failure worst? That tells you your risk, of course. The other way is you can collapse all of that down into one number, which I call the Integrated Technology Index.

You simply take the product of how hard is the technology, how mature is the technology, and how badly do I need the technology, and you multiply those three numbers together. That'll tell you, and this is at the system-of-systems level, is the [00:52:00] Redstone or the Atlas the better solution for ICBMs now, that kind of thing.
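The transcript only says the three numbers are multiplied, so the scales and the remaining-maturity convention below are assumptions; a minimal sketch of that Integrated Technology Index might look like this:

```python
# Integrated Technology Index (ITI) sketch: product of difficulty,
# remaining maturity, and need. The scales and the "steps remaining"
# convention are assumed for illustration.

def integrated_technology_index(rd3: int, trl: int, tnv: int) -> int:
    """rd3: R&D degree of difficulty, 1 (slam dunk) .. 5 (next to impossible)
    trl: current TRL, 1 .. 9 (we count the steps left to TRL 9)
    tnv: Technology Need Value, e.g. 1 (nice to have) .. 5 (critical)
    """
    steps_remaining = 9 - trl
    return rd3 * steps_remaining * tnv

# Comparing two competing system concepts for the same application:
concept_a = integrated_technology_index(rd3=4, trl=3, tnv=5)  # hard, immature
concept_b = integrated_technology_index(rd3=2, trl=6, tnv=5)  # easier, further along
print(concept_a, concept_b)  # the larger ITI flags the riskier investment
```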

And you can do this for any kind of system-of-systems-level decision. It works really well. I honestly continue to use it, and I continue to promote it. I have not had the opportunity to pitch it to the GAO yet, to get them to force it on the DoD. Yeah, yeah. Um, there is a critique that I hear from time to time from founders of TRL-driven thinking: that it can be too linear, too much of a stage-gate type process.

And that it fails to account, or the perception is that it can fail to account, for the place where [00:53:00] it's perceived real breakthroughs take place, which is when you have to go back, you have to cycle back, you have to question the assumptions on which some of your initial moves were made, or you have to be operating on different planes simultaneously.

Is that fair, or are we not using the framework right? I think it's that the framework's not being used right. Okay. Absolutely, because, as we've been talking about, it's incredibly adaptable, it really is. I guess it's like one-on-one versus zone defense.

Yeah, maybe. Yes, yes. If you're in zone defense and your objective is to stop somebody from progressing, then, you know, TRLs can feel really constricting. But if your objective is to make the goal, [00:54:00] then they can be enabling, they can be profoundly enabling, because you can say, here's the photo of my widget working in a vacuum.

I'm ready for the next level, I'm ready for that next tranche of funding. And it depends on what the philosophy is of the one who's using the methodology. Yeah. Now, you've applied all of these, and we were talking about really big projects earlier: in your time at NASA you were involved in some very big projects and managing some very, very big portfolios, which I'd like to come onto a little later. But I wanna talk a little bit about how this was applied.

I mean, the thing that I know you for is the TRL framework, and then I also got to know you through your solar power satellite program, SPS-ALPHA, which is, you know, what people who are in this world know you for, I think, probably just as much as they know you for the TRL framework.

And I wanna talk a little bit about [00:55:00] its application on a big example like this, on a big megaproject, because that is a megaproject. Mm-hmm. You wrote a paper, which I had the good fortune to read, a 2023 paper submitted to the International Astronautical Congress, and it was on modeling megaprojects.

And I'm gonna read from my notes here, but you argue that early-stage analysis has to be both physics-based and parametric, which means that it has to obviously reflect the real physics of the system, but it also has to allow you to vary key assumptions in order to see what breaks. Why?

Can you talk a little bit about the relationship there? I appreciate we're now diving quite deeply into systems and the management of progress within those systems, but I think this is quite important for those involved in [00:56:00] big megaprojects, as a number of our listeners are.

So, you know, can you dive into why the combination of the physics-based and the parametric matters so much? Yeah. So, if you don't do high-level modeling, and maybe it's just on a whiteboard, or it's just in, you know, a desktop-based tool, if you don't connect the dots,

if you don't try to build a model of the thing that you're trying to develop, and do it analytically, based on pretty good physics (it doesn't have to be perfect, but it has to be pretty good; like, I'm gonna use the rocket equation, I'm not gonna cheat and somehow say that fuel tanks are weightless),

then you can't get a good understanding of what you're trying to accomplish. Now let's go to the [00:57:00] case of the solar power satellite, where a critical issue is the end-to-end energy conversion efficiency, starting with the incoming sunlight and ending up with the outgoing voltage into the grid on the Earth.

Well, in between these two there's a whole series of conversion steps. First, for example, I might have my photovoltaic cells, then I might have my RF transmitter, and then I've got the RF receiver. So let's just take those three. There are like 15 or 18 of them, but let's take those three now.

The RF transmitter has an efficiency, DC to RF. The PV cells have an efficiency, sunlight to DC. And on the ground there's an RF-to-DC efficiency. [00:58:00] And what I wanna be able to do is go in, once I've built my model, and change one of those efficiencies from the baseline, make it either better or worse.

And at the end of the day, I'm gonna look at high-level figures of merit, like mass and cost and levelized cost of electricity. And in the background there's all sorts of things like transportation and construction, all this stuff, like any big energy industry. But if I can't do these kinds of variations, I cannot understand how important to my goal, which is to be a competitive cost of energy, is an improvement in the efficiency of the transmitter, or an [00:59:00] improvement in the mass or the efficiency of the PV array.

And if I do these kinds of analyses, I can see the curve, and I can see where's the knee in the curve. And if my curve is such that, you know, the state of the art is pretty darn good, I can make changes of five or 10 or 15% in that number and it really doesn't change the cost very much. Or I could look at something like the efficiency of the transmitter and say, you know, if I could just get this from 70% up to 80%, it cuts the cost in half, because it does these different things, which I can't understand unless I do the modeling.
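A toy version of the parametric model John describes: the end-to-end efficiency is the product of the stage efficiencies, so you can perturb one stage and watch the system-level number move. The stage names and baseline values here are assumptions for illustration, not numbers from his actual SPS model.

```python
# Toy end-to-end efficiency chain for a solar power satellite.
# Real models have 15-18 stages; we use the three from the example above.

from math import prod

BASELINE = {
    "sunlight_to_dc": 0.30,  # PV cells (assumed value)
    "dc_to_rf": 0.70,        # transmitter (assumed value)
    "rf_to_dc": 0.80,        # ground receiver (assumed value)
}

def end_to_end(stages: dict) -> float:
    """Chain efficiency = product of all stage efficiencies."""
    return prod(stages.values())

base = end_to_end(BASELINE)
# Parametric variation: transmitter efficiency from 70% to 80%.
varied = end_to_end({**BASELINE, "dc_to_rf": 0.80})
print(f"baseline {base:.3f} -> varied {varied:.3f}")
```

In a real study each stage would be swept across a range to find the knee in the curve, and the efficiency would feed into mass and cost models downstream.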

And so that's the reason why this high-level systems analysis thinking is so critical if you're going to be doing [01:00:00] development that has to work in the real world. Yeah. It's not a question of getting in there and trying to do everything and understand everything, but if you don't do it, then you don't understand anything.

Is there a single parameter which, I don't know, tends to have the most leverage on the economics of a large-scale energy or space system, to use your examples? You mentioned launch cost; is it the conversion efficiency you were just talking about, or is it something else?

So there are some. And by the way, all of this sensitivity analysis, this where's-the-knee-in-the-curve, relates to the Technology Need Value. It's all a way of gaining insight into how badly do [01:01:00] I need the results of this R&D effort. Yeah. Because if doubling the efficiency makes no difference at all on the cost, then why bother?

Certainly for the space solar power case, the overarching thing is logistics. It's, you know, captains study tactics, generals study strategy, and statesmen study logistics. For the overall system, things like Earth-to-orbit transportation costs are not the most important parameter, but they're the thing that affects everything.

So you've gotta worry about the cost of transportation. The one that is interesting, which has really been neglected, is the cost of hardware. And it turns out, with the cost of hardware, it's the economy, stupid, going back to Clinton and Gore. Yeah.

The cost of the hardware was a sort of hidden variable. You know, space systems have traditionally cost thousands of dollars a pound, or $50,000 a pound to use US numbers, or $50,000 to $100,000 a kilogram. When suddenly these people started building mega-constellations, like Starlink and so on, they're making thousands of copies in factories.

Instead of making, you know, bus-sized Swiss watches in laboratories, suddenly the cost of hardware goes from a hundred thousand dollars a kilogram to $900 a kilogram. The number of units is an implied figure of merit that is so vitally important and is [01:03:00] directly related to the overall economics, more than the cost of launch.

But when you combine the two, just on that score, the two things that are the most critical these days for SPS concepts, space solar power (SSP) concepts, are these: if you've reduced the cost of launch from $40,000 a kilogram to $400 a kilogram, that's a 99% reduction or better; and if you reduce the cost of the hardware from a hundred thousand dollars a kilogram to a thousand a kilogram, you've got a 99% reduction on top of a 99% reduction, and all of a sudden everything else changes.
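The arithmetic John is doing can be checked in a couple of lines; the dollar figures are the ones he quotes, and the "reduction on top of a reduction" is expressed here as the product of the two remaining-cost factors.

```python
# Back-of-envelope check of the compounded cost reductions in the transcript.
launch_old, launch_new = 40_000, 400          # $/kg to orbit
hardware_old, hardware_new = 100_000, 1_000   # $/kg of built hardware

launch_reduction = 1 - launch_new / launch_old        # 0.99
hardware_reduction = 1 - hardware_new / hardware_old  # 0.99

# "99% on top of 99%": each factor leaves 1% of the old cost, so the
# combined factor is 0.01 * 0.01 = 1/10,000 of the original.
combined_factor = (launch_new / launch_old) * (hardware_new / hardware_old)

print(f"launch cut {launch_reduction:.0%}, hardware cut {hardware_reduction:.0%}")
print(f"combined remaining cost factor: {combined_factor:.0e}")
```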

So yeah, that's another good one, by the way: the combination of factors. I improve my efficiency not because it makes a big difference on its own, but because it reduces the mass of my system, and if I reduce the mass of my system, I reduce the cost of putting it out into [01:04:00] space, because it's lighter.

And so there's a cascade effect, which is what makes it so important to do these integrated physics-based models, so you can see what connects to what.

I have one more question there that I really wanna ask. We've been talking about lowering the cost of whatever it is through manufacturing and the availability of materials: over time, the more units you manufacture, the more the costs can come down.

But what about when you're trying to build or produce something, let's use fusion as an example, that relies, depending on the platform, on being built with high-temperature superconducting magnets? The materials [01:05:00] for those magnets, you know, we may not have those ready yet.

You know, I think a big part of the challenge in realizing fusion is that the supply chain doesn't entirely exist yet. So we have to build the supply chain and catch up. We're not building the materials in great volumes yet. At some point we may, and then obviously with more manufacturing the prices can come down.

But when you're still, you know, decades away from being able to realize your innovation, and you're relying on manufacturing that hasn't really occurred in the volumes you need yet, how do you model that? How credible can your model be when you don't yet know what that kind of future looks like?

Well, especially in this case, which is a good one, and there are several others, the model can be fine. You just haven't reached TRL one yet. [01:06:00] At the heart of your machine, I have to have those room-temperature superconductors in order to have the magnetic field strengths that are gonna allow me to do fusion with helium-3, which is the only way I get a fusion machine that doesn't become radioactive and embrittled and all these things.

So there's this food chain from really high magnetic field strengths all the way through everything else, right up to, and now I need to mine helium-3 on the Moon. Yeah. Or get it from Jupiter or something. But you can do the modeling. You can build a prototype, you can build a mockup.

But in the absence of some critical component, and that's why it's gotta be physics-based, the modeling may help you to have the insight to say, okay, I [01:07:00] built my mockup, I've learned a lot, I'm gonna apply it over here and over here and over here, but the fusion machine is gonna have to wait until I figure out how to make 50,000 kilometers of room-temperature superconductors, and you're not there yet.

Space elevators are the same kind of thing. You know, they've got the solution, but they have to figure out how to make 70,000 kilometers of single-molecule fiber. And that's the thing that they're working on, because they've recognized that's their Achilles heel. Yeah, to get to the market. Yeah.

Yeah, I get you. Let's talk a little bit about you, as I feel we should; you've been so generous with your insights so far, John. You left NASA in 2005. Mm-hmm. And you were responsible for the Exploration Systems Research and Technology program: more than [01:08:00] 800 million dollars a year in budget, over 100 projects, more than 3,000 people.

That's managing innovation at real scale. What does it teach you, managing a portfolio of innovation at that scale, that you just cannot learn in a smaller environment?

Four things. One, details matter. And that's not only the numbers, it's also the technical content. I started out as a physicist, and I think it gave me a distinct advantage not to be a specialist when I was an undergraduate or a graduate, and to look at a wide variety of technical issues and solutions and applications. The details matter.

[01:09:00] So just being a manager who doesn't worry about the technical information and the budget information and the people, it doesn't work as well. Second, I'm a huge believer in form follows function. So clearly identify what it is you're trying to accomplish, in terms that can be quantified.

Like, I'm trying to improve the cost of this system; I'm trying to improve the pace at which we will do exploration on the Moon or on Mars. So really looking, in both qualitative and quantitative ways, at the goals and objectives of your investment portfolio is critically important.

And then form follows function: creating a program work breakdown structure, [01:10:00] both the technical work and the budgetary structure, that follows what it is you're trying to accomplish. So you create a program, a work breakdown structure, budget line items, pieces that all fit into this framework of goals and objectives.

And then they translate down into the numbers. And lastly, lieutenants who get control of their piece and who are competent and follow your philosophy. So let's say you had five big work packages in your billion-dollar program. You get five people who follow your philosophy, who know what you're trying to accomplish.

They agree with it, or at least they're willing to go along, and they're super competent in their part of the portfolio, and then they work the flow of information up and down. So it's [01:11:00] details, it's goals and function, it's organization, and it's people.

Yeah, yeah. How do you, um... And then I had no problem. I could handle literally thousands of people and hundreds of projects, because I knew who was responsible for what. I knew what they were trying to accomplish, I knew how much of the money they had, and I knew what they were supposed to be spending this year.

It was great. How diverse were the attitudes, if you had any diverse attitudes, within that environment? Or does everyone come up through NASA pretty much aligned in terms of how they're going to manage things? I wouldn't say in how they see the world, I don't expect that.

But in terms of how they're gonna manage their time [01:12:00] and manage projects, is there a common platform? Is there a common management approach? Well, there were differences in terms of how different people prioritized either making research progress at low TRLs versus moving forward on applications at higher TRLs.

And if you're gonna move forward with an application, you've gotta concentrate your money, set aside making more progress at the materials level, and focus on building a widget. So there were those kinds of differences, and there are also discipline differences. Even coming up through NASA, there are different research laboratories.

There are different people who have backgrounds in propulsion versus spacesuits versus structures, and they have their own feelings about what's important. And, you know, all of those things may end up going into the application. I know we need to be working on this rocket [01:13:00] propulsion system, when in point of fact all the issues may be in manufacturing. Yeah.

So there are those differences, and those are just honest disagreements. There are also always institutional barriers, where my loyalty is of course to NASA, but it's also to the organization that I came from. And so when I make my decisions, or when I'm on the field, I'm trying to pass the ball to my cousin.

I'm not always looking at the goal and trying to pass it to this guy that I hate. I'm speaking metaphorically now. Sure, sure. I wanna see resources go back into the laboratory from which I came. So there are institutional predispositions, I'll say. Mm-hmm. And there's always, and this has been a perpetual struggle [01:14:00] that goes all the way back to the seventies,

the question of where the leader of the organization puts their emphasis: working on what we're gonna need for tomorrow, versus working on the crises that confront us today. And over and over and over again, money for tomorrow, money for the day after tomorrow, gets cut out of the budget, we'll fix it later.

We'll fix it later, and it gets put into the crisis that is there today. There's always a crisis today. And this is like the valley of death term you mentioned. A phrase I learned when I first went to NASA in the eighties: eating your seed corn. A good farmer recognizes that if they eat the corn that they've set aside for next year's planting, they're gonna be moving next summer.

'Cause there's not gonna be any crop. [01:15:00] And so you have to have strategic leadership that is willing to accept the pain of a failure today in order to assure that they have a future tomorrow. And that's really hard. It depends on senior leadership, senior strategic leadership. Yeah. If you were advising a founder of a deep tech company,

and they're currently at, say, TRL 5, so they've got an advanced prototype, they've demonstrated their technology at lab scale, and they're about to go out and raise their Series A, what are the three things they most need to understand about their journey ahead? What's your market?

Validate your application, what it is you're trying to develop. Make [01:16:00] sure that's what you wanna develop. Test to see if the thing that you're developing has got legs: that it's gonna be adaptable, evolvable, scalable to what you're gonna need to be doing tomorrow. And lastly, I would say, always double-check your foundations.

Before you scale up, before you go to market, before you start building a system, make sure that the kernel of your technology is there. That is, not just "I can make this work at TRL 5," while it turns out that to get to TRL 7, I need a miracle. Because here it works great, it works great, but it turns out that in a system the damn thing runs hot, and it's gonna [01:17:00] overheat and it's gonna blow up everything, because I gotta have a big radiator and I gotta have this.

So check your key performance parameters on the kernel, the thing that's important to you, and make sure before you go forward that it's gonna scale, adapt, evolve, and work in the application.

Chris: Yeah. One last question. Where can our listeners keep track of what you're doing, of what you're writing, of what you're thinking?

John: So, I've got a business website; it's just there for people to be able to find me. It's Artemis Innovation, and Artemis Innovation Management Solutions is my consulting company. I've got a couple of different startups. I work a lot with the International Academy of Astronautics and the International Astronautical Federation.

I publish frequently at the [01:18:00] International Astronautical Congresses. I do webcasts, I do broadcasts. I teach a course on innovation and entrepreneurship at Kepler Space University, which is an online university. I have a whole master's program with six courses that make up a two-year program.

And I'm working on a couple of books that are gonna be coming out shortly. I've got one book, it's sitting up over here, I can bring it forward. This is my book from a decade ago on space solar power; it's got a lot of my thinking in it: The Case for Space Solar Power.

Chris: Have you inspired Elon?

John: Well, I don't know about that. Elon worked hard for years to oppose space solar power. Do you have time for a 30-second story?

Chris: Oh, yeah, yeah, yeah.

John: So I [01:19:00] used to actually be able to speak to Elon, before he became, you know, the richest man in the world and really busy.

Fifteen years ago, whenever he went out in public, he would get asked. He wants to talk about his car, and he wants to talk about SpaceX, and he wants to talk about his plans for Mars, and somebody in the audience would raise their hand and say, why aren't you working on solar power satellites?

He hated that. He absolutely hated that, and it came up every damn time. And I think he reacted as much as anything to being pestered, which he just hated, you know? 'Cause lots of people either hate solar power satellites or they love them. They're, you know, cultists, and I don't mean that in a negative way.

They believe something about it as an article of faith, rather than [01:20:00] based on analysis and work in the field, or reading the literature. And Elon just hated that, so he turned against it. But as it turns out, with AI in space and big power systems, look at SpaceX and Starlink: Starlink at 10,000 satellites is something like 40 megawatts of solar power in space.

It's the biggest space solar power system that's ever been built. And so I think he's now got the evidence of his own venture. It's 400 times bigger than the space station. I mean, for God's sake, it's amazing: more power than the space station. And that's all come in the last five years.
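John's back-of-envelope comparison can be checked quickly. The per-satellite figure here (roughly 4 kW of solar array power) and the space station figure (about 100 kW of array output) are illustrative assumptions supplied for the arithmetic, not numbers from the episode:

```python
# Back-of-envelope check of the Starlink vs. space station power comparison.
# Assumed figures (illustrative, not from the episode): ~4 kW of solar
# array power per Starlink satellite, ~100 kW for the station's arrays.

SATELLITES = 10_000
KW_PER_SATELLITE = 4.0      # assumed per-satellite array power
STATION_ARRAY_KW = 100.0    # assumed space station array output

constellation_kw = SATELLITES * KW_PER_SATELLITE
constellation_mw = constellation_kw / 1_000
ratio_vs_station = constellation_kw / STATION_ARRAY_KW

print(f"Constellation power: {constellation_mw:.0f} MW")   # 40 MW
print(f"Times the station:  {ratio_vs_station:.0f}x")      # 400x
```

Under those assumptions the two claims are consistent with each other: 10,000 satellites at ~4 kW each gives ~40 MW, which is ~400 times a ~100 kW station.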

So I think, I did not get to him; I think the evidence got to him. The fact that it's right there, and he did it himself, makes it evident.

Chris: Yeah. [01:21:00] Well, we will see. We will see what he does next. John, I cannot thank you enough for the last hour and 20 minutes. This has been marvelous.

Thank you so much for your wisdom. You have given our listeners, and me, a huge amount to go away with. So thank you so much, on behalf of everyone.

John: Well, Chris, thank you for the conversation, and for your extensive preparation. The conversation has been just a pleasure.

Chris: Wonderful. Thank you again. 

You've been listening to the Lab to Market Leadership Podcast, brought to you by Deep Tech Leaders. This podcast has been produced by Beauxhaus. You can find out more about us on LinkedIn, Spotify, Apple Podcasts, or wherever you get your podcasts.