Inside CVC by U-Path
Welcome to Inside CVC — the podcast where corporate venture capital meets strategy, leadership, and systemic change. Hosted by Philipp Willigmann and Steve Schmith, the show brings senior voices from across corporate venture, startups, investment, academia, and policy to the table.
Each episode goes beyond buzzwords to explore how capital, technology, and leadership shape the future of business and society. From AI and robotics to geopolitics, board governance, and inclusive innovation, Inside CVC is designed for executives and policymakers who want to understand not just what’s happening — but what to do about it.
Inside CVC: Edward Tenner on Unintended Consequences, Deep Organizations, and Boardroom Risk
What do the Hindenburg, the Titanic, Boeing, and the Challenger disaster have in common?
According to historian and author Edward Tenner, they were not failures of incompetence. They were failures born from success, confidence, and blind spots.
In this episode of Inside CVC, Tenner explains why highly capable organizations still make catastrophic mistakes. From the smoking lounge on the Hindenburg to safety measures that destabilized the SS Eastland, which capsized in 1915 while docked in the Chicago River, we explore how innovations designed to reduce risk often create new vulnerabilities.
We discuss:
- Why disasters often strike respected, well-managed institutions
- How shared assumptions normalize risky decisions
- The illusion of safety and how protections can backfire
- Why reputational risk and public outrage should be top board concerns
- What today’s leaders can learn from long-term “deep organizations”
Tenner argues that resilience is less about speed and more about depth. Organizations that protect R&D, maintain reserves of expertise, and adopt a 20 to 25 year time horizon are better positioned to navigate uncertainty.
For boards facing AI acceleration, automation, and geopolitical volatility, this conversation is a reminder: the most dangerous risks are often the unintended ones already forming beneath the surface.
Catch up on all episodes of Inside CVC at www.u-path.com/podcast.
Welcome to Inside CVC, the podcast that brings together leaders in innovation and capital investment to explore the trends shaping the business of corporate venture capital. I'm your host, Steve Schmith, and together with Philipp Willigmann, we're speaking with corporate investors, entrepreneurs, and ecosystem builders driving the future of innovation. Inside CVC is brought to you by U-Path Advisors, helping corporations and startups unlock sustainable growth through strategic partnerships. To learn more, visit u-path.com. And to catch up on all of our episodes, search Inside CVC on your favorite podcast platform or visit u-path.com/podcast. Now, a quick heads-up: if you're a regular listener, this episode may feel a little different. Instead of a polished handoff and clean transition, you're going to hear us drop into a conversation midstream. An unexpected technical issue forced us to troubleshoot in real time. No reset, no do-over. And honestly, that's kind of the point of this episode, because today's guest has spent his career studying what happens when complex systems collide, when assumptions fail, and when well-intended design produces unintended consequences. Ed Tenner is a historian of technology and culture. Best known for Why Things Bite Back and Why the Hindenburg Had a Smoking Lounge, his work explores how safety, efficiency, and progress can paradoxically introduce new risks, especially inside large, successful organizations. So rather than edit around the disruption, we leaned into it, because innovation rarely breaks cleanly, and neither do the systems that govern it. So let's jump in. Here's our conversation with Ed. Ed, welcome to the show. If you regularly listen to the show, this is going to be a little bit different, because we are now about six minutes into trying to identify an echo and really create the type of show that I think our audience has come to expect. So why don't we just pick it up there?
This is something that's disruptive to what we're trying to do today, and we've been troubleshooting it. Ed, you write about situations like this, so why don't we just hop in there? Talk about what advice you would give Philipp and me as we try to troubleshoot this, and maybe take the conversation in a different direction.
Edward: Well, it happens all the time that unexpected interactions among technical systems occur when people least expect them. For example, there was a point after the Legionnaires' disease episode in Philadelphia when suddenly IBM computers around the country were breaking down. Nobody knew what was happening, but the tape drives (and there were still tape drives at the time) were freezing up. Finally, the problem was referred to a team at IBM's Almaden Research Lab, and here's what they discovered. They found that after the Legionnaires' outbreak, HVAC systems around the country had added bactericides. The bactericides had tiny traces of tin, and the tin molecules wafted from the HVAC systems to tape drives located near the vents. Those particles of tin were what shut the tape drives down. Now, here was a fascinating thing, because this is the first case that I know of in which a machine suffered indirectly from a human disease.
Philipp: Thank you so much for sharing that; I had not heard about it. I'm so excited to have you on the show. Welcome to Inside CVC. I read your book, and you can give our audience a little bit of insight there. The title of the book is Why the Hindenburg Had a Smoking Lounge, and you talk a lot about your area of research, unintended consequences. I really look forward to the discussion today. My first question, just to jump in: maybe you can give our audience a little background on your area of research, how you got into the field of looking into some of these tragic accidents, and how unintended consequences have an impact on everyday life, especially on innovation, on boards, and on society.
Edward: What got me into unintended consequences was my experience as an editor at Princeton University Press in the 1980s. I was Princeton's science editor; I sponsored Richard Feynman's last scientific book, QED. I noticed that as more and more of our computers were networked, and we were supposedly moving into the age of the paperless office, the recycling bins were filling faster and faster. That intrigued me, and it led to an essay called "The Paradoxical Proliferation of Paper," in which I showed how, indirectly, through many social mechanisms, computerization was leading to more paper use (which has since declined but is still pretty prominent). The resulting essay was what inspired a book, Why Things Bite Back, which resulted from a successful Guggenheim proposal. That launched me into the study of many positive as well as negative unintended consequences, and a kind of Copernican attitude toward the unintended: to me, it's really the unintended that needs understanding and investigation. Most of what happens, positive as well as negative, is unintended. And yet it's often dismissed, often neglected as a kind of interesting exception not worth considering. So I decided to reverse that and focus what I was doing on unintended consequences.
Steve: Ed, you talk a lot about neo-institutionalism and why resilient, deep organizations matter now more than ever. Certainly, if you look around the world as a business leader, there's a lot of uncertainty. So can you talk a little more about why deep organizations are central to resilience, and what today's boards miss when they dismiss institutions as slow or bureaucratic?
Edward: I define deep organizations as those that have an extended time horizon; whether they are for-profit or not-for-profit, whether they are secular or religious, they believe that they have a long-term responsibility to the community. They are definitely interested in profit, and may be very keenly interested in profit, but they believe in profit over the long term, which may mean sacrificing some profit in the short term. IBM is one of those corporations, obviously, but you could also apply that, for example, to the Catholic Church and to other religious organizations. There is very often a time horizon of 500 years. For example, Harvard's new science center in Allston was explicitly built to last for 500 years, as was the Roman Catholic cathedral in Los Angeles. This sense of responsibility means that you have a different attitude toward short-term crises. That attitude is: well, we have to get over this, we have to make some concessions, we have to cut back on this or that or do things differently, but we have to keep our eyes on the long term. During the Depression, these corporations made sure to keep their top researchers, and in some cases they also made sure those researchers used the extra time to take courses at universities.
Philipp: Ed, thank you. One of the examples you just brought up contrasts sharply with what happens today, right? I love what you're saying, that some of these big organizations decided to stay the course, to stay focused on R&D and investing in innovation. What we're seeing today, oftentimes, is that the moment something changes in the ecosystem or the environment, and there's economic pressure, the first thing that ends is innovation or R&D. And you wrote about the Hindenburg, you wrote about the Titan submarine. So why do leaders allow obviously irrational risks to persist? Why would they end innovation even knowing that it's actually harming them later on? Can you talk a bit about that, and maybe connect it to things like the smoking lounge on the Hindenburg, where people today would ask, why would there be a smoking lounge on the Hindenburg? And also, why would people jump into a submarine that is not certified? I would love to hear your thoughts on that.
Edward: By the way, there are two different issues there, so could I talk about them separately?

Philipp: Yes, absolutely, please.

Edward: Today there is a different attitude toward corporate accountability and responsibility that grew out of changes in business schools in the 1970s and 1980s. There was not the same doctrine of responsibility to shareholders before; there was more of a sense of what are today called stakeholders, and the idea was that corporations had a collective responsibility for assuring the welfare of the community. They were supposed to be public-spirited, and they were supposed to be setting standards for the future. The exhibitions of the 1930s, in which, for example, General Motors presented the city of the future and the highway of the future, were a very good example of this. So the paradox is that during the Depression, when everything was so very tight, corporations very often not only didn't cut back but often increased their research. For example, synthetic detergents were developed by Procter and Gamble in the 1930s. Using a patent that the big German chemical companies had rejected, they managed to make it work and created the brand Tide, which is still a dominant one. You could also see how IBM continued its development, and Bell Labs continued its development. They all sensed that the Depression could be a source of opportunity. And studies of patents have shown that the patents of the Depression were, as a decade, the most important in American history, even though their results weren't fully apparent until after the Second World War. This introduces another point: we now have a generation of people who are more short-term in their outlook. The people of the Depression were longer-term in their outlook. Even if they were poor, they had an idea of a time horizon that was important to them, and they were willing to make sacrifices for it.
There was a great resilient spirit during the Depression, and now, amid all our prosperity, we seem to have lost it. This was true of individuals, of families, and also of corporations. We've become very spoiled.
Philipp: And if you think about that, bringing it to today's world (we haven't yet touched on the smoking lounge or the submarine), these unintended consequences: what is the equivalent today around AI, robotics, and climate tech? How can you compare that?
Edward: What intrigues me about the smoking lounge of the Hindenburg was what I call a community of expectations: assumptions about risk that are shared by the corporation, by its employees, and by the customers. In the case of the Zeppelins, the smoking lounge on the Hindenburg, I argue in the title essay, was there because the Hindenburg was twice as expensive as first class on the most luxurious ships of the day, like the Normandie. It was the Concorde of the transatlantic steamship era. That meant that half the passengers were smokers, and the ship could not be economically viable unless you could accommodate them. That was the first part. The second part, though, was a technological accommodation: the lounge was kept under negative air pressure, so a fire that broke out there would not necessarily spread, because it could not escape. However, if the steward who was mixing drinks was neglectful, a passenger might step out of the protected lounge with a lit cigarette, with disastrous results. But people were willing to accept that risk, because they thought it was very unlikely, but also because smoking was so normal. They were also flying with Zeppelin because Zeppelin was an astoundingly safe company. One of the things I found in teaching a seminar on disasters at Princeton is that disasters very often happen to companies with the best records. The Zeppelin Company had never lost a passenger, although there had been well-publicized and horrific disasters in American and British airships. We still don't know what caused the fire on the Hindenburg. It was almost certainly a spark, but we still don't know what caused the spark. And this is typical of many technologies: we have a new technology with new chemicals and new processes, and we have to wait in real time to see when something goes wrong.
Right now they're investigating the Lisbon tram accident. The cable broke, and it broke in the middle of its lifespan. We don't know what made the cable break, whether there was a problem in inspections or a hidden defect in the cable. It may take months before we have the answer. But the line had a remarkably safe record for over a hundred years. Many other disasters happened to highly competent corporations. The White Star Line was technically excellent. The Board of Trade was meticulous in inspecting the Titanic, and in fact the Titanic had more lifeboats than were required under Board of Trade regulations. Captain Smith was the most respected of the North Atlantic captains. But organizations don't realize that there are latent problems developing, that there are unusual circumstances, and they usually don't plan enough for those unknown unknowns.
Steve: So, Ed, you talk about the illusion of safety. How do safety features sometimes foster overconfidence, and perhaps create new vulnerabilities, those unintended consequences that you describe?
Edward: The classic case of safety measures biting back was a ship called the Eastland, which capsized in the Chicago River in 1915 with a loss of hundreds of lives. It is said that more passengers were lost on the Eastland than on the Titanic. This happened during an excursion of young workers from a Western Electric factory in the western part of Chicago. There is a book about the Eastland by an economist who studied the history of the ship, and what he found was that the ship, which had gone through a number of changes of ownership, was not terribly stable to begin with. After the Titanic, it was decided to equip the ship with more lifeboats. The problem was that the additional lifeboats, and the deck stiffening needed to install them, made the Eastland even more unstable. Combined with the mistakes of the captain during loading and the behavior of the passengers, the ship capsized during this excursion, before the voyage began, and many of the passengers were lost. This is a classic case of a safety technology that was adopted without thinking about how it affected the entire system.
Steve: So then what are some of the hidden trade-offs corporate boards should surface before greenlighting innovation, when you consider this illusion-of-safety theme?
Edward: The most important thing to me is reversibility. An extreme precautionary principle, in which you don't do anything because it may have adverse consequences, is a formula for stagnation, and that is not what I recommend. The problem often is that the signs of danger are disregarded, and that people don't believe they can pull back. So the most important thing is to launch new technologies carefully, monitor them carefully, and be prepared to retreat quickly if a danger arises. To me, that's the only way to deal with it.
Philipp: And Ed, you wrote a lot about an iconic American company, Boeing, and I read your piece on Boeing a few times. Can you help our listeners understand how you frame fragility inside one of America's most iconic companies? What happens when a deep organization hollows out? I'd love your thoughts on that.
Edward: Well, Boeing was a classic case of how a merger could affect corporate culture. In this case, it was McDonnell Douglas executives rather than Boeing executives who shifted toward shareholder returns as the primary criterion. Before that, Boeing, of course, was concerned about profits, but it also took a long-term view toward its leadership in the field and toward its reputation. The trade-offs seemed to be worth it at first; very often, economy measures can seem to be paying off. But the problem is that in accounting there really isn't an effective way to account for that risk when a company is showing its results to shareholders. There are lots of things excluded from the calculations that should be there, and that was clearly the case with Boeing.
Philipp: And based on the research you have done, what are the lessons for other boards and boardrooms from Boeing and McDonnell Douglas?
Edward: To me, the lesson is to focus on the longer-term reputation of the company in its field, on its relationships with its customers, and on the quality of its research and development.
Steve: I want to turn a little bit to Schumpeter, who emphasized innovation and the creative destruction that comes with it. Are today's corporates automating themselves into irrelevance? And what's the new board mandate when you consider that warning?
Edward: Well, Schumpeter didn't say, as I understood his book, which was published in the early '50s, that corporations weren't innovating. What he said was that the day of the heroic individual inventor was over, and that invention now was teamwork done in the settings of large corporations. I think he was right in that, even though there have been some outstanding individual inventors. The problem for corporations is that for a long time they have had a division of labor with universities. They believed that basic research could best be done in academic settings, in connection with graduate teaching, and that they would work increasingly on the development of products using the patents and other ideas that had been developed in academic settings. When I was a science editor at Princeton University Press in the late 1970s and 1980s, I saw the corporate research environment in transition, both at IBM and at Bell Labs. The two had a lot in common in that they both had many people working on theoretical questions that did not necessarily have immediate applications in products, but it was very important to have people there who were the experts, because you never knew when you were going to need them. That's a mark of the deep organization: it is overprovisioned in expertise. One of my authors wrote a book on the oceanographers of the Scripps Institution of Oceanography at the University of California, San Diego. She was interested in why the government supported them when most of their work did not have immediate Navy applications. And the reason she discovered in her fieldwork was that if an emergency breaks out, you can't wait to train a new crop of PhDs. You have to have reserves. The critical thing about the great corporation and the deep organization is that it has reserves.
When I visited IBM, I met the world's leading keyboard designer, and he said that although he didn't have much day-to-day interaction with the people who built keyboards, it was really important for IBM to have the best keyboard designer there. He didn't describe himself as the best, but he said it was really important for IBM to be on top of keyboard design. He was one of the people who later left for academia, and when we talked on the phone afterward, he said that he missed it; I think we both used the metaphor of depth. It was a human encyclopedia. Bell Labs, too: you could find an expert on just about any subject there. That was true, for example, of some people who could have been professors anywhere but who decided to remain with corporations. They were there because they had a chance to collaborate in a way they couldn't in a university, where they'd be members of a department, their colleagues would be other people in their department, and they would be producing PhDs in their department. But at Bell Labs you could go down a corridor and see people in half a dozen disciplines, and you could chat with them. It was in a lot of ways a more productive atmosphere for cutting-edge research than universities. But there was a consensus, both among universities and among corporations, that the division of labor should change. And the people who left for the universities very often had economic opportunities greater than those they would have had as senior scientific staff members: they had not only relatively light teaching loads, they also had opportunities for consulting and entrepreneurship that they wouldn't have had in the great laboratories.
So there was also a change in the attitude of top researchers: careers in these deep organizations were no longer as attractive as they had been. I still know a few people in those organizations who are doing really theoretical work that has no immediate or foreseeable impact on the company's profits, but both they and the company believe it's in their interest to stay around.
Philipp: So with that, bringing it back to some of the decisions boards face today, how we take some of that deep work but also think about investing in innovation that has an impact in the next two to three years: how can you advise boards, specifically when it comes to these unintended consequences? How do they think about second-order risks? How do you know whether it was a good or bad idea for Bell Labs, back then, to keep a researcher working on specific topics, to invest in that area, and the same today for any big corporation? How should boards anticipate those risks, and what type of exercises can they do, such as scenario planning, to make sure that moments like the Hindenburg and other tragic developments don't happen again?
Edward: The most important neglected risk for boards is reputational risk. What you absolutely don't want is outrage, and what creates crises for reputations is not only a disaster but something that really is appalling, something that looks totally preventable and disgraceful. In my seminar on disasters at Princeton, there is a unit, which we're on now, on space disasters. We're starting with Apollo 1, then next week we're talking about Challenger, and then Columbia. The Challenger was a tragic loss of life, as was Apollo 1, but Apollo 1 was a disaster among astronauts whose job it was to take extraordinary risks. With Challenger, there was an important innovation of having a teacher in space, and NASA and others had been promoting a normalization of space: that space travel is really safe, and we're all going to go into space; it'll be like transatlantic plane service. They were so much into that set of assumptions that they didn't realize there were still the same risks of new technologies. And the Challenger was as tragic as it was because there was a civilian who had not been fully informed about the risks she was taking, and a public that also had not been candidly informed about the risks of space. So Challenger is the subject of probably more books than any disaster since the Titanic, and for good reason, because of that outrage. So the first responsibility of a board is to look for the possibility of outrage and to take strict measures to avoid any situation that will lead to public condemnation, which can have a lasting impact on the brand.
Steve: And I'm curious, when you look around boards and the leaders on boards, what kind of leaders do boards need? Do they need more heroes? Do they need more systems thinkers? What do you think?
Edward: Boards need a certain kind of person. I have known many scientists, many engineers, many humanists. In each field, there is only a minority of people who have the kind of all-around balanced judgment that I needed when I was evaluating manuscripts or looking for potential authors. So for a board, it's necessary to look for people who have that mature, holistic judgment, wide reading, and temperament. It's very difficult to tell from what people write whether they would make a good board member. You really have to talk to somebody, you have to spend time with them, you have to see how well they understand the total picture of your organization and its challenges.
Philipp: So you provide a lot of education and insights that should be relevant for boards. I even wrote a paper based on your research, which will be published in a few weeks in NACD Directorship magazine. But you also teach a lot of young people, of students, and you speak at TED; I think you've given two or three TED talks. Can you share what the topics are and what young leaders are interested in? What are they curious about, what are they concerned about, and what are your responses, with your wealth of experience in this field?
Edward: What we deal with in this seminar is how many disasters are caused by extremely competent people leading very well-managed organizations, the Titanic and the White Star Line being the first of them, but there were many others. The question is how elites go wrong. Elites go wrong because they have been successful, but in being successful they've developed playbooks. They've developed an understanding of how things work, and they haven't always paid attention to what's been changing in the background, or to unusual circumstances. Take the US government's response to COVID-19: there was a playbook for how you dealt, for example, with rumors and with fringe ideas, and there was an assumption that you could just put pressure on other organizations to suppress them, which of course bit back in a spectacular way, but which was really preventable if you were following the way opinion and media were going at the time. These people, from Fauci downward, had amazing records of success; they were the world's leading people in the field. COVID-19, which we'll be studying later in the seminar, is a reminder that eminence is no protection against causing a disaster. In fact, eminent people so often help cause disasters, or respond poorly to them, precisely because they are in a position to have the greatest influence on the response. So the key thing is to step back and look at what has been changing, rather than to assume that your playbook is still going to be the solution.
Steve: And thank you for spending time with us and sharing your perspectives. Why don't we close with this: if you were advising the board of a Fortune 500 company today, what would be your first non-negotiable?

Edward: My non-negotiable would be to insist on a time horizon of at least 20 to 25 years, to focus on the long-term position of the corporation, its status in the industry, and especially its foundations in R&D.
Steve: I love it.

Edward: And, by the way, in avoiding outrage.
Steve: Absolutely. Thank you so much, Ed; it was such a pleasure to have you on the show.

Edward: Thank you, thank you.

Steve: Many of the failures Ed describes don't come from bad actors or weak organizations. They come from success, from confidence built on strong track records, from systems that once worked exceptionally well and quietly stopped adapting. For boards, that's a real risk: not just what you approve, but what you assume is already handled. Ed's advice is practical and timely: design for reversibility, protect deep expertise, and treat reputational risk as seriously as financial risk. If you're navigating AI, autonomy, or any of the complex systems where second-order effects matter as much as first-order gains, this is a perspective worth carrying into the boardroom. Ed, thank you for your insights, and as always, thanks for listening to Inside CVC. We'll see you next time.